Dataset fields for each record below: id, url, title, text, topic, section, sublist.
23363340
https://en.wikipedia.org/wiki/Huckleberry
Huckleberry
Huckleberry is a name used in North America for several plants in the family Ericaceae, in two closely related genera: Vaccinium and Gaylussacia. Nomenclature The name 'huckleberry' is a North American variation of the English dialectal name variously called 'hurtleberry' or 'whortleberry' () for the bilberry. In North America, the name was applied to numerous plant variations, all bearing small berries with colors that may be red, blue, or black. It is the common name for various Gaylussacia species, and some Vaccinium species, such as Vaccinium parvifolium, the red huckleberry, and is also applied to other Vaccinium species which may also be called blueberries depending upon local custom, as in New England and parts of Appalachia. Description The plant has shallow, radiating roots topped by a bush growing from an underground stem. The berries are small and round, in diameter, and look like large dark lowbush blueberries. Phytochemistry Two huckleberry species, V. membranaceum and V. ovatum, were studied for phytochemical content, showing that V. ovatum had greater total polyphenol and anthocyanin content than did V. membranaceum. Each species contained 15 anthocyanins (galactoside, glucoside, and arabinoside of delphinidin, cyanidin, petunidin, peonidin, and malvidin), but in different proportions. Taxonomy Gaylussacia Four species of huckleberries in the genus Gaylussacia are common in eastern North America, especially G. baccata, also known as the black huckleberry. Vaccinium From coastal Central California through Oregon to southern Washington and British Columbia, the red huckleberry (V. parvifolium) is found in the maritime-influenced plant community. In the Pacific Northwest and mountains of Montana and Idaho, this huckleberry species and several others, such as the black Vaccinium huckleberry (V. membranaceum) and blue (Cascade) huckleberry (V. deliciosum), grow in various habitats, such as mid-alpine regions up to above sea level, mountain slopes, forests, or lake basins. The plant grows best in damp, acidic soil having volcanic origin, attaining under optimal conditions heights of , usually ripening in mid-to-late summer or later at high elevations. Huckleberry was one of the few plant species to survive on the slopes of Mount St. Helens when the volcano erupted in 1980, and existed as a prominent mountain-slope bush in 2017. Where the climate is favorable, certain species of huckleberry, such as V. membranaceum, V. parvifolium and V. deliciosum, are used in ornamental plantings. The 'garden huckleberry' (Solanum scabrum) is not a true huckleberry, but is instead a member of the nightshade family. Distribution and habitat Huckleberry grows wild in northwestern United States and western Canada on subalpine slopes, forests, bogs, and lake basins. Uses Huckleberries were traditionally collected by Native American and First Nations people along the Pacific coast, interior British Columbia, Idaho, and Montana for use as food or traditional medicine. In taste, they may be tart or sweet. The fruit is versatile in foods or beverages, including jam, pudding, candy, pie, ice cream, muffins, pancakes, salad dressings, juice, tea, soup, and syrup. Attempts to cultivate huckleberry plants from seeds have failed, with plants devoid of fruits. This may be due to the inability of the plants to fully root and replicate the native soil chemistry of wild plants. In popular culture Huckleberries hold a place in archaic American English slang. 
The phrase "a huckleberry over my persimmon" was used to mean "a bit beyond my abilities". On the other hand, "I'm your huckleberry" is a way of expressing affection or that one is just the right person for a given role. The range of slang meanings of huckleberry in the 19th century was broad, also referring to significant or nice persons. The term can also be a slang expression for a rube or an amateur, or a mild expression of disapproval. Fictional characters including Huckleberry "Huck" Finn, from The Adventures of Tom Sawyer (1876) and Adventures of Huckleberry Finn (1884) by Mark Twain, and Huckleberry "Huck" Hound, an animated anthropomorphic Bluetick Coonhound created by Hanna-Barbera in 1958, have incorporated "huckleberry" into their names to indicate their rustic or insignificant nature. The huckleberry is the state fruit of Idaho and Montana. Country singer Toby Keith co-wrote a song with songwriter Chuck Cannon entitled "Huckleberry", about a primary school crush that turns into marriage later in life and they have three "little huckleberries" of their own, and is part of his album Unleashed (2002).
Biology and health sciences
Berries
Plants
23364086
https://en.wikipedia.org/wiki/Hertzsprung%E2%80%93Russell%20diagram
Hertzsprung–Russell diagram
The Hertzsprung–Russell diagram (abbreviated as H–R diagram, HR diagram or HRD) is a scatter plot of stars showing the relationship between the stars' absolute magnitudes or luminosities and their stellar classifications or effective temperatures. The diagram was created independently in 1911 by Ejnar Hertzsprung and by Henry Norris Russell in 1913, and represented a major step towards an understanding of stellar evolution. Historical background In the nineteenth century large-scale photographic spectroscopic surveys of stars were performed at Harvard College Observatory, producing spectral classifications for tens of thousands of stars, culminating ultimately in the Henry Draper Catalogue. In one segment of this work Antonia Maury included divisions of the stars by the width of their spectral lines. Hertzsprung noted that stars described with narrow lines tended to have smaller proper motions than the others of the same spectral classification. He took this as an indication of greater luminosity for the narrow-line stars, and computed secular parallaxes for several groups of these, allowing him to estimate their absolute magnitude. In 1910 Hans Oswald Rosenberg published a diagram plotting the apparent magnitude of stars in the Pleiades cluster against the strengths of the calcium K line and two hydrogen Balmer lines. These spectral lines serve as a proxy for the temperature of the star, an early form of spectral classification. The apparent magnitude of stars in the same cluster is equivalent to their absolute magnitude and so this early diagram was effectively a plot of luminosity against temperature. The same type of diagram is still used today as a means of showing the stars in clusters without having to initially know their distance and luminosity. Hertzsprung had already been working with this type of diagram, but his first publications showing it were not until 1911. This was also the form of the diagram using apparent magnitudes of a cluster of stars all at the same distance. Russell's early (1913) versions of the diagram included Maury's giant stars identified by Hertzsprung, those nearby stars with parallaxes measured at the time, stars from the Hyades (a nearby open cluster), and several moving groups, for which the moving cluster method could be used to derive distances and thereby obtain absolute magnitudes for those stars. Forms of diagram There are several forms of the Hertzsprung–Russell diagram, and the nomenclature is not very well defined. All forms share the same general layout: stars of greater luminosity are toward the top of the diagram, and stars with higher surface temperature are toward the left side of the diagram. The original diagram displayed the spectral type of stars on the horizontal axis and the absolute visual magnitude on the vertical axis. The spectral type is not a numerical quantity, but the sequence of spectral types is a monotonic series that reflects the stellar surface temperature. Modern observational versions of the chart replace spectral type by a color index (in diagrams made in the middle of the 20th Century, most often the B-V color) of the stars. This type of diagram is what is often called an observational Hertzsprung–Russell diagram, or specifically a color–magnitude diagram (CMD), and it is often used by observers. 
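A color–magnitude diagram of this observational kind is straightforward to construct from a table of color indices and magnitudes. The following minimal Python sketch uses matplotlib with a handful of made-up illustrative values (not real catalogue data); inverting the magnitude axis puts brighter stars at the top, and bluer, hotter stars sit on the left by construction:

```python
import matplotlib.pyplot as plt

# Illustrative placeholder sample: (B-V color index, absolute visual magnitude M_V)
stars = [(-0.2, -1.1), (0.0, 0.7), (0.3, 2.6), (0.65, 4.8), (1.0, 6.4), (1.4, 8.0)]

b_v, m_v = zip(*stars)
plt.scatter(b_v, m_v, s=12)
plt.xlabel("B-V color index (bluer, hotter stars to the left)")
plt.ylabel("Absolute visual magnitude M_V")
plt.gca().invert_yaxis()  # smaller magnitude = brighter, so bright stars plot at the top
plt.title("Observational H-R diagram (color-magnitude diagram)")
plt.show()
```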
In cases where the stars are known to be at identical distances such as within a star cluster, a color–magnitude diagram is often used to describe the stars of the cluster with a plot in which the vertical axis is the apparent magnitude of the stars. For cluster members, by assumption there is a single additive constant difference between their apparent and absolute magnitudes, called the distance modulus, for all of that cluster of stars. Early studies of nearby open clusters (like the Hyades and Pleiades) by Hertzsprung and Rosenberg produced the first CMDs, a few years before Russell's influential synthesis of the diagram collecting data for all stars for which absolute magnitudes could be determined. Another form of the diagram plots the effective surface temperature of the star on one axis and the luminosity of the star on the other, almost invariably in a log-log plot. Theoretical calculations of stellar structure and the evolution of stars produce plots that match those from observations. This type of diagram could be called temperature-luminosity diagram, but this term is hardly ever used; when the distinction is made, this form is called the theoretical Hertzsprung–Russell diagram instead. A peculiar characteristic of this form of the H–R diagram is that the temperatures are plotted from high temperature to low temperature, which aids in comparing this form of the H–R diagram with the observational form. Although the two types of diagrams are similar, astronomers make a sharp distinction between the two. The reason for this distinction is that the exact transformation from one to the other is not trivial. To go between effective temperature and color requires a color–temperature relation, and constructing that is difficult; it is known to be a function of stellar composition and can be affected by other factors like stellar rotation. When converting luminosity or absolute bolometric magnitude to apparent or absolute visual magnitude, one requires a bolometric correction, which may or may not come from the same source as the color–temperature relation. One also needs to know the distance to the observed objects (i.e., the distance modulus) and the effects of interstellar obscuration, both in the color (reddening) and in the apparent magnitude (where the effect is called "extinction"). Color distortion (including reddening) and extinction (obscuration) are also apparent in stars having significant circumstellar dust. The ideal of direct comparison of theoretical predictions of stellar evolution to observations thus has additional uncertainties incurred in the conversions between theoretical quantities and observations. Interpretation Most of the stars occupy the region in the diagram along the line called the main sequence. During the stage of their lives in which stars are found on the main sequence line, they are fusing hydrogen in their cores. The next concentration of stars is on the horizontal branch (helium fusion in the core and hydrogen burning in a shell surrounding the core). Another prominent feature is the Hertzsprung gap located in the region between A5 and G0 spectral type and between +1 and −3 absolute magnitudes (i.e., between the top of the main sequence and the giants in the horizontal branch). RR Lyrae variable stars can be found in the left of this gap on a section of the diagram called the instability strip. Cepheid variables also fall on the instability strip, at higher luminosities. 
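The conversions discussed above rest on two textbook relations, stated here for reference rather than taken from the article text: the distance modulus linking apparent and absolute magnitude, and the Stefan–Boltzmann law, which ties luminosity, radius, and effective temperature together in the theoretical diagram.

```latex
% Distance modulus: the constant offset between apparent (m) and absolute (M) magnitude
% fixes the distance d in parsecs
\mu = m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)

% Theoretical diagram: luminosity, radius and effective temperature are linked, so lines
% of constant radius are straight lines in the log L -- log T_eff plane
L = 4 \pi R^{2} \sigma T_{\mathrm{eff}}^{4}
\qquad\Longrightarrow\qquad
\log L = 2 \log R + 4 \log T_{\mathrm{eff}} + \text{const.}
```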
The H-R diagram can be used by scientists to roughly measure how far away a star cluster or galaxy is from Earth. This can be done by comparing the apparent magnitudes of the stars in the cluster to the absolute magnitudes of stars with known distances (or of model stars). The observed group is then shifted in the vertical direction, until the two main sequences overlap. The difference in magnitude that was bridged in order to match the two groups is called the distance modulus and is a direct measure for the distance (ignoring extinction). This technique is known as main sequence fitting and is a type of spectroscopic parallax. Not only the turn-off in the main sequence can be used, but also the tip of the red giant branch stars. The diagram seen by ESA's Gaia mission ESA's Gaia mission showed several features in the diagram that were either not known or that were suspected to exist. It found a gap in the main sequence that appears for M-dwarfs and that is explained with the transition from a partly convective core to a fully convective core. For white dwarfs the diagram shows several features. Two main concentrations appear in this diagram following the cooling sequence of white dwarfs that are explained with the atmospheric composition of white dwarfs, especially hydrogen versus helium dominated atmospheres of white dwarfs. A third concentration is explained with core crystallization of the white dwarfs interior. This releases energy and delays the cooling of white dwarfs. Role in the development of stellar physics Contemplation of the diagram led astronomers to speculate that it might demonstrate stellar evolution, the main suggestion being that stars collapsed from red giants to dwarf stars, then moving down along the line of the main sequence in the course of their lifetimes. Stars were thought therefore to radiate energy by converting gravitational energy into radiation through the Kelvin–Helmholtz mechanism. This mechanism resulted in an age for the Sun of only tens of millions of years, creating a conflict over the age of the Solar System between astronomers, and biologists and geologists who had evidence that the Earth was far older than that. This conflict was only resolved in the 1930s when nuclear fusion was identified as the source of stellar energy. Following Russell's presentation of the diagram to a meeting of the Royal Astronomical Society in 1912, Arthur Eddington was inspired to use it as a basis for developing ideas on stellar physics. In 1926, in his book The Internal Constitution of the Stars he explained the physics of how stars fit on the diagram. The paper anticipated the later discovery of nuclear fusion and correctly proposed that the star's source of power was the combination of hydrogen into helium, liberating enormous energy. This was a particularly remarkable intuitive leap, since at that time the source of a star's energy was still unknown, thermonuclear energy had not been proven to exist, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered. Eddington managed to sidestep this problem by concentrating on the thermodynamics of radiative transport of energy in stellar interiors. Eddington predicted that dwarf stars remain in an essentially static position on the main sequence for most of their lives. 
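The main-sequence fitting technique described earlier in this passage reduces to finding the vertical magnitude shift that best overlaps the cluster's main sequence with a reference main sequence of known absolute magnitudes, then converting that shift (the distance modulus) into a distance. A rough sketch, assuming both sequences are already sampled at the same color indices and ignoring extinction (the function name and numbers are illustrative only):

```python
import numpy as np

def main_sequence_fit(apparent_mag, reference_abs_mag):
    """Estimate a cluster's distance (in parsecs) by main-sequence fitting.

    apparent_mag and reference_abs_mag hold magnitudes of main-sequence stars
    sampled at matching color indices; interstellar extinction is ignored here.
    """
    apparent_mag = np.asarray(apparent_mag, dtype=float)
    reference_abs_mag = np.asarray(reference_abs_mag, dtype=float)
    # Vertical shift that best aligns the two sequences (least-squares mean offset)
    distance_modulus = float(np.mean(apparent_mag - reference_abs_mag))
    # mu = m - M = 5 log10(d / 10 pc)  =>  d = 10^(mu/5 + 1) parsecs
    distance_pc = 10 ** (distance_modulus / 5 + 1)
    return distance_modulus, distance_pc

# Illustrative values: a uniform shift of 7.5 mag corresponds to roughly 320 pc
mu, d = main_sequence_fit([12.0, 13.1, 14.3], [4.5, 5.6, 6.8])
print(f"distance modulus = {mu:.2f} mag, distance = {d:.0f} pc")
```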
In the 1930s and 1940s, with an understanding of hydrogen fusion, came an evidence-backed theory of evolution to red giants, followed by speculation about the explosion and implosion of the remnants into white dwarfs. The term supernova nucleosynthesis is used to describe the creation of elements during the evolution and explosion of a pre-supernova star, a concept put forth by Fred Hoyle in 1954. Quantum-mechanical and classical-mechanical models of stellar processes enable the Hertzsprung–Russell diagram to be annotated with the known conventional paths called stellar sequences; rarer and more anomalous examples continue to be added as more stars are analysed and the mathematical models refined.
Physical sciences
Stellar astronomy
null
23366462
https://en.wikipedia.org/wiki/Insect
Insect
Insects (from Latin insectum) are hexapod invertebrates of the class Insecta. They are the largest group within the arthropod phylum. Insects have a chitinous exoskeleton, a three-part body (head, thorax and abdomen), three pairs of jointed legs, compound eyes, and a pair of antennae. Insects are the most diverse group of animals, with more than a million described species; they represent more than half of all animal species. The insect nervous system consists of a brain and a ventral nerve cord. Most insects reproduce by laying eggs. Insects breathe air through a system of paired openings along their sides, connected to small tubes that take air directly to the tissues. The blood therefore does not carry oxygen; it is only partly contained in vessels, and some circulates in an open hemocoel. Insect vision is mainly through their compound eyes, with additional small ocelli. Many insects can hear, using tympanal organs, which may be on the legs or other parts of the body. Their sense of smell is via receptors, usually on the antennae and the mouthparts. Nearly all insects hatch from eggs. Insect growth is constrained by the inelastic exoskeleton, so development involves a series of molts. The immature stages often differ from the adults in structure, habit and habitat. Groups that undergo four-stage metamorphosis often have a nearly immobile pupa. Insects that undergo three-stage metamorphosis lack a pupa, developing through a series of increasingly adult-like nymphal stages. The higher level relationship of the insects is unclear. Fossilized insects of enormous size have been found from the Paleozoic Era, including giant dragonfly-like insects with wingspans of . The most diverse insect groups appear to have coevolved with flowering plants. Adult insects typically move about by walking and flying; some can swim. Insects are the only invertebrates that can achieve sustained powered flight; insect flight evolved just once. Many insects are at least partly aquatic, and have larvae with gills; in some species, the adults too are aquatic. Some species, such as water striders, can walk on the surface of water. Insects are mostly solitary, but some, such as bees, ants and termites, are social and live in large, well-organized colonies. Others, such as earwigs, provide maternal care, guarding their eggs and young. Insects can communicate with each other in a variety of ways. Male moths can sense the pheromones of female moths over great distances. Other species communicate with sounds: crickets stridulate, or rub their wings together, to attract a mate and repel other males. Lampyrid beetles communicate with light. Humans regard many insects as pests, especially those that damage crops, and attempt to control them using insecticides and other techniques. Others are parasitic, and may act as vectors of diseases. Insect pollinators are essential to the reproduction of many flowering plants and so to their ecosystems. Many insects are ecologically beneficial as predators of pest insects, while a few provide direct economic benefit. Two species in particular are economically important and were domesticated many centuries ago: silkworms for silk and honey bees for honey. Insects are consumed as food in 80% of the world's nations, by people in roughly 3,000 ethnic groups. Human activities are having serious effects on insect biodiversity. Etymology The word insect comes from the Latin word insectum, from insectare, "cut up", as insects appear to be cut into three parts. 
The Latin word was introduced by Pliny the Elder who calqued the Ancient Greek word éntomon "insect" (as in entomology) from éntomos "cut in pieces"; this was Aristotle's term for this class of life in his biology, also in reference to their notched bodies. The English word insect first appears in 1601 in Philemon Holland's translation of Pliny. Insects and other bugs Distinguishing features In common speech, insects and other terrestrial arthropods are often called bugs. Entomologists to some extent reserve the name "bugs" for a narrow category of "true bugs", insects of the order Hemiptera, such as cicadas and shield bugs. Other terrestrial arthropods, such as centipedes, millipedes, woodlice, spiders, mites and scorpions, are sometimes confused with insects, since they have a jointed exoskeleton. Adult insects are the only arthropods that ever have wings, with up to two pairs on the thorax. Whether winged or not, adult insects can be distinguished by their three-part body plan, with head, thorax, and abdomen; they have three pairs of legs on the thorax. Diversity Estimates of the total number of insect species vary considerably, suggesting that there are perhaps some 5.5 million insect species in existence, of which about one million have been described and named. These constitute around half of all eukaryote species, including animals, plants, and fungi. The most diverse insect orders are the Hemiptera (true bugs), Lepidoptera (butterflies and moths), Diptera (true flies), Hymenoptera (wasps, ants, and bees), and Coleoptera (beetles), each with more than 100,000 described species. Distribution and habitats Insects are distributed over every continent and almost every terrestrial habitat. There are many more species in the tropics, especially in rainforests, than in temperate zones. The world's regions have received widely differing amounts of attention from entomologists. The British Isles have been thoroughly surveyed, so that Gullan and Cranston 2014 state that the total of around 22,500 species is probably within 5% of the actual number there; they comment that Canada's list of 30,000 described species is surely over half of the actual total. They add that the 3,000 species of the American Arctic must be broadly accurate. In contrast, a large majority of the insect species of the tropics and the southern hemisphere are probably undescribed. Some 30–40,000 species inhabit freshwater; very few insects, perhaps a hundred species, are marine. Insects such as snow scorpionflies flourish in cold habitats including the Arctic and at high altitude. Insects such as desert locusts, ants, beetles, and termites are adapted to some of the hottest and driest environments on earth, such as the Sonoran Desert. Phylogeny and evolution External phylogeny Insects form a clade, a natural group with a common ancestor, among the arthropods. A phylogenetic analysis by Kjer et al. (2016) places the insects among the Hexapoda, six-legged animals with segmented bodies; their closest relatives are the Diplura (bristletails). Internal phylogeny The internal phylogeny is based on the works of Wipfler et al. 2019 for the Polyneoptera, Johnson et al. 2018 for the Paraneoptera, and Kjer et al. 2016 for the Holometabola. The numbers of described extant species (boldface for groups with over 100,000 species) are from Stork 2018. Taxonomy Early Aristotle was the first to describe the insects as a distinct group. 
He placed them as the second-lowest level of animals on his scala naturae, above the spontaneously generating sponges and worms, but below the hard-shelled marine snails. His classification remained in use for many centuries. In 1758, in his Systema Naturae, Carl Linnaeus divided the animal kingdom into six classes including Insecta. He created seven orders of insect according to the structure of their wings. These were the wingless Aptera, the two-winged Diptera, and five four-winged orders: the Coleoptera with fully-hardened forewings; the Hemiptera with partly-hardened forewings; the Lepidoptera with scaly wings; the Neuroptera with membranous wings but no sting; and the Hymenoptera, with membranous wings and a sting. Jean-Baptiste de Lamarck, in his 1809 Philosophie Zoologique, treated the insects as one of nine invertebrate phyla. In his 1817 Le Règne Animal, Georges Cuvier grouped all animals into four embranchements ("branches" with different body plans), one of which was the articulated animals, containing arthropods and annelids. This arrangement was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms, one of which was Metazoa for the multicellular animals. It had five phyla, including the articulates. Modern Traditional morphology-based systematics have usually given the Hexapoda the rank of superclass, and identified four groups within it: insects (Ectognatha), Collembola, Protura, and Diplura, the latter three being grouped together as the Entognatha on the basis of internalized mouth parts. The use of phylogenetic data has brought about numerous changes in relationships above the level of orders. Insects can be divided into two groups historically treated as subclasses: wingless insects or Apterygota, and winged insects or Pterygota. The Apterygota traditionally consisted of the primitively wingless orders Archaeognatha (jumping bristletails) and Zygentoma (silverfish). However, Apterygota is not monophyletic, as Archaeognatha are sister to all other insects, based on the arrangement of their mandibles, while the Pterygota, the winged insects, emerged from within the Dicondylia, alongside the Zygentoma. The Pterygota (Palaeoptera and Neoptera) are winged and have hardened plates on the outside of their body segments; the Neoptera have muscles that allow their wings to fold flat over the abdomen. Neoptera can be divided into groups with incomplete metamorphosis (Polyneoptera and Paraneoptera) and those with complete metamorphosis (Holometabola). The molecular finding that the traditional louse orders Mallophaga and Anoplura are within Psocoptera has led to the new taxon Psocodea. Phasmatodea and Embiidina have been suggested to form the Eukinolabia. Mantodea, Blattodea, and Isoptera form a monophyletic group, Dictyoptera. Fleas are now thought to be closely related to boreid mecopterans. Evolutionary history The oldest fossil that may be a primitive wingless insect is Leverhulmia from the Early Devonian Windyfield chert. The oldest known flying insects are from the mid-Carboniferous, around 328–324 million years ago. The group subsequently underwent a rapid explosive diversification. Claims that they originated substantially earlier, during the Silurian or Devonian (some 400 million years ago) based on molecular clock estimates, are unlikely to be correct, given the fossil record. 
Four large-scale radiations of insects have occurred: beetles (from about 300 million years ago), flies (from about 250 million years ago), moths and wasps (both from about 150 million years ago). The remarkably successful Hymenoptera (wasps, bees, and ants) appeared some 200 million years ago in the Triassic period, but achieved their wide diversity more recently in the Cenozoic era, which began 66 million years ago. Some highly successful insect groups evolved in conjunction with flowering plants, a powerful illustration of coevolution. Insects were among the earliest terrestrial herbivores and acted as major selection agents on plants. Plants evolved chemical defenses against this herbivory and the insects, in turn, evolved mechanisms to deal with plant toxins. Many insects make use of these toxins to protect themselves from their predators. Such insects often advertise their toxicity using warning colors. Morphology and physiology External Three-part body Insects have a segmented body supported by an exoskeleton, the hard outer covering made mostly of chitin. The body is organized into three interconnected units: the head, thorax and abdomen. The head supports a pair of sensory antennae, a pair of compound eyes, zero to three simple eyes (or ocelli) and three sets of variously modified appendages that form the mouthparts. The thorax carries the three pairs of legs and up to two pairs of wings. The abdomen contains most of the digestive, respiratory, excretory and reproductive structures. Segmentation The head is enclosed in a hard, heavily sclerotized, unsegmented head capsule, which contains most of the sensing organs, including the antennae, compound eyes, ocelli, and mouthparts. The thorax is composed of three sections named (from front to back) the prothorax, mesothorax and metathorax. The prothorax carries the first pair of legs. The mesothorax carries the second pair of legs and the front wings. The metathorax carries the third pair of legs and the hind wings. The abdomen is the largest part of the insect, typically with 11–12 segments, and is less strongly sclerotized than the head or thorax. Each segment of the abdomen has sclerotized upper and lower plates (the tergum and sternum), connected to adjacent sclerotized parts by membranes. Each segment carries a pair of spiracles. Exoskeleton The outer skeleton, the cuticle, is made up of two layers: the epicuticle, a thin and waxy water-resistant outer layer without chitin, and a lower layer, the thick chitinous procuticle. The procuticle has two layers: an outer exocuticle and an inner endocuticle. The tough and flexible endocuticle is built from numerous layers of fibrous chitin and proteins, criss-crossing each other in a sandwich pattern, while the exocuticle is rigid and sclerotized. As an adaptation to life on land, insects have an enzyme that uses atmospheric oxygen to harden their cuticle, unlike crustaceans which use heavy calcium compounds for the same purpose. This makes the insect exoskeleton a lightweight material. Internal systems Nervous The nervous system of an insect consists of a brain and a ventral nerve cord. The head capsule is made up of six fused segments, each with either a pair of ganglia, or a cluster of nerve cells outside of the brain. The first three pairs of ganglia are fused into the brain, while the three following pairs are fused into a structure of three pairs of ganglia under the insect's esophagus, called the subesophageal ganglion. 
The thoracic segments have one ganglion on each side, connected into a pair per segment. This arrangement is also seen in the first eight segments of the abdomen. Many insects have fewer ganglia than this. Insects are capable of learning. Digestive An insect uses its digestive system to extract nutrients and other substances from the food it consumes. There is extensive variation among different orders, life stages, and even castes in the digestive system of insects. The gut runs lengthwise through the body. It has three sections, with paired salivary glands and salivary reservoirs. By moving its mouthparts the insect mixes its food with saliva. Some insects, like flies, expel digestive enzymes onto their food to break it down, but most insects digest their food in the gut. The foregut is lined with cuticule as protection from tough food. It includes the mouth, pharynx, and crop which stores food. Digestion starts in the mouth with enzymes in the saliva. Strong muscles in the pharynx pump fluid into the mouth, lubricating the food, and enabling certain insects to feed on blood or from the xylem and phloem transport vessels of plants. Once food leaves the crop, it passes to the midgut, where the majority of digestion takes place. Microscopic projections, microvilli, increase the surface area of the wall to absorb nutrients. In the hindgut, undigested food particles are joined by uric acid to form fecal pellets; most of the water is absorbed, leaving a dry pellet to be eliminated. Insects may have one to hundreds of Malpighian tubules. These remove nitrogenous wastes from the hemolymph of the insect and regulate osmotic balance. Wastes and solutes are emptied directly into the alimentary canal, at the junction between the midgut and hindgut. Reproductive The reproductive system of female insects consist of a pair of ovaries, accessory glands, one or more spermathecae to store sperm, and ducts connecting these parts. The ovaries are made up of a variable number of egg tubes, ovarioles. Female insects make eggs, receive and store sperm, manipulate sperm from different males, and lay eggs. Accessory glands produce substances to maintain sperm and to protect the eggs. They can produce glue and protective substances for coating eggs, or tough coverings for a batch of eggs called oothecae. For males, the reproductive system consists of one or two testes, suspended in the body cavity by tracheae. The testes contain sperm tubes or follicles in a membranous sac. These connect to a duct that leads to the outside. The terminal portion of the duct may be sclerotized to form the intromittent organ, the aedeagus. Respiratory Insect respiration is accomplished without lungs. Instead, insects have a system of internal tubes and sacs through which gases either diffuse or are actively pumped, delivering oxygen directly to tissues that need it via their tracheae and tracheoles. In most insects, air is taken in through paired spiracles, openings on the sides of the abdomen and thorax. The respiratory system limits the size of insects. As insects get larger, gas exchange via spiracles becomes less efficient, and thus the heaviest insect currently weighs less than 100 g. However, with increased atmospheric oxygen levels, as were present in the late Paleozoic, larger insects were possible, such as dragonflies with wingspans of more than . Gas exchange patterns in insects range from continuous and diffusive ventilation, to discontinuous. 
Circulatory Because oxygen is delivered directly to tissues via tracheoles, the circulatory system is not used to carry oxygen, and is therefore greatly reduced. The insect circulatory system is open; it has no veins or arteries, and instead consists of little more than a single, perforated dorsal tube that pulses peristaltically. This dorsal blood vessel is divided into two sections: the heart and aorta. The dorsal blood vessel circulates the hemolymph, arthropods' fluid analog of blood, from the rear of the body cavity forward. Hemolymph is composed of plasma in which hemocytes are suspended. Nutrients, hormones, wastes, and other substances are transported throughout the insect body in the hemolymph. Hemocytes include many types of cells that are important for immune responses, wound healing, and other functions. Hemolymph pressure may be increased by muscle contractions or by swallowing air into the digestive system to aid in molting. Sensory Many insects possess numerous specialized sensory organs able to detect stimuli including limb position (proprioception) by campaniform sensilla, light, water, chemicals (senses of taste and smell), sound, and heat. Some insects such as bees can perceive ultraviolet wavelengths, or detect polarized light, while the antennae of male moths can detect the pheromones of female moths over distances of over a kilometer. There is a trade-off between visual acuity and chemical or tactile acuity, such that most insects with well-developed eyes have reduced or simple antennae, and vice versa. Insects perceive sound by different mechanisms, such as thin vibrating membranes (tympana). Insects were the earliest organisms to produce and sense sounds. Hearing has evolved independently at least 19 times in different insect groups. Most insects, except some cave crickets, are able to perceive light and dark. Many have acute vision capable of detecting small and rapid movements. The eyes may include simple eyes or ocelli as well as larger compound eyes. Many species can detect light in the infrared, ultraviolet and visible light wavelengths, with color vision. Phylogenetic analysis suggests that UV-green-blue trichromacy existed from at least the Devonian period, some 400 million years ago. The individual lenses in compound eyes are immobile, but fruit flies have photoreceptor cells underneath each lens which move rapidly in and out of focus, in a series of movements called photoreceptor microsaccades. This gives them, and possibly many other insects, a much clearer image of the world than previously assumed. An insect's sense of smell is via chemical receptors, usually on the antennae and the mouthparts. These detect both airborne volatile compounds and odorants on surfaces, including pheromones from other insects and compounds released by food plants. Insects use olfaction to locate mating partners, food, and places to lay eggs, and to avoid predators. It is thus an extremely important sense, enabling insects to discriminate between thousands of volatile compounds. Some insects are capable of magnetoreception; ants and bees navigate using it both locally (near their nests) and when migrating. The Brazilian stingless bee detects magnetic fields using the hair-like sensilla on its antennae. Reproduction and development Life-cycles The majority of insects hatch from eggs. The fertilization and development takes place inside the egg, enclosed by a shell (chorion) that consists of maternal tissue. 
In contrast to eggs of other arthropods, most insect eggs are drought resistant. This is because inside the chorion two additional membranes develop from embryonic tissue, the amnion and the serosa. This serosa secretes a cuticle rich in chitin that protects the embryo against desiccation. Some species of insects, like aphids and tsetse flies, are ovoviviparous: their eggs develop entirely inside the female, and then hatch immediately upon being laid. Some other species, such as in the cockroach genus Diploptera, are viviparous, gestating inside the mother and born alive. Some insects, like parasitoid wasps, are polyembryonic, meaning that a single fertilized egg divides into many separate embryos. Insects may be univoltine, bivoltine or multivoltine, having one, two or many broods in a year. Other developmental and reproductive variations include haplodiploidy, polymorphism, paedomorphosis or peramorphosis, sexual dimorphism, parthenogenesis, and more rarely hermaphroditism. In haplodiploidy, which is a type of sex-determination system, the offspring's sex is determined by the number of sets of chromosomes an individual receives. This system is typical in bees and wasps. Some insects are parthenogenetic, meaning that the female can reproduce and give birth without having the eggs fertilized by a male. Many aphids undergo a cyclical form of parthenogenesis in which they alternate between one or many generations of asexual and sexual reproduction. In summer, aphids are generally female and parthenogenetic; in the autumn, males may be produced for sexual reproduction. Other insects produced by parthenogenesis are bees, wasps and ants; in their haplodiploid system, diploid females spawn many females and a few haploid males. Metamorphosis Metamorphosis in insects is the process of development that converts young to adults. There are two forms of metamorphosis: incomplete and complete. Incomplete Hemimetabolous insects, those with incomplete metamorphosis, change gradually after hatching from the egg by undergoing a series of molts through stages called instars, until the final, adult, stage is reached. An insect molts when it outgrows its exoskeleton, which does not stretch and would otherwise restrict the insect's growth. The molting process begins as the insect's epidermis secretes a new epicuticle inside the old one. After this new epicuticle is secreted, the epidermis releases a mixture of enzymes that digests the endocuticle and thus detaches the old cuticle. When this stage is complete, the insect makes its body swell by taking in a large quantity of water or air; this makes the old cuticle split along predefined weaknesses where it was thinnest. Complete Holometabolism, or complete metamorphosis, is where the insect changes in four stages, an egg or embryo, a larva, a pupa and the adult or imago. In these species, an egg hatches to produce a larva, which is generally worm-like in form. This can be eruciform (caterpillar-like), scarabaeiform (grub-like), campodeiform (elongated, flattened and active), elateriform (wireworm-like) or vermiform (maggot-like). The larva grows and eventually becomes a pupa, a stage marked by reduced movement. There are three types of pupae: obtect, exarate or coarctate. Obtect pupae are compact, with the legs and other appendages enclosed. Exarate pupae have their legs and other appendages free and extended. Coarctate pupae develop inside the larval skin. Insects undergo considerable change in form during the pupal stage, and emerge as adults. 
Butterflies are well-known for undergoing complete metamorphosis; most insects use this life cycle. Some insects have evolved this system to hypermetamorphosis. Complete metamorphosis is a trait of the most diverse insect group, the Endopterygota. Communication Insects that produce sound can generally hear it. Most insects can hear only a narrow range of frequencies related to the frequency of the sounds they can produce. Mosquitoes can hear up to 2 kilohertz. Certain predatory and parasitic insects can detect the characteristic sounds made by their prey or hosts, respectively. Likewise, some nocturnal moths can perceive the ultrasonic emissions of bats, which helps them avoid predation. Light production A few insects, such as Mycetophilidae (Diptera) and the beetle families Lampyridae, Phengodidae, Elateridae and Staphylinidae are bioluminescent. The most familiar group are the fireflies, beetles of the family Lampyridae. Some species are able to control this light generation to produce flashes. The function varies with some species using them to attract mates, while others use them to lure prey. Cave dwelling larvae of Arachnocampa (Mycetophilidae, fungus gnats) glow to lure small flying insects into sticky strands of silk. Some fireflies of the genus Photuris mimic the flashing of female Photinus species to attract males of that species, which are then captured and devoured. The colors of emitted light vary from dull blue (Orfelia fultoni, Mycetophilidae) to the familiar greens and the rare reds (Phrixothrix tiemanni, Phengodidae). Sound production Insects make sounds mostly by mechanical action of appendages. In grasshoppers and crickets, this is achieved by stridulation. Cicadas make the loudest sounds among the insects by producing and amplifying sounds with special modifications to their body to form tymbals and associated musculature. The African cicada Brevisana brevis has been measured at 106.7 decibels at a distance of . Some insects, such as the Helicoverpa zea moths, hawk moths and Hedylid butterflies, can hear ultrasound and take evasive action when they sense that they have been detected by bats. Some moths produce ultrasonic clicks that warn predatory bats of their unpalatability (acoustic aposematism), while some palatable moths have evolved to mimic these calls (acoustic Batesian mimicry). The claim that some moths can jam bat sonar has been revisited. Ultrasonic recording and high-speed infrared videography of bat-moth interactions suggest the palatable tiger moth really does defend against attacking big brown bats using ultrasonic clicks that jam bat sonar. Very low sounds are produced in various species of Coleoptera, Hymenoptera, Lepidoptera, Mantodea and Neuroptera. These low sounds are produced by the insect's movement, amplified by stridulatory structures on the insect's muscles and joints; these sounds can be used to warn or communicate with other insects. Most sound-making insects also have tympanal organs that can perceive airborne sounds. Some hemipterans, such as the water boatmen, communicate via underwater sounds. Communication using surface-borne vibrational signals is more widespread among insects because of size constraints in producing air-borne sounds. Insects cannot effectively produce low-frequency sounds, and high-frequency sounds tend to disperse more in a dense environment (such as foliage), so insects living in such environments communicate primarily using substrate-borne vibrations. 
Some species use vibrations for communicating, such as to attract mates as in the songs of the shield bug Nezara viridula. Vibrations can also be used to communicate between species; lycaenid caterpillars, which form a mutualistic association with ants, communicate with ants in this way. The Madagascar hissing cockroach has the ability to press air through its spiracles to make a hissing noise as a sign of aggression; the death's-head hawkmoth makes a squeaking noise by forcing air out of its pharynx when agitated, which may also reduce aggressive worker honey bee behavior when the two are close. Chemical communication Many insects have evolved chemical means for communication. These semiochemicals are often derived from plant metabolites including those meant to attract, repel and provide other kinds of information. Pheromones are used for attracting mates of the opposite sex, for aggregating conspecific individuals of both sexes, for deterring other individuals from approaching, to mark a trail, and to trigger aggression in nearby individuals. Allomones benefit their producer by the effect they have upon the receiver. Kairomones benefit their receiver instead of their producer. Synomones benefit the producer and the receiver. While some chemicals are targeted at individuals of the same species, others are used for communication across species. The use of scents is especially well-developed in social insects. Cuticular hydrocarbons are nonstructural materials produced and secreted to the cuticle surface to fight desiccation and pathogens. They are important, too, as pheromones, especially in social insects. Social behavior Social insects, such as termites, ants and many bees and wasps, are eusocial. They live together in such large well-organized colonies of genetically similar individuals that they are sometimes considered superorganisms. In particular, reproduction is largely limited to a queen caste; other females are workers, prevented from reproducing by worker policing. Honey bees have evolved a system of abstract symbolic communication where a behavior is used to represent and convey specific information about the environment. In this communication system, called dance language, the angle at which a bee dances represents a direction relative to the sun, and the length of the dance represents the distance to be flown. Bumblebees too have some social communication behaviors. Bombus terrestris, for example, more rapidly learns about visiting unfamiliar, yet rewarding flowers, when they can see a conspecific foraging on the same species. Only insects that live in nests or colonies possess fine-scale spatial orientation. Some can navigate unerringly to a single hole a few millimeters in diameter among thousands of similar holes, after a trip of several kilometers. In philopatry, insects that hibernate are able to recall a specific location up to a year after last viewing the area of interest. A few insects seasonally migrate large distances between different geographic regions, as in the continent-wide monarch butterfly migration. Care of young Eusocial insects build nests, guard eggs, and provide food for offspring full-time. Most insects, however, lead short lives as adults, and rarely interact with one another except to mate or compete for mates. A small number provide parental care, where they at least guard their eggs, and sometimes guard their offspring until adulthood, possibly even feeding them. 
Many wasps and bees construct a nest or burrow, store provisions in it, and lay an egg upon those provisions, providing no further care. Locomotion Flight Insects are the only group of invertebrates to have developed flight. The ancient groups of insects in the Palaeoptera, the dragonflies, damselflies and mayflies, operate their wings directly by paired muscles attached to points on each wing base that raise and lower them. This can only be done at a relatively slow rate. All other insects, the Neoptera, have indirect flight, in which the flight muscles cause rapid oscillation of the thorax: there can be more wingbeats than nerve impulses commanding the muscles. One pair of flight muscles is aligned vertically, contracting to pull the top of the thorax down, and the wings up. The other pair runs longitudinally, contracting to force the top of the thorax up and the wings down. Most insects gain aerodynamic lift by creating a spiralling vortex at the leading edge of the wings. Small insects like thrips with tiny feathery wings gain lift using the clap and fling mechanism; the wings are clapped together and pulled apart, flinging vortices into the air at the leading edges and at the wingtips. The evolution of insect wings has been a subject of debate; it has been suggested they came from modified gills, flaps on the spiracles, or an appendage, the epicoxa, at the base of the legs. More recently, entomologists have favored evolution of wings from lobes of the notum, of the pleuron, or more likely both. In the Carboniferous age, the dragonfly-like Meganeura had as much as a wide wingspan. The appearance of gigantic insects is consistent with high atmospheric oxygen at that time, as the respiratory system of insects constrains their size. The largest flying insects today are much smaller, with the largest wingspan belonging to the white witch moth (Thysania agrippina), at approximately . Unlike birds, small insects are swept along by the prevailing winds although many larger insects migrate. Aphids are transported long distances by low-level jet streams. Walking Many adult insects use six legs for walking, with an alternating tripod gait. This allows for rapid walking with a stable stance; it has been studied extensively in cockroaches and ants. For the first step, the middle right leg and the front and rear left legs are in contact with the ground and move the insect forward, while the front and rear right leg and the middle left leg are lifted and moved forward to a new position. When they touch the ground to form a new stable triangle, the other legs can be lifted and brought forward in turn. The purest form of the tripedal gait is seen in insects moving at high speeds. However, this type of locomotion is not rigid and insects can adapt a variety of gaits. For example, when moving slowly, turning, avoiding obstacles, climbing or slippery surfaces, four (tetrapodal) or more feet (wave-gait) may be touching the ground. Cockroaches are among the fastest insect runners and, at full speed, adopt a bipedal run. More sedate locomotion is seen in the well-camouflaged stick insects (Phasmatodea). A small number of species such as Water striders can move on the surface of water; their claws are recessed in a special groove, preventing the claws from piercing the water's surface film. The ocean-skaters in the genus Halobates even live on the surface of open oceans, a habitat that has few insect species. Swimming A large number of insects live either part or the whole of their lives underwater. 
In many of the more primitive orders of insect, the immature stages are aquatic. In some groups, such as water beetles, the adults too are aquatic. Many of these species are adapted for under-water locomotion. Water beetles and water bugs have legs adapted into paddle-like structures. Dragonfly naiads use jet propulsion, forcibly expelling water out of their rectal chamber. Other insects such as the rove beetle Stenus emit pygidial gland surfactant secretions that reduce surface tension; this enables them to move on the surface of water by Marangoni propulsion. Ecology Insects play many critical roles in ecosystems, including soil turning and aeration, dung burial, pest control, pollination and wildlife nutrition. For instance, termites modify the environment around their nests, encouraging grass growth; many beetles are scavengers; dung beetles recycle biological materials into forms useful to other organisms. Insects are responsible for much of the process by which topsoil is created. Defense Insects are mostly small, soft bodied, and fragile compared to larger lifeforms. The immature stages are small, move slowly or are immobile, and so all stages are exposed to predation and parasitism. Insects accordingly employ multiple defensive strategies, including camouflage, mimicry, toxicity and active defense. Many insects rely on camouflage to avoid being noticed by their predators or prey. It is common among leaf beetles and weevils that feed on wood or vegetation. Stick insects mimic the forms of sticks and leaves. Many insects use mimicry to deceive predators into avoiding them. In Batesian mimicry, edible species, such as of hoverflies (the mimics), gain a survival advantage by resembling inedible species (the models). In Müllerian mimicry, inedible species, such as of wasps and bees, resemble each other so as to reduce the sampling rate by predators who need to learn that those insects are inedible. Heliconius butterflies, many of which are toxic, form Müllerian complexes, advertising their inedibility. Chemical defense is common among Coleoptera and Lepidoptera, usually being advertised by bright warning colors (aposematism), as in the monarch butterfly. As larvae, they obtain their toxicity by sequestering chemicals from the plants they eat into their own tissues. Some manufacture their own toxins. Predators that eat poisonous butterflies and moths may vomit violently, learning not to eat insects with similar markings; this is the basis of Müllerian mimicry. Some ground beetles of the family Carabidae actively defend themselves, spraying chemicals from their abdomen with great accuracy, to repel predators. Pollination Pollination is the process by which pollen is transferred in the reproduction of plants, thereby enabling fertilisation and sexual reproduction. Most flowering plants require an animal to do the transportation. The majority of pollination is by insects. Because insects usually receive benefit for the pollination in the form of energy rich nectar it is a mutualism. The various flower traits, such as bright colors and pheromones that coevolved with their pollinators, have been called pollination syndromes, though around one third of flowers cannot be assigned to a single syndrome. Parasitism Many insects are parasitic. The largest group, with over 100,000 species and perhaps over a million, consists of a single clade of parasitoid wasps among the Hymenoptera. These are parasites of other insects, eventually killing their hosts. 
Some are hyper-parasites, as their hosts are other parasitoid wasps. Several groups of insects can be considered as either micropredators or external parasites; for example, many hemipteran bugs have piercing and sucking mouthparts, adapted for feeding on plant sap, while species in groups such as fleas, lice, and mosquitoes are hematophagous, feeding on the blood of animals. Relationship to humans As pests Many insects are considered pests by humans. These include parasites of people and livestock, such as lice and bed bugs; mosquitoes act as vectors of several diseases. Other pests include insects like termites that damage wooden structures; herbivorous insects such as locusts, aphids, and thrips that destroy agricultural crops, or like wheat weevils damage stored agricultural produce. Farmers have often attempted to control insects with chemical insecticides, but increasingly rely on biological pest control. This uses one organism to reduce the population density of a pest organism; it is a key element of integrated pest management. Biological control is favored because insecticides can cause harm to ecosystems far beyond the intended pest targets. In beneficial roles Pollination of flowering plants by insects including bees, butterflies, flies, and beetles, is economically important. The value of insect pollination of crops and fruit trees was estimated in 2021 to be about $34 billion in the US alone. Insects produce useful substances such as honey, wax, lacquer and silk. Honey bees have been cultured by humans for thousands of years for honey. Beekeeping in pottery vessels began about 9,000 years ago in North Africa. The silkworm has greatly affected human history, as silk-driven trade established relationships between China and the rest of the world. Insects that feed on or parasitise other insects are beneficial to humans if they thereby reduce damage to agriculture and human structures. For example, aphids feed on crops, causing economic loss, but ladybugs feed on aphids, and can be used to control them. Insects account for the vast majority of insect consumption. Fly larvae (maggots) were formerly used to treat wounds to prevent or stop gangrene, as they would only consume dead flesh. This treatment is finding modern usage in some hospitals. Insects have gained attention as potential sources of drugs and other medicinal substances. Adult insects, such as crickets and insect larvae of various kinds, are commonly used as fishing bait. Population declines At least 66 insect species extinctions have been recorded since 1500, many of them on oceanic islands. Declines in insect abundance have been attributed to human activity in the form of artificial lighting, land use changes such as urbanization or farming, pesticide use, and invasive species. A 2019 research review suggested that a large proportion of insect species is threatened with extinction in the 21st century, though the details have been disputed. A larger 2020 meta-study, analyzing data from 166 long-term surveys, suggested that populations of terrestrial insects are indeed decreasing rapidly, by about 9% per decade. In research Insects play important roles in biological research. For example, because of its small size, short generation time and high fecundity, the common fruit fly Drosophila melanogaster is a model organism for studies in the genetics of eukaryotes, including genetic linkage, interactions between genes, chromosomal genetics, development, behavior and evolution. 
Because genetic systems are well conserved among eukaryotes, understanding basic cellular processes like DNA replication or transcription in fruit flies can help to understand those processes in other eukaryotes, including humans. The genome of D. melanogaster was sequenced in 2000, reflecting the organism's important role in biological research. It was found that 70% of the fly genome is similar to the human genome, supporting the theory of evolution. As food Insects are consumed as food in 80% of the world's nations, by people in roughly 3,000 ethnic groups. In Africa, locally abundant species of locusts and termites are a common traditional human food source. Some, especially deep-fried cicadas, are considered to be delicacies. Insects have a high protein content for their mass, and some authors suggest their potential as a major source of protein in human nutrition. In most first-world countries, however, entomophagy (the eating of insects) is taboo. Insects are also recommended by armed forces as a survival food for troops in adversity. Because of the abundance of insects and worldwide concern about food shortages, the Food and Agriculture Organization of the United Nations considers that people throughout the world may have to eat insects as a food staple. Insects are noted for their nutrients, having a high content of protein, minerals, and fats, and are already regularly eaten by one-third of the world's population. In other products Black soldier fly larvae can provide protein and fats for use in cosmetics. Insect cooking oil, insect butter and fatty alcohols can be made from such insects as the superworm (Zophobas morio). Insect species including the black soldier fly or the housefly in their maggot forms, and beetle larvae such as mealworms, can be processed and used as feed for farmed animals including chickens, fish and pigs. Many species of insects are sold and kept as pets. In religion and folklore Scarab beetles held religious and cultural symbolism in ancient Egypt, Greece and some shamanistic Old World cultures. The ancient Chinese regarded cicadas as symbols of rebirth or immortality. In Mesopotamian literature, the epic poem of Gilgamesh has allusions to Odonata that signify the impossibility of immortality. Among Australian Aboriginal peoples of the Arrernte language groups, honey ants and witchetty grubs served as personal clan totems. Among the San people of the Kalahari, the praying mantis holds much cultural significance, associated with creation and with patient waiting.
Biology and health sciences
Biology
null
23366955
https://en.wikipedia.org/wiki/Input%20device
Input device
In computing, an input device is a piece of equipment used to provide data and control signals to an information processing system, such as a computer or information appliance. Examples of input devices include keyboards, computer mice, scanners, cameras, joysticks, and microphones. Input devices can be categorized based on: modality of output (e.g., mechanical motion, audio, visual, etc.) whether the output is discrete (e.g., pressing of a key) or continuous (e.g., a mouse's position, though digitized into a discrete quantity, is fast enough to be considered continuous) the number of degrees of freedom involved (e.g., two-dimensional traditional mice, or three-dimensional navigators designed for CAD applications) Keyboard A keyboard is a human interface device which is represented as a matrix of buttons. Each button, or key, can be used to either input an alphanumeric character to a computer, or to call upon a particular function of the computer. It acts as the main text entry interface for most users. Types Keyboards are available in many form factors, depending on the use case. Standard keyboards can be categorized by their size and number of keys, and the type of switch they employ. Other keyboards cater to specific use cases, such as a numeric keypad or a keyer. Desktop keyboards are typically large, often have full key travel distance, and offer features such as multimedia keys and a numeric keypad. Keyboards on laptops and tablets typically compromise on comfort to achieve a thin profile. There are various switch technologies used in modern keyboards, such as mechanical switches (which use springs), scissor switches (usually found on a laptop keyboard), or a membrane. Other keyboards do not have physical keys, such as a virtual keyboard, or a projection keyboard. Ergonomic keyboard A keyboard placing design emphasis on ergonomics and comfort. Chorded keyboard A keyboard used by pressing several keys together. Thumb keyboard A miniature keyboard found in PDAs and mobile phones. Keyer A chorded keyboard without the board. Numeric keypad While some keyboards include one (commonly found on the right side), numeric keypads can be found as independent devices. Pointing device A pointing device allows a user to input spatial data to a computer. It is commonly used as a simple and intuitive way to select items on a computer screen on a graphical user interface (GUI), either by moving a mouse pointer, or, in the case of a touch screen, by physically touching the item on screen. Common pointing devices include mice, touchpads, and touch screens. Whereas mice operate by detecting their displacement on a surface, analog devices, such as 3D mice, joysticks, or pointing sticks, function by reporting their angle of deflection. Types Pointing devices can be classified based on: Whether the input is direct or indirect. With direct input, the input space coincides with the display space, i.e. pointing is done in the space where visual feedback or the pointer appears. Touchscreens and light pens involve direct input. Examples involving indirect input include the mouse and trackball. Whether the positional information is absolute (e.g. on a touch screen) or relative (e.g., with a mouse that can be lifted and repositioned). Direct input is almost necessarily absolute, but indirect input may be either absolute or relative. 
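To make the absolute/relative distinction concrete, the following minimal sketch shows how an application might map both kinds of reports onto the same on-screen pointer. It is illustrative only: the screen resolution, digitizer range, and function names are assumptions, not any particular driver or operating-system API.

```python
# Illustrative sketch: absolute input (e.g. a touch screen) is scaled directly
# to screen coordinates; relative input (e.g. a mouse) accumulates displacements,
# so the device can be lifted and repositioned without moving the pointer.

SCREEN_W, SCREEN_H = 1920, 1080   # assumed display resolution


def clamp(value, low, high):
    return max(low, min(high, value))


def apply_absolute(event_x, event_y, device_max_x, device_max_y):
    """Map a position reported in the device's own coordinate range to the screen."""
    x = event_x / device_max_x * (SCREEN_W - 1)
    y = event_y / device_max_y * (SCREEN_H - 1)
    return clamp(x, 0, SCREEN_W - 1), clamp(y, 0, SCREEN_H - 1)


def apply_relative(pointer_x, pointer_y, dx, dy, sensitivity=1.0):
    """Accumulate a reported displacement into the current pointer position."""
    x = clamp(pointer_x + dx * sensitivity, 0, SCREEN_W - 1)
    y = clamp(pointer_y + dy * sensitivity, 0, SCREEN_H - 1)
    return x, y


# Example: a touch at the centre of a 4096 x 4096 digitizer, followed by two mouse deltas.
px, py = apply_absolute(2048, 2048, 4096, 4096)
px, py = apply_relative(px, py, dx=15, dy=-4)
print(px, py)
```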
For example, digitizing graphics tablets that do not have an embedded screen involve indirect input and sense absolute positions and are often run in an absolute input mode, but they may also be set up to simulate a relative input mode like that of a touchpad, where the stylus or puck can be lifted and repositioned. Embedded LCD tablets, which are also referred to as graphics tablet monitors, are the extension of digitizing graphics tablets. They enable users to see the real-time positions via the screen while being used. mouse A hand-held pointing device that is moved across a surface. touchpad or trackpad A flat surface operated by moving a finger across its surface. touch screen A layer placed over a computer screen, used by physically touching it with one's finger or a stylus. trackball Similar to a mouse, a trackball has a ball held by a socket. Instead of moving the mouse, the user rolls the ball with their finger. graphics tablet, digitizer, or drawing tablet A flat surface on which a stylus is used, often to draw images or capture signatures. Sensors A sensor is an input device which produces data based on physical properties. Sensors are commonly found in mobile devices to detect their physical orientation and acceleration, but may also be found in desktop computers in the form of a thermometer used to monitor system temperature. Types Accelerometer Detects acceleration. Gyroscope Detects spatial orientation. Magnetometer Similar to a compass, a magnetometer senses magnetic heading. Proximity sensor Detects whether an object is in proximity. Barometer Measures atmospheric pressure. May be used to determine elevation above sea level. Ultrasonic transducer Detects movement and range of objects using ultrasound. LIDAR Detects the range of objects using laser. Thermometer Measures temperature. Usually uses a thermistor or thermocouple. Some sensors can be built with MEMS, which allows them to be microscopic in size. High-degree of freedom input devices Some devices allow many continuous degrees of freedom as input. These can be used as pointing devices, but are generally used in ways that don't involve pointing to a location in space, such as the control of a camera angle while in 3D applications. These kinds of devices are typically used in virtual reality systems (CAVEs), where input that registers six degrees of freedom is required. Composite devices Input devices, such as buttons and joysticks, can be combined on a single physical device that could be thought of as a composite device. Many gaming devices have controllers like this. Technically mice are composite devices, as they both track movement and provide buttons for clicking, but composite devices are generally considered to have more than two different forms of input. Examples Joystick Consists of a stick pivoting on a stationary base. Gamepad, or joypad Hand held device often used to play modern video games. Paddle A paddle could be a game controller consisting of a dial and a button, or an input device such as a Griffin PowerMate or a Microsoft Surface Dial. Racing wheel An imitation steering wheel that can be used to play racing video games. Wii Remote A remote control used with the Nintendo Wii video game console which integrates an accelerometer and pointing capabilities. Video input devices Video input devices are used to digitize images or video from the outside world into the computer. The information can be stored in a multitude of formats depending on the user's requirement. 
Many video input devices use a camera sensor. Types Digital camera Digital camcorder Portable media player Webcam Microsoft Kinect Sensor Image scanner Fingerprint scanner Barcode reader 3D scanner Laser rangefinder Eye gaze tracker Voice recorder Voice input devices are used to capture sound. In some cases, an audio output device can be used as an input device, in order to capture produced sound. Audio input devices allow a user to send audio info to a computer for processing, recording, or carrying out commands. Devices such as microphones allow users to speak to the computer in order to record a voice message or navigate software. Aside from recording, audio input devices are also used with speech recognition software. Types Microphones MIDI keyboard or other digital musical instrument Punched paper Punched cards and punched tapes were used often in the 20th century. A punched hole represented a one; its absence represented a zero. A mechanical or optical reader was used to input a punched card or tape. Other types Gesture recognition Digital pen Magnetic ink character recognition Sip-and-puff#Computer input device
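As an illustration of how the raw samples produced by the sensors described earlier become usable orientation data, the following hedged sketch estimates device tilt from a three-axis accelerometer reading. The axis convention and units are assumptions; real devices and sensor APIs differ.

```python
# Illustrative only: pitch and roll from a gravity-dominated accelerometer sample.
import math


def tilt_from_accelerometer(ax, ay, az):
    """Return (pitch, roll) in degrees, assuming x right, y up the screen, z out of the screen."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll


# A device lying flat measures roughly (0, 0, 9.81) m/s^2:
print(tilt_from_accelerometer(0.0, 0.0, 9.81))   # ~ (0.0, 0.0)
# Tilted so that gravity acts partly along the y axis:
print(tilt_from_accelerometer(0.0, 4.9, 8.5))
```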
Technology
Computer hardware
null
26244469
https://en.wikipedia.org/wiki/Recombination%20%28cosmology%29
Recombination (cosmology)
In cosmology, recombination refers to the epoch during which charged electrons and protons first became bound to form electrically neutral hydrogen atoms. Recombination occurred roughly 400,000 years after the Big Bang (at a redshift of z ≈ 1100). The word "recombination" is misleading, since the Big Bang theory does not posit that protons and electrons had been combined before, but the name exists for historical reasons since it was named before the Big Bang hypothesis became the primary theory of the birth of the universe. Overview Immediately after the Big Bang, the universe was a hot, dense plasma of photons, leptons, and quarks: the quark epoch. At about 10⁻⁶ seconds, the Universe had expanded and cooled sufficiently to allow for the formation of protons: the hadron epoch. This plasma was effectively opaque to electromagnetic radiation due to Thomson scattering by free electrons, as the mean free path each photon could travel before encountering an electron was very short. This is the current state of the interior of the Sun. As the universe expanded, it also cooled. Eventually, the universe cooled to the point that the radiation field could not immediately ionize neutral hydrogen, and atoms became energetically favored. The fraction of free electrons and protons as compared to neutral hydrogen decreased to a few parts in 10,000. Recombination involves electrons binding to protons (hydrogen nuclei) to form neutral hydrogen atoms. Because direct recombinations to the ground state (lowest energy) of hydrogen are very inefficient, these hydrogen atoms generally form with the electrons in a high energy state, and the electrons quickly transition to their low energy state by emitting photons. Two main pathways exist: from the 2p state by emitting a Lyman-α photon – these photons will almost always be reabsorbed by another hydrogen atom in its ground state – or from the 2s state by emitting two photons, which is very slow. This production of photons is known as decoupling, which leads to recombination sometimes being called photon decoupling, but recombination and photon decoupling are distinct events. Once photons decoupled from matter, they traveled freely through the universe without interacting with matter and constitute what is observed today as cosmic microwave background radiation (in that sense, the cosmic background radiation is infrared and some red black-body radiation emitted when the universe was at a temperature of some 3000 K, redshifted by a factor of about 1100 from the visible spectrum to the microwave spectrum). Recombination time frames The time frame for recombination can be estimated from the time dependence of the temperature of the cosmic microwave background (CMB). The microwave background is a blackbody spectrum representing the photons present at recombination, shifted in energy by the expansion of the universe. A blackbody is completely characterized by its temperature; the shift is described by the redshift z, defined through 1 + z = T/T₀, where T₀ ≈ 2.7 K is today's temperature. The thermal energy at the peak of the blackbody spectrum is the Boltzmann constant, kB, times the temperature, but simply comparing this to the ionization energy of hydrogen atoms will not consider the spectrum of energies. A better estimate evaluates the thermal equilibrium between matter (atoms) and radiation. The density of photons with energy E sufficient to ionize hydrogen is the total photon density times a factor from the equilibrium Boltzmann distribution, approximately nγ(E > EI) ≈ nγ exp(−EI/(kB T)). At equilibrium this will approximately equal the matter (baryon) density. 
The ratio of photons to baryons is known from several sources, including measurements by the Planck satellite, to be around 10⁹. Solving for z gives a value of around 1100, which converts to a cosmic time value of around 400,000 years. Recombination history of hydrogen The cosmic ionization history is generally described in terms of the free electron fraction xe as a function of redshift. It is the ratio of the abundance of free electrons to the total abundance of hydrogen (both neutral and ionized). Denoting by ne the number density of free electrons, nH that of atomic hydrogen and np that of ionized hydrogen (i.e. protons), xe is defined as xe = ne / (nH + np). Since hydrogen only recombines once helium is fully neutral, charge neutrality implies ne = np, i.e. xe is also the fraction of ionized hydrogen. Rough estimate from equilibrium theory It is possible to find a rough estimate of the redshift of the recombination epoch assuming the recombination reaction is fast enough that it proceeds near thermal equilibrium. The relative abundance of free electrons, protons and neutral hydrogen is then given by the Saha equation: ne np / nH = (me kB T / (2π ħ²))^(3/2) exp(−EI / (kB T)), where me is the mass of the electron, kB is the Boltzmann constant, T is the temperature, ħ is the reduced Planck constant, and EI = 13.6 eV is the ionization energy of hydrogen. Charge neutrality requires ne = np, and the Saha equation can be rewritten in terms of the free electron fraction xe: xe² / (1 − xe) = (1 / (nH + np)) (me kB T / (2π ħ²))^(3/2) exp(−EI / (kB T)). All quantities in the right-hand side are known functions of z, the redshift: the temperature is given by T = T₀ (1 + z) ≈ 2.7 K × (1 + z), and the total density of hydrogen (neutral and ionized) scales as (1 + z)³, with its present-day value fixed by the measured baryon density. Solving this equation for a 50 percent ionization fraction yields a recombination temperature of roughly 4000 K, corresponding to a redshift of z ≈ 1400–1500 (a short numerical sketch of this estimate is given at the end of this article). Effective three-level atom In 1968, physicists Jim Peebles in the US and Yakov Borisovich Zel'dovich and collaborators in the USSR independently computed the non-equilibrium recombination history of hydrogen. The basic elements of the model are the following. Direct recombinations to the ground state of hydrogen are very inefficient: each such event leads to a photon with energy greater than 13.6 eV, which almost immediately re-ionizes a neighboring hydrogen atom. Electrons therefore only efficiently recombine to the excited states of hydrogen, from which they cascade very quickly down to the first excited state, with principal quantum number n = 2. From the first excited state, electrons can reach the ground state n = 1 through two pathways: Decay from the 2p state by emitting a Lyman-α photon. This photon will almost always be reabsorbed by another hydrogen atom in its ground state. However, cosmological redshifting systematically decreases the photon frequency, and there is a small chance that it escapes reabsorption if it gets redshifted far enough from the Lyman-α line resonant frequency before encountering another hydrogen atom. Decay from the 2s state by emitting two photons. This two-photon decay process is very slow, with a rate of 8.22 s⁻¹. It is however competitive with the slow rate of Lyman-α escape in producing ground-state hydrogen. Atoms in the first excited state may also be re-ionized by the ambient CMB photons before they reach the ground state. When this is the case, it is as if the recombination to the excited state did not happen in the first place. To account for this possibility, Peebles defines the factor C as the probability that an atom in the first excited state reaches the ground state through either of the two pathways described above before being photoionized. 
This model is usually described as an "effective three-level atom" as it requires keeping track of hydrogen under three forms: in its ground state, in its first excited state (assuming all the higher excited states are in Boltzmann equilibrium with it), and in its ionized state. Accounting for these processes, the recombination history is then described by the differential equation dxe/dt = C [ −αB nH xe² + βB (1 − xe) exp(−E21/(kB T)) ], where αB is the "case B" recombination coefficient to the excited states of hydrogen, βB is the corresponding photoionization rate and E21 = 10.2 eV is the energy of the first excited state. Note that the second term in the right-hand side of the above equation can be obtained by a detailed balance argument. The equilibrium result given in the previous section would be recovered by setting the left-hand side to zero, i.e. assuming that the net rates of recombination and photoionization are large in comparison to the Hubble expansion rate, which sets the overall evolution timescale for the temperature and density. However, the net recombination rate is comparable to the Hubble expansion rate, and even gets significantly lower at low redshifts, leading to an evolution of the free electron fraction much slower than what one would obtain from the Saha equilibrium calculation. With modern values of cosmological parameters, one finds that the universe is 90% neutral at z ≈ 1070. Modern developments The simple effective three-level atom model described above accounts for the most important physical processes. However, it does rely on approximations that lead to errors on the predicted recombination history at the level of 10% or so. Due to the importance of recombination for the precise prediction of cosmic microwave background anisotropies, several research groups have revisited the details of this picture over the last two decades. The refinements to the theory can be divided into two categories: Accounting for the non-equilibrium populations of the highly excited states of hydrogen. This effectively amounts to modifying the recombination coefficient αB. Accurately computing the rate of Lyman-α escape and the effect of these photons on the 2s–1s transition. This requires solving a time-dependent radiative transfer equation. In addition, one needs to account for higher-order Lyman transitions. These refinements effectively amount to a modification of Peebles' C factor. Modern recombination theory is believed to be accurate at the level of 0.1%, and is implemented in publicly available fast recombination codes. Primordial helium recombination Helium nuclei are produced during Big Bang nucleosynthesis, and make up about 24% of the total mass of baryonic matter. The ionization energy of helium is larger than that of hydrogen and it therefore recombines earlier. Because neutral helium carries two electrons, its recombination proceeds in two steps. The first recombination, He²⁺ + e⁻ → He⁺, proceeds near Saha equilibrium and takes place around redshift z ≈ 6000. The second recombination, He⁺ + e⁻ → He, is slower than what would be predicted from Saha equilibrium and takes place around redshift z ≈ 2000. The details of helium recombination are less critical than those of hydrogen recombination for the prediction of cosmic microwave background anisotropies, since the universe is still very optically thick after helium has recombined and before hydrogen has started its recombination. Primordial light barrier Prior to recombination, photons were not able to freely travel through the universe, as they constantly scattered off the free electrons and protons. 
This scattering causes a loss of information, and "there is therefore a photon barrier at a redshift" near that of recombination that prevents us from using photons directly to learn about the universe at larger redshifts. Once recombination had occurred, however, the mean free path of photons greatly increased due to the lower number of free electrons. Shortly after recombination, the photon mean free path became larger than the Hubble length, and photons traveled freely without interacting with matter. For this reason, recombination is closely associated with the last scattering surface, which is the name for the last time at which the photons in the cosmic microwave background interacted with matter. However, these two events are distinct, and in a universe with different values for the baryon-to-photon ratio and matter density, recombination and photon decoupling need not have occurred at the same epoch.
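As a rough numerical illustration of the Saha-equilibrium estimate described in the "Rough estimate from equilibrium theory" section above, the following short script scans downward in redshift and reports where the ionized fraction drops to 50%. It is a hedged sketch, not a research-grade calculation: the present-day hydrogen number density n_H0 is an assumed round value, and the exact crossing redshift (typically somewhere around z ~ 1400) shifts with that assumption.

```python
# Saha-equilibrium sketch: solve x_e^2 / (1 - x_e) = S(T) / n_H for x_e at each redshift.
import math

# Physical constants (SI)
k_B  = 1.380649e-23              # Boltzmann constant, J/K
m_e  = 9.1093837e-31             # electron mass, kg
hbar = 1.054571817e-34           # reduced Planck constant, J s
E_I  = 13.6 * 1.602176634e-19    # hydrogen ionization energy, J
T0   = 2.725                     # present CMB temperature, K
n_H0 = 0.2                       # assumed present-day hydrogen number density, m^-3


def free_electron_fraction(z):
    """Equilibrium ionized fraction x_e at redshift z from the Saha relation."""
    T = T0 * (1.0 + z)
    n_H = n_H0 * (1.0 + z) ** 3
    S = (m_e * k_B * T / (2.0 * math.pi * hbar ** 2)) ** 1.5 * math.exp(-E_I / (k_B * T))
    r = S / n_H                   # x_e^2 + r*x_e - r = 0, take the positive root
    return (-r + math.sqrt(r * r + 4.0 * r)) / 2.0


# Scan downward in redshift and report where the ionized fraction falls below 50%.
for z in range(2000, 800, -1):
    if free_electron_fraction(z) < 0.5:
        print(f"x_e falls below 0.5 near z ~ {z}, T ~ {T0 * (1 + z):.0f} K")
        break
```

As the non-equilibrium (Peebles) treatment described above makes clear, the actual transition is slower than this equilibrium estimate suggests, which is why the last-scattering redshift quoted elsewhere in the article is nearer z ≈ 1100.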
Physical sciences
Physical cosmology
Astronomy
26246088
https://en.wikipedia.org/wiki/IEEE%201394
IEEE 1394
IEEE 1394 is an interface standard for a serial bus for high-speed communications and isochronous real-time data transfer. It was developed in the late 1980s and early 1990s by Apple in cooperation with a number of companies, primarily Sony and Panasonic. It is most commonly known by the name FireWire (Apple), though other brand names exist, such as i.LINK (Sony) and Lynx (Texas Instruments). The copper cable used in its most common implementation can be up to 4.5 metres (15 ft) long. Power and data are carried over this cable, allowing devices with moderate power requirements to operate without a separate power supply. FireWire is also available in Cat 5 and optical fiber versions. The 1394 interface is comparable to USB. USB was developed subsequently and gained much greater market share. USB requires a host controller whereas IEEE 1394 is cooperatively managed by the connected devices. History and development FireWire is Apple's name for the IEEE 1394 High Speed Serial Bus. Its development was initiated by Apple in 1986 and carried forward by the IEEE P1394 Working Group, largely driven by contributions from Sony (102 patents), Apple (58 patents), Panasonic (46 patents), and Philips (43 patents), in addition to contributions made by engineers from LG Electronics, Toshiba, Hitachi, Canon, INMOS/SGS Thomson (now STMicroelectronics), and Texas Instruments. IEEE 1394 is a serial bus architecture for high-speed data transfer, serial meaning that information is transferred one bit at a time. Parallel buses utilize a number of different physical connections, and as such are usually more costly and typically heavier. IEEE 1394 fully supports both isochronous and asynchronous applications. Apple intended FireWire to be a serial replacement for the parallel SCSI bus, while providing connectivity for digital audio and video equipment. Apple's development began in the late 1980s; the work was later presented to the IEEE and was completed in January 1995. In 2007, IEEE 1394 was a composite of four documents: the original IEEE Std. 1394–1995, the IEEE Std. 1394a-2000 amendment, the IEEE Std. 1394b-2002 amendment, and the IEEE Std. 1394c-2006 amendment. On June 12, 2008, all these amendments as well as errata and some technical updates were incorporated into a superseding standard, IEEE Std. 1394–2008. Apple first included onboard FireWire in some of its 1999 Macintosh models (though it had been a build-to-order option on some models since 1997), and most Apple Macintosh computers manufactured in the years 2000 through 2011 included FireWire ports. However, in February 2011 Apple introduced the first commercially available computer with Thunderbolt. Apple released its last computers with FireWire in 2012. By 2014, Thunderbolt had become a standard feature across Apple's entire line of computers (later with the exception of the 12-inch MacBook introduced in 2015, which featured only a single USB-C port), effectively becoming the spiritual successor to FireWire in the Apple ecosystem. Apple's last products with FireWire, the Thunderbolt Display and 2012 13-inch MacBook Pro, were discontinued in 2016. Apple previously sold a Thunderbolt to FireWire Adapter, which provided one FireWire 800 port. A separate adapter was required to use it with Thunderbolt 3. Sony's implementation of the system, i.LINK, used a smaller connector with only four signal conductors, omitting the two conductors that provide power for devices in favor of a separate power connector. This style was later added into the 1394a amendment. 
This port is sometimes labeled S100 or S400 to indicate speed in Mbit/s. The system was commonly used to connect data storage devices and DV (digital video) cameras, but was also popular in industrial systems for machine vision and professional audio systems. Many users preferred it over the more common USB 2.0 for its then greater effective speed and power distribution capabilities. Benchmarks show that the sustained data transfer rates are higher for FireWire than for USB 2.0, but lower than USB 3.0. The advantage is most marked on Apple Mac OS X, with results more varied on Microsoft Windows. Patent considerations Implementation of IEEE 1394 is said to require use of 261 issued international patents held by ten corporations. Use of these patents requires licensing; use without license generally constitutes patent infringement. Companies holding IEEE 1394 IP formed a patent pool with MPEG LA, LLC as the license administrator, to whom they licensed patents. MPEG LA sublicenses these patents to providers of equipment implementing IEEE 1394. Under the typical patent pool license, a royalty of US$0.25 per unit is payable by the manufacturer upon the manufacture of each 1394 finished product; no royalties are payable by users. The last of the patents, MY 120654 by Sony, expired on November 30, 2020. The following are the patent holders of the IEEE 1394 standard, as listed in the patent pool managed by MPEG LA. A person or company may review the actual 1394 Patent Portfolio License upon request to MPEG LA. MPEG LA does not provide assurance of protection to licensees beyond its own patents. At least one formerly licensed patent is known to have been removed from the pool, and other hardware patents exist that reference IEEE 1394. The 1394 High Performance Serial Bus Trade Association (the 1394 TA) was formed to aid the marketing of IEEE 1394. Its bylaws prohibit dealing with intellectual property issues. The 1394 Trade Association operates on an individual no-cost membership basis to further enhancements to 1394 standards. The Trade Association is also the library source for all available 1394 documentation and standards. Technical specifications FireWire can connect up to 63 peripherals in a tree or daisy-chain topology (as opposed to Parallel SCSI's electrical bus topology). It allows peer-to-peer device communication — such as communication between a scanner and a printer — to take place without using system memory or the CPU. FireWire also supports multiple host controllers per bus. It is designed to support plug and play and hot swapping. The copper cable it uses in its most common implementation can be up to 4.5 metres long and is more flexible than most parallel SCSI cables. In its six-conductor or nine-conductor variations, it can supply up to 45 watts of power per port at up to 30 volts, allowing moderate-consumption devices to operate without a separate power supply. FireWire devices implement the ISO/IEC 13213 configuration ROM model for device configuration and identification, to provide plug-and-play capability. All FireWire devices are identified by an IEEE EUI-64 unique identifier in addition to well-known codes indicating the type of device and the protocols it supports. FireWire devices are organized at the bus in a tree topology. Each device has a unique self-ID. One of the nodes is elected root node and always has the highest ID. The self-IDs are assigned during the self-ID process, which happens after each bus reset. 
The order in which the self-IDs are assigned is equivalent to traversing the tree depth-first, post-order. FireWire is capable of safely operating critical systems due to the way multiple devices interact with the bus and how the bus allocates bandwidth to the devices. FireWire is capable of both asynchronous and isochronous transfer methods at once. Isochronous data transfers are transfers for devices that require continuous, guaranteed bandwidth. In an aircraft, for instance, isochronous devices include control of the rudder, mouse operations and data from pressure sensors outside the aircraft. All these elements require constant, uninterrupted bandwidth. To support both types of transfer, FireWire dedicates a certain percentage of the bus to isochronous data and the rest to asynchronous data. In IEEE 1394, 80% of the bus is reserved for isochronous cycles, leaving asynchronous data with a minimum of 20% of the bus. Encoding scheme FireWire uses Data/Strobe encoding (D/S encoding). In D/S encoding, two non-return-to-zero (NRZ) signals are used to transmit the data with high reliability. The NRZ signal sent is fed with the clock signal through an XOR gate, creating a strobe signal. This strobe is then put through another XOR gate along with the data signal to reconstruct the clock. This in turn acts as the bus's phase-locked loop for synchronization purposes. Arbitration The process of the bus deciding which node gets to transmit data at what time is known as arbitration. Each arbitration round lasts about 125 microseconds. During the round, the root node (device nearest the processor) sends a cycle start packet. All nodes requiring data transfer respond, with the closest node winning. After the node is finished, the remaining nodes take turns in order. This repeats until all the devices have used their portion of the 125 microseconds, with isochronous transfers having priority. Standards and versions The previous standard and its three published amendments are now incorporated into a superseding standard, IEEE 1394-2008. The individually added features give a good history of the development path. FireWire 400 (IEEE 1394-1995) The original release of IEEE 1394-1995 specified what is now known as FireWire 400. It can transfer data between devices at 100, 200, or 400 Mbit/s half-duplex data rates (the actual transfer rates are 98.304, 196.608, and 393.216 Mbit/s, i.e., 12.288, 24.576 and 49.152 MB/s respectively). These different transfer modes are commonly referred to as S100, S200, and S400. Cable length is limited to 4.5 metres, although up to 16 cables can be daisy chained using active repeaters, e.g. external hubs or the internal hubs that are often present in FireWire equipment. The S400 standard limits any configuration's maximum cable length to 72 metres. The 6-conductor connector is commonly found on desktop computers and can supply the connected device with power. The 6-conductor powered connector, now referred to as an alpha connector, adds power output to support external devices. Typically a device can pull about 7 to 8 watts from the port; however, the voltage varies significantly between different devices. Voltage is specified as unregulated and should nominally be about 25 volts (range 24 to 30). Apple's implementation on laptops is typically related to battery power and can be as low as 9 V. Improvements (IEEE 1394a-2000) An amendment, IEEE 1394a, was released in 2000, which clarified and improved the original specification. 
It added support for asynchronous streaming, quicker bus reconfiguration, packet concatenation, and a power-saving suspend mode. IEEE 1394a offers a couple of advantages over the original IEEE 1394–1995. 1394a is capable of arbitration accelerations, allowing the bus to accelerate arbitration cycles to improve efficiency. It also allows for arbitrated short bus reset, in which a node can be added or dropped without causing a big drop in isochronous transmission. 1394a also standardized the 4-conductor alpha connector developed by Sony and trademarked as i.LINK, already widely in use on consumer devices such as camcorders, most PC laptops, a number of PC desktops, and other small FireWire devices. The 4-conductor connector is fully data-compatible with 6-conductor alpha interfaces but lacks power connectors. FireWire 800 (IEEE 1394b-2002) IEEE 1394b-2002 introduced FireWire 800 (Apple's name for the 9-conductor S800 bilingual version of the IEEE 1394b standard). This specification added a new encoding scheme termed beta mode, which allowed compliant devices to operate at 786.432 Mbit/s full-duplex. It is backwards compatible with the slower rates and 6-conductor alpha connectors of FireWire 400. However, while the IEEE 1394a and IEEE 1394b standards are compatible, FireWire 800's connector, referred to as a beta connector, is different from FireWire 400's alpha connectors, making legacy cables incompatible. A bilingual cable allows the connection of older devices to the newer port. In 2003, Apple was the first to introduce commercial products with the new connector, including a new model of the Power Mac G4 and a 17" PowerBook G4. The full IEEE 1394b specification supports data rates up to 3200 Mbit/s (i.e., 400 MB/s) over beta-mode or optical connections up to 100 metres in length. Standard Category 5e cable supports distances of up to 100 metres at S100. The original 1394 and 1394a standards used data/strobe (D/S) encoding, now known as alpha mode, with the cables, while 1394b added a data encoding scheme called 8b/10b referred to as beta mode. Beta mode is based on 8b/10b (from Gigabit Ethernet, also used for many other protocols). 8b/10b encoding involves expanding an 8-bit data word into 10 bits, with the extra bits after the 5th and 8th data bits. The partitioned data is sent through a Running Disparity calculator function. The Running Disparity calculator attempts to keep the number of 1s transmitted equal to the number of 0s, thereby assuring a DC-balanced signal. Then, the different partitions are sent through a 5b/6b encoder for the 5-bit partition and a 3b/4b encoder for the 3-bit partition. This gives the packet the ability to have at least two 1s, ensuring synchronization of the PLL at the receiving end to the correct bit boundaries for reliable transfer. An additional function of the coding scheme is to support the arbitration for bus access and general bus control. This is possible due to the surplus symbols afforded by the 8b/10b expansion. (While 8-bit symbols can encode a maximum of 256 values, 10-bit symbols permit the encoding of up to 1024.) Symbols invalid for the current state of the receiving PHY indicate data errors. FireWire S800T (IEEE 1394c-2006) IEEE 1394c-2006 was published on June 8, 2007. 
It provided a major technical improvement, namely a new port specification that provides 800 Mbit/s over the same 8P8C (Ethernet) connectors with Category 5e cable, which is specified in IEEE 802.3 clause 40 (gigabit Ethernet over copper twisted pair), along with a corresponding automatic negotiation that allows the same port to connect to either IEEE Std 1394 or IEEE 802.3 (Ethernet) devices. FireWire S1600 and S3200 In December 2007, the 1394 Trade Association announced that products would be available before the end of 2008 using the S1600 and S3200 modes that, for the most part, had already been defined in 1394b and were further clarified in IEEE Std. 1394–2008. The 1.572864 Gbit/s and 3.145728 Gbit/s devices use the same 9-conductor beta connectors as the existing FireWire 800 and are fully compatible with existing S400 and S800 devices. These modes compete with USB 3.0. S1600 (Symwave) and S3200 (DapTechnology) development units have been made; however, because of FPGA technology constraints, DapTechnology targeted S1600 implementations first, with S3200 not becoming commercially available until 2012. Steve Jobs declared FireWire dead in 2008. Few S1600 devices were ever released, with a Sony camera being the only notable user. Future enhancements (including P1394d) A project named IEEE P1394d was formed by the IEEE on March 9, 2009 to add single-mode fiber as an additional transport medium to FireWire. The project was withdrawn in 2013. Other future iterations of FireWire were expected to increase speed to 6.4 Gbit/s and to add connectors such as the small multimedia interface. Operating system support Full support for IEEE 1394a and 1394b is available for Microsoft Windows, FreeBSD, Linux, Apple Mac OS 8.6 through macOS 14 Sonoma and NetBSD. In Windows XP, a degradation in performance of 1394 devices may have occurred with installation of Service Pack 2. This was resolved in Hotfix 885222 and in SP3. Some FireWire hardware manufacturers also provide custom device drivers that replace the Microsoft OHCI host adapter driver stack, enabling S800-capable devices to run at full 800 Mbit/s transfer rates on older versions of Windows (XP SP2 w/o Hotfix 885222) and Windows Vista. At the time of its release, Microsoft Windows Vista supported only 1394a, with assurances that 1394b support would come in the next service pack. Service Pack 1 for Microsoft Windows Vista has since been released; however, the addition of 1394b support is not mentioned anywhere in the release documentation. The 1394 bus driver was rewritten for Windows 7 to provide support for higher speeds and alternative media. In Linux, support was originally provided by libraw1394, which allowed direct communication between user space and IEEE 1394 buses. Subsequently, a new kernel driver stack, nicknamed JuJu, has been implemented. Cable TV system support Under FCC Code 47 CFR 76.640 section 4, subsections 1 and 2, Cable TV providers (in the US, with digital systems) must, upon request of a customer, have provided a high-definition capable cable box with a functional FireWire interface. This applied only to customers leasing high-definition capable cable boxes from their cable provider after April 1, 2004. The interface can be used to display or record Cable TV, including HDTV programming. In June 2010, the FCC issued an order that permitted set-top boxes to include IP-based interfaces in place of FireWire. Comparison with USB While both technologies provide similar end results, there are fundamental differences between USB and FireWire. 
USB requires the presence of a host controller, typically a PC, which connects point to point with the USB device. This allows for simpler (and lower-cost) peripherals, at the cost of lowered functionality of the bus. Intelligent hubs are required to connect multiple USB devices to a single USB host controller. By contrast, FireWire is essentially a peer-to-peer network (where any device may serve as the host or client), allowing multiple devices to be connected on one bus. The FireWire host interface supports DMA and memory-mapped devices, allowing data transfers to happen without loading the host CPU with interrupts and buffer-copy operations. Additionally, FireWire features two data buses for each segment of the bus network, whereas, until USB 3.0, USB featured only one. This means that FireWire can have communication in both directions at the same time (full-duplex), whereas USB communication prior to 3.0 can only occur in one direction at any one time (half-duplex). While USB 2.0 expanded into the fully backwards-compatible USB 3.0 and 3.1 (using the same main connector type), FireWire used a different connector between 400 and 800 implementations. Common applications Consumer automobiles IDB-1394 Customer Convenience Port (CCP) was the automotive version of the 1394 standard. Consumer audio and video IEEE 1394 was the High-Definition Audio-Video Network Alliance (HANA) standard connection interface for A/V (audio/visual) component communication and control. HANA was dissolved in September 2009 and the 1394 Trade Association assumed control of all HANA-generated intellectual property. Military and aerospace vehicles SAE Aerospace standard AS5643 originally released in 2004 and reaffirmed in 2013 establishes IEEE-1394 standards as a military and aerospace databus network in those vehicles. AS5643 is utilized by several large programs, including the F-35 Lightning II, the X-47B UCAV aircraft, AGM-154 weapon and JPSS-1 polar satellite for NOAA. AS5643 combines existing 1394-2008 features like looped topology with additional features like transformer isolation and time synchronization, to create deterministic double and triple fault-tolerant data bus networks. General networking FireWire can be used for ad hoc (terminals only, no routers except where a FireWire hub is used) computer networks. Specifically, RFC 2734 specifies how to run IPv4 over the FireWire interface, and RFC 3146 specifies how to run IPv6. Mac OS X, Linux, and FreeBSD include support for networking over FireWire. Windows 95, Windows 98, Windows Me, Windows XP and Windows Server 2003 include native support for IEEE 1394 networking. Windows 2000 does not have native support but may work with third party drivers. A network can be set up between two computers using a single standard FireWire cable, or by multiple computers through use of a hub. This is similar to Ethernet networks with the major differences being transfer speed, conductor length, and the fact that standard FireWire cables can be used for point-to-point communication. On December 4, 2004, Microsoft announced that it would discontinue support for IP networking over the FireWire interface in all future versions of Microsoft Windows. Consequently, support for this feature is absent from Windows Vista and later Windows releases. Microsoft rewrote their 1394 driver in Windows 7 but networking support for FireWire is not present. Unibrain offers free FireWire networking drivers for Windows called ubCore, which support Windows Vista and later versions. 
Earlier models of the PlayStation 2 console (SCPH 1000x to 3900x series) had an i.LINK-branded 1394 connector. This was used for networking until the release of an Ethernet adapter later in the console's lifespan, but very few software titles supported the feature. The connector was removed from the SCPH 5000x series onward. IIDC IIDC (Instrumentation & Industrial Digital Camera) is the FireWire data format standard for live video, and is used by Apple's iSight A/V camera. The system was designed for machine vision systems but is also used for other computer vision applications and for some webcams. Although they are easily confused since they both run over FireWire, IIDC is different from, and incompatible with, the ubiquitous AV/C (Audio Video Control) used to control camcorders and other consumer video devices. DV Digital Video (DV) is a standard protocol used by some digital camcorders. All DV cameras that recorded to tape media had a FireWire interface (usually a 4-conductor). All DV ports on camcorders only operate at the slower 100 Mbit/s speed of FireWire. This presents operational issues if the camcorder is daisy chained from a faster S400 device or via a common hub, because any one segment of a FireWire network cannot support multiple-speed communication. Labeling of the port varied by manufacturer, with Sony using either its i.LINK trademark or the letters DV. Many digital video recorders have a DV-input FireWire connector (usually an alpha connector) that can be used to record video directly from a DV camcorder (computer-free). The protocol also accommodates remote control (play, rewind, etc.) of connected devices, and can stream time code from a camera. USB is unsuitable for transferring video data from tape because tape, by its nature, does not support variable data rates. USB relies heavily on processor support, and the processor was not guaranteed to service the USB port in time. The later move away from tape towards solid-state memory or disc media (e.g., SD Cards, optical disks or hard drives) has facilitated moving to USB transfer because file-based data can be moved in segments as required. Frame grabbers The IEEE 1394 interface is commonly found in frame grabbers, devices that capture and digitize an analog video signal; however, IEEE 1394 faces competition from the Gigabit Ethernet interface on grounds of speed and availability. iPod and iPhone synchronization and charging iPods released prior to the iPod with Dock Connector used IEEE 1394a ports for transferring music files and charging, but in 2003 the FireWire port in iPods was succeeded by Apple's dock connector, and IEEE 1394 to 30-pin connector cables were made. Apple began removing backwards compatibility with FireWire cables starting with the first generation iPod nano and fifth generation iPod, both of which could only sync via USB but retained the ability to charge through FireWire. This was also carried over to the second and third generation nanos as well as the iPod Classic. Backwards compatibility was removed completely beginning with the iPhone 3G, second generation iPod touch, and the fourth generation iPod nano, all of which could only charge and sync via USB. Security issues Devices on a FireWire bus can communicate by direct memory access (DMA), where a device can use hardware to map internal memory to FireWire's physical memory space. The SBP-2 (Serial Bus Protocol 2) used by FireWire disk drives uses this capability to minimize interrupts and buffer copies. 
In SBP-2, the initiator (controlling device) sends a request by remotely writing a command into a specified area of the target's FireWire address space. This command usually includes buffer addresses in the initiator's FireWire Physical Address Space, which the target is supposed to use for moving I/O data to and from the initiator. On many implementations, particularly those like PCs and Macs using the popular OHCI, the mapping between the FireWire physical memory space and device physical memory is done in hardware, without operating system intervention. While this enables high-speed and low-latency communication between data sources and sinks without unnecessary copying (such as between a video camera and a software video recording application, or between a disk drive and the application buffers), this can also be a security or media rights-restriction risk if untrustworthy devices are attached to the bus and initiate a DMA attack. One of the applications known to exploit this to gain unauthorized access to running Windows, Mac OS and Linux computers is the spyware FinFireWire. For this reason, high-security installations typically either use newer machines that map a virtual memory space to the FireWire physical memory space (such as a Power Mac G5, or any Sun workstation), disable relevant drivers at operating system level, disable the OHCI hardware mapping between FireWire and device memory, physically disable the entire FireWire interface, or opt to not use FireWire or other hardware like PCMCIA, PC Card, ExpressCard or Thunderbolt, which expose DMA to external components. An unsecured FireWire interface can be used to debug a machine whose operating system has crashed, and in some systems for remote-console operations. Windows natively supports this scenario of kernel debugging, although newer Windows Insider Preview builds no longer include the ability out of the box. On FreeBSD, the dcons driver provides both, using gdb as debugger. Under Linux, firescope and fireproxy exist.
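As an illustration of the Data/Strobe encoding described in the "Encoding scheme" section above, the following behavioural sketch generates a strobe line as data XOR an internal clock and shows the receiver recovering the clock as data XOR strobe, so that exactly one of the two lines changes each bit period. It is a toy model for illustration only, not driver or PHY code.

```python
# Data/Strobe (D/S) encoding sketch: strobe = data XOR clock; clock = data XOR strobe.

def ds_encode(bits):
    """Return (data_line, strobe_line) level sequences for a list of 0/1 bits."""
    clock = [i % 2 for i in range(len(bits))]          # internal alternating clock
    strobe_line = [b ^ c for b, c in zip(bits, clock)]
    return list(bits), strobe_line


def ds_recover_clock(data_line, strobe_line):
    """The receiver reconstructs the clock by XOR-ing the data and strobe lines."""
    return [d ^ s for d, s in zip(data_line, strobe_line)]


bits = [1, 1, 0, 1, 0, 0, 0, 1]
data_line, strobe_line = ds_encode(bits)
clock = ds_recover_clock(data_line, strobe_line)
print(data_line)
print(strobe_line)
print(clock)   # alternates every bit period, as a recovered clock should

# Property check: exactly one of the two lines toggles between consecutive bit periods.
changes = [(data_line[i] != data_line[i - 1]) + (strobe_line[i] != strobe_line[i - 1])
           for i in range(1, len(bits))]
print(changes)  # every entry is 1
```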
Technology
User interface
null
35034187
https://en.wikipedia.org/wiki/Film%20capacitor
Film capacitor
Film capacitors, plastic film capacitors, film dielectric capacitors, or polymer film capacitors, generically called film caps as well as power film capacitors, are electrical capacitors with an insulating plastic film as the dielectric, sometimes combined with paper as carrier of the electrodes. The dielectric films, depending on the desired dielectric strength, are drawn in a special process to an extremely thin thickness, and are then provided with electrodes. The electrodes of film capacitors may be metallized aluminum or zinc applied directly to the surface of the plastic film, or a separate metallic foil. Two of these conductive layers are wound into a cylinder-shaped winding, usually flattened to reduce mounting space requirements on a printed circuit board, or layered as multiple single layers stacked together, to form a capacitor body. Film capacitors, together with ceramic capacitors and electrolytic capacitors, are the most common capacitor types for use in electronic equipment, and are used in many AC and DC microelectronics and electronics circuits. A related component type is the power (film) capacitor. Although the materials and construction techniques used for large power film capacitors are very similar to those used for ordinary film capacitors, capacitors with high to very high power ratings for applications in power systems and electrical installations are often classified separately, for historical reasons. As modern electronic equipment gained the capacity to handle power levels that were previously the exclusive domain of "electrical power" components, the distinction between the "electronic" and "electrical" power ratings has become less distinct. In the past, the boundary between these two families was approximately at a reactive power of 200 volt-amperes, but modern power electronics can handle increasing power levels. Overview of construction and features Film capacitors are made out of two pieces of plastic film covered with metallic electrodes, wound into a cylindrical shaped winding, with terminals attached, and then encapsulated. In general, film capacitors are not polarized, so the two terminals are interchangeable. There are two different types of plastic film capacitors, made with two different electrode configurations: Film/foil capacitors or metal foil capacitors are made with two plastic films as the dielectric. Each is layered with a thin metal foil, usually aluminum, as the electrodes. Advantages of this construction type are easy electrical connection to the metal foil electrodes, and its ability to handle high current surges. Metallized film capacitors are made of two metallized films with plastic film as the dielectric. A very thin (~ 0.03 μm) vacuum-deposited aluminum metallization is applied to one or both sides to serve as electrodes. This configuration can have "self-healing" properties, in that dielectric breakdowns or short circuits between the electrodes do not necessarily lead to the destruction of the component. With this basic design, it is possible to make high quality products such as "zero defect" capacitors and to produce wound capacitors with larger capacitance values (up to 100 μF and larger) in smaller cases (high volumetric efficiency) compared to film/foil construction. However, a disadvantage of metallized construction is its limited current surge rating. A key advantage of modern film capacitor internal construction is direct contact to the electrodes on both ends of the winding. 
This contact keeps all current paths to the entire electrode very short. The setup behaves like a large number of individual capacitors connected in parallel, thus reducing the internal ohmic losses (ESR) and the parasitic inductance (ESL). The inherent geometry of film capacitor structure results in very low ohmic losses and a very low parasitic inductance, which makes them especially suitable for applications with very high surge currents (snubbers) and for AC power applications, or for applications at higher frequencies. Another feature of film capacitors is the possibility of choosing different film materials for the dielectric layer to select for desirable electrical characteristics, such as stability, wide temperature range, or ability to withstand very high voltages. Because of their low electrical losses and their nearly linear behavior over a very wide frequency range, polypropylene film capacitors are specified for stability Class 1 applications in resonant circuits, comparable only with ceramic capacitors. For simple high frequency filter circuits, polyester capacitors offer low-cost solutions with excellent long-term stability, allowing replacement of more expensive tantalum electrolytic capacitors. The film/foil variants of plastic film capacitors are especially capable of handling high and very high current surges. Typical capacitance values of smaller film capacitors used in electronics start around 100 picofarads and extend upwards to microfarads. Unique mechanical properties of plastic and paper films in some special configurations allow them to be used in capacitors of very large dimensions. The larger film capacitors are used as power capacitors in electrical power installations and plants, capable of withstanding very high power or very high applied voltages. The dielectric strength of these capacitors can reach into the four-digit voltage range. Internal structure The formula for the capacitance (C) of a plate capacitor is C = ε · A / d (ε stands for dielectric permittivity; A for electrode surface area; and d for the distance between the electrodes). According to the equation, both a thinner dielectric and a larger electrode area will increase the capacitance value, as will a dielectric material of higher permittivity. Example manufacturing process The following example describes a typical manufacturing process flow for wound metallized plastic film capacitors. Film stretching and metallization — To increase the capacitance value of the capacitor, the plastic film is drawn using a special extrusion process of bi-axial stretching in longitudinal and transverse directions, as thin as is technically possible and as allowed by the desired breakdown voltage. The thickness of these films can be as little as 0.6 μm. In a suitable evaporation system and under high vacuum conditions (about 10¹⁵ to 10¹⁹ molecules of air per cubic meter) the plastic film is metallized with aluminum or zinc. It is then wound onto a so-called "mother roll" with a width of about 1 meter. Film slitting — Next, the mother rolls are slit into small strips of plastic film in the required width according to the size of the capacitors being manufactured. Winding — Two films are rolled together into a cylindrical winding. The two metallized films that make up a capacitor are wound slightly offset from each other, so that by the arrangement of the electrodes one edge of the metallization on each end of the winding extends out laterally. 
Flattening — The winding is usually flattened into an oval shape by applying mechanical pressure. Because the cost of a printed circuit board is calculated per square millimeter, a smaller capacitor footprint reduces the overall cost of the circuit. Application of metallic contact layer ("schoopage") — The projecting end electrodes are covered with a liquefied contact metal (such as tin, zinc or aluminum), which is sprayed with compressed air on both lateral ends of the winding. This metallizing process is named schoopage after Swiss engineer Max Schoop, who invented a combustion spray application for tin and lead. Healing — The windings which are now electrically connected by the schoopage have to be "healed". This is done by applying a precisely calibrated voltage across the electrodes of the winding so that any existing defects will be "burned away" (see also "self-healing" below). Impregnation — For increased protection of the capacitor against environmental influences, especially moisture, the winding is impregnated with an insulating fluid, such as silicone oil. Attachment of terminals — The terminals of the capacitor are soldered or welded on the end metal contact layers of the schoopage. Coating — After attaching the terminals, the capacitor body is potted into an external casing, or is dipped into a protective coating. For lowest production costs some film capacitors can be used "naked", without further coating of the winding. Electrical final test — All capacitors (100%) should be tested for the most important electrical parameters, capacitance (C), dissipation factor (tan δ) and impedance (Z). The production of wound film/metal foil capacitors with metal foil instead of metallized films is done in a very similar way. As an alternative to the traditional wound construction of film capacitors, they can also be manufactured in a "stacked" configuration. For this version, the two metallized films representing the electrodes are wound on a much larger core with a diameter of more than 1 m. So-called multi-layer capacitors (MLP, Multilayer Polymer Capacitors) can be produced by sawing this large winding into many smaller single segments. The sawing causes defects on the collateral sides of the capacitors which are later burned out (self-healing) during the manufacturing process. Low-cost metallized plastic film capacitors for general purpose applications are produced in this manner. This technique is also used to produce capacitor "dice" for Surface Mount Device (SMD) packaged components. Self-healing of metallized film capacitors Metallized film capacitors have "self-healing" properties, which are not available from film/foil configurations. When sufficient voltage is applied, a point-defect short-circuit between the metallized electrodes vaporizes due to high arc temperature, since both the dielectric plastic material at the breakdown point and the metallized electrodes around the breakdown point are very thin (about 0.02 to 0.05 μm). The point-defect cause of the short-circuit is burned out, and the resulting vapor pressure blows the arc away, too. This process can complete in less than 10 μs, often without interrupting the useful operation of the afflicted capacitor. This property of self-healing allows the use of a single-layer winding of metallized films without any additional protection against defects, and thereby leads to a reduction in the amount of the physical space required to achieve a given performance specification. 
In other words, the so-called "volumetric efficiency" of the capacitor is increased. The self-healing capability of metallized films is used multiple times during the manufacturing process of metallized film capacitors. Typically, after slitting the metallized film to the desired width, any resulting defects can be burned out (healed) by applying a suitable voltage before winding. The same method is also used after the metallization of the contact surfaces ("schoopage") to remove any defects in the capacitor caused by the secondary metallization process. The "pinholes" in the metallization caused by the self-healing arcs reduce the capacitance of the capacitor very slightly. However, the magnitude of this reduction is quite low; even with several thousand defects to be burned out, this reduction usually is much smaller than 1% of the total capacitance of the capacitor. For larger film capacitors with very high standards for stability and long lifetime, such as snubber capacitors, the metallization can be made with a special fault isolation pattern. In the picture on the right hand side, such a metallization formed into a "T" pattern is shown. Each of these "T" patterns produces a deliberately narrowed cross-section in the conductive metallization. These restrictions work like microscopic fuses so that if a point-defect short-circuit between the electrodes occurs, the high current of the short only burns out the fuses around the fault. The affected sections are thus disconnected and isolated in a controlled manner, without any explosions surrounding a larger short-circuit arc. Therefore, the area affected is limited and the fault is gently controlled, significantly reducing internal damage to the capacitor, which can thus remain in service with only an infinitesimal reduction in capacitance. In field installations of electrical power distribution equipment, capacitor bank fault tolerance is often improved by connecting multiple capacitors in parallel, each protected with an internal or external fuse. Should an individual capacitor develop an internal short, the resulting fault current (augmented by capacitive discharge from neighboring capacitors) blows the fuse, thus isolating the failed capacitor from the remaining devices. This technique is analogous to the "T metallization" technique described above, but operating at a larger physical scale. More-complex series and parallel arrangements of capacitor banks are also used to allow continuity of service despite individual capacitor failures at this larger scale. Internal structure to increase voltage ratings The rated voltage of different film materials depends on factors such as the thickness of the film, the quality of the material (freedom from physical defects and chemical impurities), the ambient temperature, and frequency of operation, plus a safety margin against the breakdown voltage (dielectric strength). But to a first approximation, the voltage rating of a film capacitor depends primarily on the thickness of the plastic film. For example, with the minimum available film thickness of polyester film capacitors (about 0.7 μm), it is possible to produce capacitors with a rated voltage of 400 VDC. If higher voltages are needed, typically a thicker plastic film will be used. But the breakdown voltage for dielectric films is usually nonlinear. For thicknesses greater than about 5 mils, the breakdown voltage only increases approximately with the square-root of the film thickness. 
On the other hand, the capacitance decreases in inverse proportion to the film thickness. For reasons of availability, storage and existing processing capabilities, it is desirable to achieve higher breakdown voltages while using existing available film materials. This can be achieved by a one-sided partial metallization of the insulating films in such a manner that an internal series connection of capacitors is produced. By using this series connection technique, the total breakdown voltage of the capacitor can be multiplied by an arbitrary factor, but the total capacitance is also reduced by the same factor (a short numeric sketch of this trade-off is given at the end of this passage). The breakdown voltage can thus be increased by using one-sided partially metallized films, or alternatively by using double-sided metallized films. Double-sided metallized films can also be combined with internal series-connected capacitors by partial metallization. These multiple-technique designs are used especially for high-reliability applications with polypropylene films. Internal structure to increase surge ratings An important property of film capacitors is their ability to withstand high peak voltage or peak current surge pulses. This capability depends on all internal connections of the film capacitor withstanding the peak current loads up to the maximum specified temperature. The lateral contact layers (schoopage) with the electrodes can be a potential limitation of peak current carrying capacity. The electrode layers are wound slightly offset from each other, so that the edges of the electrodes can be contacted using a face contacting method ("schoopage") at the lateral end faces of the winding. This internal connection is ultimately made by multiple point-shaped contacts at the edge of the electrode, and can be modeled as a large number of individual capacitors all connected in parallel. The many individual resistance (ESR) and inductance (ESL) losses are connected in parallel, so that these total undesirable parasitic losses are minimized. However, ohmic contact resistance heating is generated when peak current flows through these individual microscopic contacting points, which are critical areas for the overall internal resistance of the capacitor. If the current gets too high, "hot spots" can develop and cause burning of the contact areas. A second limitation of the current-carrying capacity is caused by the ohmic bulk resistance of the electrodes themselves. For metallized film capacitors, which have layer thicknesses from 0.02 to 0.05 μm, the current-carrying capacity is limited by these thin layers. The surge current rating of film capacitors can be enhanced by various internal configurations. Because metallization is the cheapest way of producing electrodes, optimizing the shape of the electrodes is one way to minimize the internal resistance and to increase the current-carrying capacity. A slightly thicker metallization layer at the schoopage contact sides of the electrodes results in a lower overall contact resistance and increased surge current handling, without losing the self-healing properties throughout the remainder of the metallization. Another technique to increase the surge current rating for film capacitors is a double-sided metallization. This can double the peak current rating. This design also halves the total self-inductance of the capacitor, because in effect two inductors are connected in parallel, which allows faster pulses to pass less impeded (a higher so-called "dV/dt" rating). 
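To make the plate-capacitor relation C = ε·A/d and the series-connection trade-off just described more concrete, here is a minimal Python sketch. All numeric values (relative permittivity, electrode area, film thickness, per-section voltage) are illustrative assumptions rather than data for any real part.

```python
# Illustrative sketch of C = epsilon * A / d and of the internal series
# connection produced by partial metallization. All numbers are assumptions.
EPS_0 = 8.854e-12      # vacuum permittivity in F/m
eps_r = 2.2            # assumed relative permittivity (roughly polypropylene)
area = 0.5             # assumed total electrode area in m^2 (unwound film)
thickness = 5e-6       # assumed film thickness: 5 um

c_single = EPS_0 * eps_r * area / thickness
print(f"single-section capacitance: {c_single * 1e6:.2f} uF")

# n internal sections in series: usable voltage scales with n,
# total capacitance is divided by the same factor n.
v_section = 400        # assumed rated voltage of one section in volts
for n in (1, 2, 4):
    print(f"{n} section(s): ~{n * v_section} V, {c_single / n * 1e6:.2f} uF")
```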
The double-sided metallized film is electrostatically field-free because the electrodes have the same voltage potential on both sides of the film, and therefore does not contribute to the total capacitance of the capacitor. This film can therefore be made of a different and less expensive material. For example, a polypropylene film capacitor with double-sided metallization on a polyester film carrier makes the capacitor not only cheaper but also smaller, because the thinner polyester foil improves the volumetric efficiency of the capacitor. Film capacitors with a double-sided metallized film effectively have thicker electrodes for higher surge current handling, but still do retain their self-healing properties, in contrast to the film/foil capacitors. Styles of film capacitors Film capacitors for use in electronic equipment are packaged in the common and usual industry styles: axial, radial, and SMD. Traditional axial type packages are less used today, but are still specified for point-to-point wiring and some traditional through-hole printed circuit boards. The most common form factor is the radial type (single ended), with both terminals on one side of the capacitor body. To facilitate automated insertion, radial plastic film capacitors are commonly constructed with terminal spacings at standardized distances, starting with 2.5 mm pitch and increasing in 2.5 mm steps. Radial capacitors are available potted in plastic cases, or dipped in an epoxy resin to protect the capacitor body against environmental influences. Although the transient heat of reflow soldering induces high stress in the plastic film materials, film capacitors able to withstand such temperatures are available in surface-mounted device (SMD) packages. Historical development Before the introduction of plastic films, capacitors made by sandwiching a strip of wax-impregnated paper between strips of metal, and rolling the result into a cylinder–paper capacitors–were commonly used; their manufacture started in 1876, and they were used from the early 20th century as decoupling capacitors in telecommunications (telephony). With the development of plastic materials by organic chemists during the Second World War, the capacitor industry began to replace paper with thinner polymer films. One very early development in film capacitors was described in British Patent 587,953 in 1944. The introduction of plastics in plastic film capacitors was approximately in the following historic order: polystyrene (PS) in 1949, polyethylene terephthalate (PET/"polyester") and cellulose acetate (CA) in 1951, polycarbonate (PC/Lexan) in 1953, polytetrafluoroethylene (PTFE/Teflon) in 1954, polyparylene in 1954, polypropylene (PP) in 1954, polyethylene (PE) in 1958, and polyphenylene sulphide (PPS) in 1967. By the mid-1960s there was a wide range of different plastic film capacitors offered by many, mostly European and US manufacturers. German manufacturers such as WIMA, Roederstein, Siemens and Philips were trend-setters and leaders in a world market driven by consumer electronics. One of the great advantages of plastic films for capacitor fabrication is that plastic films have considerably fewer defects than paper sheets used in paper capacitors. This allows the manufacture of plastic film capacitors with only a single layer of plastic film, whereas paper capacitors need a double layer of paper. 
Plastic film capacitors were significantly smaller in physical size (better volumetric efficiency), with the same capacitance value and the same dielectric strength as comparable paper capacitors. Then-new plastic materials also showed further advantages compared with paper. Plastic is much less hygroscopic than paper, reducing the deleterious effects of imperfect sealing. Additionally, most plastics are subject to fewer chemical changes over long periods, providing long-term stability of their electrical parameters. Since around 1980, paper and metallized paper capacitors (MP capacitors) have almost completely been replaced by PET film capacitors for most low-power DC electronic applications. Paper is now used only in RFI suppression or motor run capacitors, or as a mixed dielectric combined with polypropylene films in large AC and DC capacitors for high-power applications. An early special type of plastic film capacitors were the cellulose acetate film capacitors, also called MKU capacitors. The polar insulating dielectric cellulose acetate was a synthetic resin that could be made for metallized capacitors in paint film thickness down to about 3 μm. A liquid layer of cellulose acetate was first applied to a paper carrier, then covered with wax, dried and then metallized. During winding of the capacitor body, the paper was removed from the metallized film. The remaining thin cellulose acetate layer had a dielectric breakdown of 63 V, enough for many of general purpose applications. The very small thickness of the dielectric decreased the overall dimensions of these capacitors compared to other film capacitors of the time. MKU film capacitors are no longer manufactured, because polyester film capacitors can now be produced in the smaller sizes that were the market niche of the MKU type. Film capacitors have become much smaller since the beginning of the technology. Through development of thinner plastic films, for example, the dimensions of metallized polyester film capacitors were decreased by a factor of approximately 3 to 4. The most important advantages of film capacitors are the stability of their electrical values over long durations, their reliability, and lower cost than some other types for the same applications. Especially for applications with high current pulse loads or high AC loads in electrical systems, heavy-duty film capacitors, here called "power capacitors", are available with dielectric ratings of several kilovolts. But the manufacture of film capacitors does have a critical dependency on the materials supply chain. Each of the plastic film materials used for film capacitors worldwide is produced by only two or three large suppliers. The reason for this is that the mass quantities required by the market for film caps are quite small compared to typical chemical company production runs. This leads to a great dependency of the capacitor manufacturers on relatively few chemical companies as raw material suppliers. For example, in the year 2000 Bayer AG stopped their production of polycarbonate films, due to unprofitably low sales volumes. Most of the producers of polycarbonate film capacitors had to quickly change their product offerings to another type of capacitor, and a lot of expensive testing approvals for new designs were required. As of 2012, only five plastic materials continued to be widely used in the capacitor industry as films for capacitors: PET, PEN, PP, PPS and PTFE. 
Other plastic materials are no longer in common use, either because they are no longer manufactured, or because they have been replaced with better materials. Even the long-manufactured polystyrene (PS) and polycarbonate (PC) film capacitors have been largely replaced by the previously mentioned film types, though at least one PC capacitor manufacturer retains the ability to make its own films from raw polycarbonate feedstock. The less-common plastic films are described briefly here, since they are still encountered in older designs, and are still available from some suppliers. From simple beginnings film capacitors developed into a very broad and highly specialized range of different types. By the end of the 20th century mass production of most film capacitors had shifted to the Far East. A few large companies still produce highly specialized film capacitors in Europe and in the US, for power and AC applications. Dielectric materials and their market share The following table identifies the most commonly used dielectric polymers for film capacitors. Also, different film materials can be mixed to produce capacitors with particular properties. The most used film material is polypropylene, with a market share of 50%, followed by polyester, with a 40% share. The remaining 10% share is accounted for by the other dielectric materials, including polyphenylene sulfide and paper, with roughly 3% each. Polycarbonate film capacitors are no longer manufactured because the dielectric material is no longer available. Characteristics of film materials for film capacitors The electrical characteristics, and the temperature and frequency behavior of film capacitors are essentially determined by the type of material that forms the dielectric of the capacitor. The following table lists the most important characteristics of the principal plastic film materials in use today. Characteristics of mixed film materials are not listed here. The figures in this table are extracted from specifications published by various manufacturers of film capacitors for industrial electronic applications. The large range of values for the dissipation factor includes both typical and maximum specifications from data sheets of the various manufacturers. Typical electrical values for power and large AC capacitors are not included in this table. Polypropylene (PP) film capacitors Polypropylene film capacitors have a dielectric made of the thermoplastic, non-polar, organic and partially crystalline polymer material polypropylene (PP), trade name Treofan, from the family of polyolefins. They are manufactured both as metallized wound and stacked versions, as well as film/foil types. Polypropylene film is the most-used dielectric film in industrial capacitors and also in power capacitor types. The polypropylene film material absorbs less moisture than polyester film and is therefore also suitable for "naked" designs without any coating or further packaging. However, the maximum temperature of 105 °C hinders the use of PP films in SMD packaging. The temperature and frequency dependencies of the electrical parameters of polypropylene film capacitors are very low. Polypropylene film capacitors have a nearly linear, negative temperature coefficient of capacitance, with a total capacitance change within ±2.5% over their temperature range. Therefore, polypropylene film capacitors are suitable for applications in Class 1 frequency-determining circuits, filters, oscillator circuits, audio circuits, and timers, as illustrated by the short sketch below. 
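As a rough, hypothetical illustration of why a small and predictable capacitance drift matters in such frequency-determining circuits, the following sketch computes how a capacitance change of a few percent shifts the resonant frequency f = 1/(2π√(LC)) of an LC circuit; the component values are arbitrary assumptions, not taken from any design.

```python
# Sketch: effect of a small capacitance drift on an LC resonant frequency,
# f = 1 / (2 * pi * sqrt(L * C)). Component values are assumptions.
from math import pi, sqrt

L = 100e-6                      # assumed inductance: 100 uH
C_nominal = 10e-9               # assumed capacitance: 10 nF

def resonant_frequency(l, c):
    return 1.0 / (2 * pi * sqrt(l * c))

f0 = resonant_frequency(L, C_nominal)
for drift in (-0.025, 0.0, +0.025):      # +/-2.5 % capacitance change
    f = resonant_frequency(L, C_nominal * (1 + drift))
    print(f"C drift {drift:+.1%}: f = {f / 1e3:.2f} kHz ({(f - f0) / f0:+.2%})")
```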
They are also useful for compensation of inductive coils in precision filter applications, and for high-frequency applications. In addition to the application class qualification for the film/foil version of PP film capacitors, the standard IEC/EN 60384-13 specifies three "stability classes". These stability classes specify the tolerance on temperature coefficients together with the permissible change of capacitance after defined tests. They are divided into different temperature coefficient grades (α) with associated tolerances and preferred values of permissible change of capacitance after mechanical, ambient (moisture) and life time tests. The table is not valid for capacitance values smaller than 50 pF. In addition, PP film capacitors have the lowest dielectric absorption, which makes them suitable for applications such as VCO timing capacitors, sample-and-hold applications, and audio circuits. They are available for these precision applications in very narrow capacitance tolerances. The dissipation factor of PP film capacitors is smaller than that of other film capacitors. Due to the low and very stable dissipation factor over a wide temperature and frequency range, even at very high frequencies, and their high dielectric strength of 650 V/μm, PP film capacitors can be used in metallized and in film/foil versions as capacitors for pulse applications, such as CRT-scan deflection circuits, or as so-called "snubber" capacitors, or in IGBT applications. In addition, polypropylene film capacitors are used in AC power applications, such as motor run capacitors or power-factor correction (PFC) capacitors. Polypropylene film capacitors are widely used for EMI suppression, including direct connection to the power supply mains. In this latter application, they must meet special testing and certification requirements concerning safety and non-flammability. Most power capacitors, the largest capacitors made, generally use polypropylene film as the dielectric. PP film capacitors are used for high-frequency high-power applications such as induction heating, for pulsed power energy discharge applications, and as AC capacitors for electrical distribution. The AC voltage ratings of these capacitors can range up to 400 kV. The relatively low permittivity of 2.2 is a slight disadvantage, and PP film capacitors tend to be somewhat physically larger than other film caps. The capacitor grade films are produced up to 20 μm in thickness with width of roll up to 140 mm. Rolls are carefully vacuum packed in pairs according to the specifications required for the capacitor. Polyester (PET) film capacitors Polyester film capacitors are film capacitors using a dielectric made of the thermoplastic polar polymer material polyethylene terephthalate (PET), trade names Hostaphan or Mylar, from the polyester family. They are manufactured both as metallized wound and stacked versions, as well as film/foil types. The polyester film adsorbs very little moisture, and this feature makes it suitable for "naked" designs without any further coating needed. They are the low-cost mass-produced capacitors in modern electronics, featuring relatively small dimensions with relatively high capacitance values. PET capacitors are mainly used as general purpose capacitors for DC applications, or for semi-critical circuits with operating temperatures up to 125  °C. The maximum temperature rating of 125 °C also allows SMD film capacitors to be made with PET films. 
The low cost of polyester and the relatively compact dimensions are the main reasons for the high prevalence of PET film capacitors in modern designs. The small physical dimensions of PET film capacitors result from a high relative permittivity of 3.3 which, combined with a relatively high dielectric strength, yields a relatively high volumetric efficiency. This advantage of compactness comes with some disadvantages. The temperature dependence of the capacitance of polyester film capacitors is relatively high compared to other film capacitors, ±5% over the entire temperature range. The frequency dependence of the capacitance is also higher than for other film capacitors, with a change of about −3% between 100 Hz and 100 kHz. In addition, the temperature and frequency dependence of the dissipation factor is higher for polyester film capacitors than for the other film capacitor types. Polyester film capacitors are mainly used for general purpose applications or semi-critical circuits with operating temperatures up to 125 °C. Polyethylene naphthalate (PEN) film capacitors Polyethylene naphthalate film capacitors are film capacitors using a dielectric made of the thermoplastic, biaxially oriented polymer material polyethylene naphthalate (PEN), trade names Kaladex, Teonex. They are produced only as metallized types. PEN, like PET, belongs to the polyester family, but has better stability at high temperatures. Therefore, PEN film capacitors are more suitable for high-temperature applications and for SMD packaging. The temperature and frequency dependence of the electrical characteristics for capacitance and dissipation factor of PEN film capacitors are similar to those of PET film capacitors. Because of the smaller relative permittivity and lower dielectric strength of the PEN polymer, PEN film capacitors are physically larger for a given capacitance and rated voltage value. In spite of this, PEN film capacitors are preferred over PET when the ambient temperature during operation of the capacitors is permanently above 125 °C. The special PEN "high voltage" (HV) dielectric offers excellent electrical properties during life tests at high voltages and high temperatures (175 °C). PEN capacitors are mainly used for non-critical filtering, coupling and decoupling in electronic circuits, when the temperature dependencies do not matter. Polyphenylene sulfide (PPS) film capacitors Polyphenylene sulfide film capacitors are film capacitors with a dielectric made of the thermoplastic, organic, and partially crystalline polymer material poly(p-phenylene sulfide) (PPS), trade name Torelina. They are only produced as metallized types. The temperature dependence of the capacitance of PPS film capacitors over the entire temperature range is very small (±1.5%) compared with other film capacitors. Also the frequency dependence of the capacitance of PPS film capacitors in the range from 100 Hz to 100 kHz is ±0.5%, very low compared with other film capacitors. The dissipation factor of PPS film capacitors is quite small, and the temperature and frequency dependence of the dissipation factor is very stable over a wide range. Only at temperatures above 100 °C does the dissipation factor increase to larger values. The dielectric absorption performance is excellent, behind only PTFE and PS dielectric capacitors. Polyphenylene sulfide film capacitors are well-suited for applications in frequency-determining circuits and for high-temperature applications. 
Because of their good electrical properties, PPS film capacitors are an ideal replacement for polycarbonate film capacitors, whose production has been largely discontinued since 2000. In addition to their excellent electrical properties, PPS film capacitors can withstand temperatures up to 270 °C without damage to the film quality, so that PPS film capacitors are suitable for surface mount devices (SMD), and can tolerate the increased reflow soldering temperatures for lead-free soldering mandated by the RoHS 2002/95/EC directive. The cost of a PPS film capacitor is usually higher than that of a PP film capacitor. Polytetrafluoroethylene (PTFE) film capacitors Polytetrafluoroethylene film capacitors are made with a dielectric of the synthetic fluoropolymer polytetrafluoroethylene (PTFE), a hydrophobic solid fluorocarbon. They are manufactured both as metallized and as film/foil types, although poor adherence to the film makes metallization difficult. PTFE is often known by the DuPont trademark Teflon. Polytetrafluoroethylene film capacitors feature a very high temperature resistance up to 200 °C, and even up to 260 °C with a voltage derating. The dissipation factor of about 2·10⁻⁴ is quite small. The change in capacitance over the entire temperature range, +1% to −3%, is a little higher than for polypropylene film capacitors. However, since the smallest available film thickness for PTFE films is 5.5 μm, approximately twice the thickness of polypropylene films, PTFE film capacitors are physically bulkier than PP film capacitors. In addition, the film thickness is not uniform across the surface, which makes Teflon films difficult to produce. Therefore, the number of manufacturers of PTFE film capacitors is limited. PTFE film capacitors are available with rated voltages of 100 V to 630 V DC. They are used in military equipment, in aerospace, in geological probes, in burn-in circuits and in high-quality audio circuits. The main producers of PTFE film capacitors are located in the USA. Polystyrene (PS) film capacitors Polystyrene film capacitors, sometimes known as "Styroflex" capacitors, were well known for many years as inexpensive film capacitors for general purpose applications in which high capacitance stability, a low dissipation factor and low leakage currents were needed. But because the film could not be made thinner than 10 μm, and the maximum temperature rating reached only 85 °C, PS film capacitors have mostly been replaced by polyester film capacitors as of 2012. However, some manufacturers may still offer PS film capacitors in their production program, backed by large stocks of polystyrene film in their warehouses. Polystyrene capacitors have an important advantage: they have a temperature coefficient near zero and so are useful in tuned circuits where drift with temperature must be avoided. Polycarbonate (PC) film capacitors Polycarbonate film capacitors are film capacitors with a dielectric made of polycarbonate (PC), a polymer of the esters of carbonic acid and dihydric alcohols, sometimes given the trademarked name Makrofol. They are manufactured as wound metallized as well as film/foil types. 
These capacitors have a low dissipation factor and because of their relatively temperature-independent electrical properties of about ±80 ppm over the entire temperature range, they had many applications for low-loss and temperature-stable applications such as timing circuits, precision analog circuits, and signal filters in applications with tough environmental conditions. PC film capacitors had been manufactured since the mid-1950s, but the main supplier of polycarbonate film for capacitors had ceased the production of this polymer in film form as of the year 2000. As a result, most of the manufacturers of polycarbonate film capacitors worldwide had to stop their production of PC film capacitors and changed to polypropylene film capacitors instead. Most of the former PC capacitor applications have found satisfactory substitutes with PP film capacitors. However, there are exceptions. The manufacturer Electronic Concepts Inc, (New Jersey, US) claims to be an in-house producer of its own polycarbonate film, and continues to produce PC film capacitors. In addition to this manufacturer of polycarbonate film capacitors, there are other mostly US-based specialty manufacturers. Paper (film) capacitors (MP) and mixed film capacitors Historically, the first "film" type capacitors were paper capacitors of film/foil configuration. They were fairly bulky, and not particularly reliable. As of 2012, paper is used in the form of metallized paper for MP capacitors with self-healing properties used for EMI suppression. Paper is also used as an insulating mechanical carrier of metallized-layer electrodes, and combined with polypropylene dielectric, mostly in power capacitors rated for high current AC and high voltage DC applications. Paper as carrier of the electrodes has the advantages of lower cost and somewhat better adherence of metallization to paper than to polymer films. But paper alone as dielectric in capacitors is not reliable enough for the growing quality requirements of modern applications. The combination of paper together with polypropylene film dielectric is a cost-effective way to improve quality and performance. The better adhering of metallization on paper is advantageous especially at high current pulse loads, and the polypropylene film dielectric increases the voltage rating. However, the roughness of a metallized paper surface can cause many small air-filled bubbles between the dielectric and the metallization, decreasing the breakdown voltage of the capacitor. For this reason, larger film capacitors or power capacitors using paper as carrier of the electrodes usually are filled with an insulating oil or gas, to displace the air bubbles for a higher breakdown voltage. However, since almost every major manufacturer offers its own proprietary film capacitors with mixed film materials, it is difficult to give a universal and general overview of the specific properties of mixed film capacitors. Other plastic film capacitors Other plastic materials than those described above may be used as the dielectric in film capacitors. Thermoplastic polymers such as Polyimide (PI), Polyamide (PA, better known as Nylon or Perlon), Polyvinylidene fluoride (PVDF), Siloxane, Polysulfone (PEx) and Aromatic Polyester (FPE) are described in the technical literature as possible dielectric films for capacitors. The primary reason for considering new film materials for capacitors is the relative low permittivity of commonly used materials. 
With a higher permittivity, film capacitors could be made even smaller, an advantage in the market for more-compact portable electronic devices. In 1984, a new film capacitor technology that makes use of vacuum-deposited, electron-beam cross-linked acrylate materials as the dielectric was announced in a patent reported in the trade press. But as of 2012, only one manufacturer markets a specific acrylate SMD film capacitor, as an X7R MLCC replacement. Polyimide (PI), a thermoplastic polymer of imide monomers, has been proposed for film capacitors called polyimide, PI or Kapton capacitors. Kapton is the trade name of polyimide from DuPont. This material is of interest because of its high temperature resistance of up to 400 °C. But as of 2012, no specific series of PI film capacitors had been announced. One film capacitor offered as a "Kapton capacitor CL11" by the reseller "dhgate" is in fact listed as a polypropylene film capacitor. Another misleadingly named Kapton capacitor can be found at YEC, a Chinese producer of capacitors; the "Kapton capacitors" announced there are in reality supercapacitors, a completely different technology. Perhaps the Kapton film in these supercapacitors is used as a separator between the electrodes of the double-layer capacitor. Kapton films are often offered as an adhesive film for the outer insulation of capacitor packages. Polyvinylidene fluoride (PVDF) has a very high permittivity of 18 to 20, which allows large amounts of energy to be stored in a small space (volumetric efficiency). However, it has a Curie temperature of only 60 °C, which limits its usability. Film capacitors with PVDF are described for one very special application, in portable defibrillators. As of 2012, for all the other mentioned materials such as PA, PVDF, Siloxane, PEx or FPE, no specific series of film capacitors with these plastic films are known to be produced in commercial quantities. Standardization of film capacitors The standardization for all electrical and electronic components and related technologies follows the rules given by the International Electrotechnical Commission (IEC), a non-profit, non-governmental international standards organization. The IEC standards are harmonized with the European EN standards. The definition of the characteristics and the procedure of the test methods for capacitors for use in electronic equipment are set out in the generic specification: IEC/EN 60384–1, Fixed capacitors for use in electronic equipment - Part 1: Generic specification The tests and requirements to be met by film capacitors for use in electronic equipment for approval as standardized types are set out in the following sectional specifications: The standardization of power capacitors is strongly focused on rules for the safety of personnel and equipment, given by the local regulating authority. The concepts and definitions to guarantee safe application of power capacitors are published in the following standards: IEC/EN 61071; Capacitors for power electronics IEC/EN 60252-1; AC motor capacitors. General. Performance, testing and rating. Safety requirements. Guidance for installation and operation IEC/EN 60110-1; Power capacitors for induction heating installations - General IEC/EN 60567; Oil-filled electrical equipment - Sampling of gases and of oil for analysis of free and dissolved gases – Guidance IEC/EN 60143-1; Series capacitors for power systems. General IEC/EN 60143-2; Series capacitors for power systems. 
Protective equipment for series capacitor banks IEC/EN 60143–3; Series capacitors for power systems - Internal fuses IEC/EN 60252-2; AC motor capacitors. Motor start capacitors IEC/EN 60831-1; Shunt power capacitors of the self-healing type for a.c. systems having a rated voltage up to and including 1kV. General. Performance, testing and rating. Safety requirements. Guide for installation and operation IEC/EN 60831-2; Shunt power capacitors of the self-healing type for a.c. systems having a rated voltage up to and including 1000 V. Ageing test, self-healing test and destruction test IEC/EN 60871-1; Shunt capacitors for a.c. power systems having a rated voltage above 1000 V. General IEC/EN 60931-1; Shunt power capacitors of the non-self-healing type for a.c. systems having a rated voltage up to and including 1 kV - General - Performance, testing and rating - Safety requirements - Guide for installation and operation IEC/EN 60931-2; Shunt power capacitors of the non-self-healing type for a.c. systems having a rated voltage up to and including 1000 V. Ageing test and destruction test IEC 60143-4; Series capacitors for power systems. Thyristor controlled series capacitors IEC/EN 61921; Power capacitors. Low-voltage power factor correction banks IEC/EN 60931-3; Shunt power capacitors of the non-self-healing type for a.c. systems having a rated voltage up to and including 1000 V. Internal fuses IEC/EN 61881-1; Railway applications. Rolling stock equipment. Capacitors for power electronics. Paper/plastic film capacitors IEC 62146-1; Grading capacitors for high-voltage alternating current circuit-breakers The text above is directly extracted from the relevant IEC standards, which use the abbreviations "d.c." for Direct Current (DC) and "a.c." for Alternating Current (AC). Film capacitors type abbreviations During the early development of film capacitors, some large manufacturers have tried to standardize the names of different film materials. This resulted in a former German standard (DIN 41 379), since withdrawn, in which an abbreviated code for each material and configuration type were prescribed. Many manufacturers continue to use these de facto standard abbreviations. However, with the relocation of mass-market business in the passive components industry, which includes film capacitors, many of the new manufacturers in the Far East use their own abbreviations that differ from the previously established abbreviations. Electrical characteristics The manufacturers Wima, Vishay and TDK Epcos specify the electrical parameters of their film capacitors in a general technical information sheet. Series-equivalent circuit The electrical characteristics of capacitors are harmonized by the international generic specification IEC/EN 60384–1. In this standard, the electrical characteristics of capacitors are described by an idealized series-equivalent circuit with electrical components which model all ohmic losses, capacitive and inductive parameters of a film capacitor: C, the capacitance of the capacitor, Risol, the insulation resistance of the dielectric, RESR, the equivalent series resistance which summarizes all ohmic losses of the capacitor, usually abbreviated as "ESR". LESL, the equivalent series inductance which is the effective self-inductance of the capacitor, usually abbreviated as "ESL". 
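The series-equivalent circuit lends itself to a small numeric sketch. The following Python code evaluates the reactances, the impedance magnitude, the dissipation factor and the self-resonant frequency for assumed component values (it anticipates the relations given in the next paragraphs); none of the numbers come from a real data sheet.

```python
# Sketch of the series-equivalent circuit of a film capacitor:
# C in series with ESR and ESL (the insulation resistance is ignored here,
# since it only matters at very low frequencies). All values are assumptions.
from math import pi, sqrt

C = 100e-9        # assumed capacitance: 100 nF
ESR = 0.02        # assumed equivalent series resistance: 20 mOhm
ESL = 10e-9       # assumed equivalent series inductance: 10 nH

def impedance(f):
    """Impedance magnitude |Z| of the series-equivalent circuit at frequency f."""
    w = 2 * pi * f
    x_c = 1.0 / (w * C)       # capacitive reactance
    x_l = w * ESL             # inductive reactance
    return sqrt(ESR**2 + (x_l - x_c)**2)

def dissipation_factor(f):
    """tan(delta) ~ ESR * w * C, valid well below the self-resonant frequency."""
    return ESR * 2 * pi * f * C

f_res = 1.0 / (2 * pi * sqrt(ESL * C))   # self-resonant frequency
print(f"self-resonant frequency: {f_res / 1e6:.2f} MHz")
for f in (1e3, 100e3, f_res):
    print(f"f = {f / 1e6:.3f} MHz: |Z| = {impedance(f):.4f} ohm, "
          f"tan d = {dissipation_factor(f):.5f}")
```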
The two reactances have the following relations to the angular frequency ω: capacitive reactance: XC = 1/(ωC); inductive reactance: XL = ω·LESL. Capacitance standard values and tolerances The rated capacitance is the value for which the capacitor has been designed. The actual capacitance of film capacitors depends on the measuring frequency and the ambient temperature. Standardized conditions for film capacitors are a measuring frequency of 1 kHz and a temperature of 20 °C. The percentage of allowed deviation of the capacitance from the rated value is called the capacitance tolerance. The actual capacitance value of a capacitor should be within the tolerance limits, or the capacitor is out of specification. Film capacitors are available in different tolerance series, whose values are specified in the E series standards of IEC/EN 60063. For abbreviated marking in tight spaces, a letter code for each tolerance is specified in IEC/EN 60062: E96 series, tolerance ±1%, letter code "F"; E48 series, tolerance ±2%, letter code "G"; E24 series, tolerance ±5%, letter code "J"; E12 series, tolerance ±10%, letter code "K"; E6 series, tolerance ±20%, letter code "M". The required capacitance tolerance is determined by the particular application. The narrow tolerances of E24 to E96 are used for high-quality circuits like precision oscillators and timers. On the other hand, for general applications such as non-critical filtering or coupling circuits, the tolerance series E12 or E6 are sufficient. Frequency and temperature changes in capacitance The different film materials have temperature- and frequency-dependent differences in their characteristics. The graphs below show typical temperature and frequency behavior of the capacitance for different film materials. Voltage ratings DC voltage The rated DC voltage VR is the maximum DC voltage, or peak value of pulse voltage, or the sum of an applied DC voltage and the peak value of a superimposed AC voltage, which may be applied continuously to a capacitor at any temperature between the category temperature and the rated temperature. The breakdown voltage of film capacitors decreases with increasing temperature. When using film capacitors at temperatures between the upper rated temperature and the upper category temperature, only a temperature-derated category voltage VC is allowed. The derating factors apply to both DC and AC voltages. Some manufacturers may have quite different derating curves for their capacitors compared with the generic curves given in the picture at the right. The allowable peak value of a superimposed alternating voltage, called the "rated ripple voltage", is frequency-dependent. The applicable standards specify the following conditions, regardless of the type of dielectric film. AC voltage and current Film capacitors are not polarized and are suitable for handling an alternating voltage. Because the rated AC voltage is specified as an RMS value, the nominal AC voltage must be smaller than the rated DC voltage. Typical figures for DC voltages and nominally related AC voltages are given in the table below: An AC voltage will cause an AC current (with an applied DC bias this is also called "ripple current"), with cyclic charging and discharging of the capacitor causing oscillating motion of the electric dipoles in the dielectric. 
This results in dielectric losses, which are the principal component of the ESR of film capacitors, and which produce heat from the alternating current. The maximum RMS alternating voltage at a given frequency which may be applied continuously to a capacitor (up to the rated temperature) is defined as the rated AC voltage UR AC. Rated AC voltages usually are specified at the mains frequency of a region (50 or 60 Hz). The rated AC voltage is generally calculated so that an internal temperature rise of 8 to 10 K sets the allowed limit for film capacitors. These losses increase with increasing frequency, and manufacturers specify curves for derating maximum AC voltages permissible at higher frequencies. Capacitors, including film types, designed for continuous operation at low-frequency (50 or 60 Hz) mains voltage, typically between line and neutral or line and ground for interference suppression, are required to meet standard safety ratings; e.g., X2 is designed to operate between line and neutral at 200-240 VAC, and Y2 between line and ground. These types are designed for reliability, and, in case of failure, to fail safely (open-, rather than short-circuit). A non-catastrophic failure mode in this application is due to the corona effect: the air enclosed in the winding element becomes ionized and consequently more conductive, allowing partial discharges on the metallized surface of the film, which causes local vaporization of the metallization. This occurs repeatedly, and can cause significant loss of capacitance (C-decay) over one or two years. International standard IEC60384-14 specifies a limit of 10% C-decay per 1,000 test hours (41 days of permanent connection). Some capacitors are designed to minimise this effect. One method, at the expense of increased size and cost, is for a capacitor operating at 200-240 VAC to consist internally of two parts in series, each at a voltage of 100-120 VAC, insufficient to cause ionisation. Manufacturers also adopt cheaper and smaller construction intended to avoid corona effect without series-connected sections, for example minimising enclosed air. Surge ratings For metallized film capacitors, the maximum possible pulse voltage is limited because of the limited current-carrying capacity between contact of the electrodes and the electrodes themselves. The rated pulse voltage Vp is the peak value of the pulse voltage which may be applied continuously to a capacitor at the rated temperature and at a given frequency. The pulse voltage capacity is given as pulse voltage rise time dV/dT in V/μs and also implies the maximum pulse current capacity. The values on the pulse rise time refer to the rated voltage. For lower operating voltages, the permissible pulse rise times may decrease. The permissible pulse load capacity of a film capacitor is generally calculated so that an internal temperature rise of 8 to 10 K is acceptable. The maximum permissible pulse rise time of film capacitors which may be applied within the rated temperature range is specified in the relevant data sheets. Exceeding the maximum specified pulse load can lead to the destruction of the capacitor. For each individual application, the pulse load must be calculated. A general rule for calculating the power handling of film capacitors is not available because of vendor-related differences stemming from the internal construction details of different capacitors. Therefore, the calculation procedure of the manufacturer WIMA is referenced as an example of the generally applicable principles. 
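The link between a specified voltage slew rate and the resulting peak current is simply i = C·dV/dt. A minimal sketch with assumed figures (the capacitance and dV/dt values are illustrative, not the ratings of any particular part):

```python
# Sketch: peak current drawn by a capacitor for a given voltage slew rate,
# i = C * dV/dt. Capacitance and dV/dt figures are assumptions.
C = 0.1e-6                      # assumed capacitance: 0.1 uF

for dv_dt_v_per_us in (50, 500, 2000):          # assumed dV/dt values in V/us
    dv_dt = dv_dt_v_per_us * 1e6                # convert to V/s
    i_peak = C * dv_dt
    print(f"dV/dt = {dv_dt_v_per_us:5d} V/us -> peak current = {i_peak:.1f} A")
```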
Impedance, dissipation factor, and ESR Impedance The impedance is the complex ratio of the voltage to the current in an alternating current (AC) circuit at a given frequency. In data sheets of film capacitors, only the magnitude of the impedance |Z| is specified, and simply written as "Z". The phase information of the impedance is given separately by the dissipation factor tan δ. If the series-equivalent values ESR, LESL and C of a capacitor and the frequency are known, then the impedance can be calculated from these values. The magnitude of the impedance is given by the geometric (complex) addition of the real and the reactive resistances: |Z| = √(ESR² + (XC − XL)²). In the special case of resonance, in which both reactances XC and XL have the same value (XC = XL), the impedance is determined only by the ESR. The impedance is a measure of the ability of the capacitor to pass alternating currents. The lower the impedance, the more easily alternating currents can be passed through the capacitor. Film capacitors are characterized by very small impedance values and very high resonant frequencies, especially when compared to electrolytic capacitors. Dissipation factor (tan δ) and ESR The equivalent series resistance (ESR) summarizes all resistive losses of the capacitor. These are the supply line resistances, the contact resistance of the electrode contact, the line resistance of the electrodes and the dielectric losses in the dielectric film. The largest share of these losses is usually the dissipative losses in the dielectric. For film capacitors, the dissipation factor tan δ is specified in the relevant data sheets instead of the ESR. The dissipation factor is the tangent of the phase angle between the net reactance (the capacitive reactance XC minus the inductive reactance XL) and the ESR. If the inductance ESL is small, the dissipation factor can be approximated as tan δ ≈ ESR·ωC. The reason for using the dissipation factor instead of the ESR is that film capacitors were originally used mainly in frequency-determining resonant circuits. The reciprocal value of the dissipation factor is defined as the quality factor "Q". For resonant circuits, a high Q value is a mark of the quality of the resonance. The dissipation factor for film/foil capacitors is lower than for metallized film capacitors, due to the lower contact resistance to the foil electrode compared to the metallized film electrode. The dissipation factor of film capacitors is frequency-, temperature- and time-dependent. While the frequency- and temperature-dependencies arise directly from physical laws, the time dependence is related to aging and moisture adsorption processes. Insulation resistance A charged capacitor discharges over time through its own internal insulation resistance Risol. Multiplying the insulation resistance by the capacitance of the capacitor gives a time constant which is called the "self-discharge time constant" (τisol = Risol·C). This is a measure of the quality of the dielectric with respect to its insulating properties, and is dimensioned in seconds. Usual values for film capacitors range from 1000 s up to 1,000,000 s. These time constants are always relevant if capacitors are used as time-determining elements (such as timing delays), or for storing a voltage value as in sample-and-hold circuits or integrators. 
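The self-discharge time constant described above translates directly into an exponential voltage decay. A short sketch, with the insulation resistance, capacitance and initial voltage chosen as arbitrary assumptions:

```python
# Sketch: self-discharge of a charged capacitor through its own insulation
# resistance, V(t) = V0 * exp(-t / (Risol * C)). Values are assumptions.
from math import exp

C = 1e-6             # assumed capacitance: 1 uF
R_isol = 100e9       # assumed insulation resistance: 100 GOhm
V0 = 10.0            # assumed initial voltage in volts

tau = R_isol * C     # self-discharge time constant in seconds
print(f"self-discharge time constant: {tau:.0f} s ({tau / 3600:.1f} h)")

for t_hours in (1, 24, 24 * 7):
    v = V0 * exp(-(t_hours * 3600) / tau)
    print(f"after {t_hours:4d} h: {v:.2f} V remaining")
```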
Dielectric absorption (soakage) Dielectric absorption is the name given to the effect by which a capacitor that has been charged for a long time discharges only incompletely when briefly discharged. It is a form of hysteresis in capacitor voltages. Although an ideal capacitor would remain at zero volts after being discharged, real capacitors will develop a small residual voltage, a phenomenon that is also called "soakage". The following table lists typical values of the dielectric absorption for common film materials. Polypropylene film capacitors have the lowest voltage values generated by dielectric absorption. Therefore, they are ideally suited for precision analog circuits, or for integrators and sample-and-hold circuits. Aging Film capacitors are subject to certain very small but measurable aging processes. The primary degradation process is a small amount of plastic film shrinkage, which occurs mainly during the soldering process, but also during operation at high ambient temperatures or at high current load. Additionally, some moisture absorption in the windings of the capacitor may take place under operating conditions in humid climates. Thermal stress during the soldering process can change the capacitance value of leaded film capacitors by 1% to 5% from the initial value, for example. For surface mount devices, the soldering process may change the capacitance value by as much as 10%. The dissipation factor and insulation resistance of film capacitors may also be changed by the above-described external factors, particularly by moisture absorption in high-humidity climates. The manufacturers of film capacitors can slow the aging process caused by moisture absorption by using better encapsulation. This more expensive fabrication processing may account for the fact that film capacitors with the same basic body design can be supplied in different lifetime stability ratings called performance grades. Performance grade 1 capacitors are "long-life" capacitors; performance grade 2 capacitors are "general purpose" capacitors. The specifications behind these grades are defined in the relevant standards of the IEC/EN 60384-x series (see standards). The permissible changes of capacitance, dissipation factor and insulation resistance vary with the film material, and are specified in the relevant data sheet. Variations over the course of time which exceed the specified values are considered a degradation failure. Failure rate and life expectancy Film capacitors generally are very reliable components with very low failure rates, with predicted life expectancies of decades under normal conditions. The life expectancy of film capacitors is usually specified in terms of applied voltage, current load, and temperature. Markings Color-coded film capacitors have been produced, but it is usual to print more detailed information on the body. According to the IEC standard 60384-1, capacitors should be marked with imprints of the following information: the rated capacitance; the rated voltage; the tolerance; the category voltage; the year and month (or week) of manufacture; the manufacturer's name or trade mark; the climatic category; and the manufacturer's type designation. Mains-voltage RFI suppression capacitors must also be marked with the appropriate safety agency approvals. Capacitance, tolerance, and date of manufacture can be marked with short codes. Capacitance is often indicated with the sub-multiple indicator replacing an easily erased decimal point, as: n47 = 0.47 nF, 4n7 = 4.7 nF, 47n = 47 nF (a small decoding sketch follows below). 
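The short capacitance codes and the tolerance letter codes mentioned earlier can be decoded mechanically. The following sketch follows only the examples given in the text and should be read as illustrative, not as a complete implementation of the marking standards:

```python
# Sketch: decoding short capacitance markings such as "n47", "4n7" or "47n",
# where the sub-multiple letter replaces the decimal point, plus the
# tolerance letter codes listed earlier. Illustrative only.
MULTIPLIERS = {"p": 1e-12, "n": 1e-9, "u": 1e-6}
TOLERANCES = {"F": 1, "G": 2, "J": 5, "K": 10, "M": 20}   # in percent

def decode_capacitance(code):
    """Decode e.g. 'n47' -> 0.47 nF, '4n7' -> 4.7 nF, '47n' -> 47 nF."""
    for letter, mult in MULTIPLIERS.items():
        if letter in code:
            digits = code.replace(letter, ".")   # the letter stands for the decimal point
            if digits.startswith("."):           # 'n47' -> '.47' -> 0.47
                digits = "0" + digits
            if digits.endswith("."):             # '47n' -> '47.' -> 47
                digits = digits + "0"
            return float(digits) * mult
    raise ValueError(f"no sub-multiple letter found in {code!r}")

for marking in ("n47", "4n7", "47n", "2u2"):
    farads = decode_capacitance(marking)
    print(f"{marking:>4} = {farads * 1e9:g} nF")
print("tolerance letter 'K' means +/-", TOLERANCES["K"], "%")
```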
Applications In comparison with the other two main capacitor technologies, ceramic and electrolytic capacitors, film capacitors have properties that make them particularly well suited for many general-purpose and industrial applications in electronic equipment. Two main advantages of film capacitors are very low ESR and ESL values. Film capacitors are physically larger and more expensive than aluminum electrolytic capacitors (e-caps), but have much higher surge and pulse load capabilities. As film capacitors are not polarized, they can be used in AC voltage applications without DC bias, and they have much more stable electrical parameters. Polypropylene film capacitors have relatively little temperature dependence of capacitance and dissipation factor, so that they can be applied in frequency-stable Class 1 applications, replacing Class 1 ceramic capacitors. Electronic circuits Polypropylene film capacitors meet the criteria for stability Class 1 capacitors, and have low electrical losses and nearly linear behavior over a very wide temperature and frequency range. They are used for oscillators and resonant circuits; for electronic filter applications with a high quality factor (Q), such as high-pass filters, low-pass filters and band-pass filters, as well as for tuning circuits; for audio crossovers in loudspeakers; in sample-and-hold A/D converters; and in peak voltage detectors. For timing applications, such as in signal lights or in pulse-width generators that control the speed of motors, tight capacitance tolerances are required; PP film capacitors are also well suited here because of their very low leakage current. Class 1 PP film capacitors are able to handle higher current than stability Class 1 ceramic capacitors. The precise negative temperature characteristics of polypropylene make PP capacitors useful to compensate for temperature-induced changes in other components. A fast pulse rise time rating, high dielectric strength (breakdown voltage), and low dissipation factor (high Q) are the reasons for the use of polypropylene film capacitors in fly-back tuning and S-correction applications in older CRT television and display equipment. For similar reasons, PP film capacitors, often in versions with special terminals for high peak currents, work well as snubbers for power electronic circuits. Because of their high pulse surge capabilities, PP capacitors are suitable for use in applications where high-current pulses are needed, such as in time-domain reflectometer (TDR) cable fault locators, in welding machines, in defibrillators, in high-power pulsed lasers, or to generate high-energy light or X-ray flashes. In addition, polypropylene film capacitors are used in many AC applications such as phase shifters for PFC in fluorescent lamps, or as motor-run capacitors. For simple higher-frequency filter circuits, or in voltage regulator or voltage doubler circuits, low-cost metallized polyester film capacitors provide long-term stability, and can replace more expensive tantalum capacitors. Because capacitors pass AC signals but block DC, film capacitors with their high insulation resistance and low self-inductance are well suited as signal coupling capacitors for higher frequencies (a short sketch follows below). For similar reasons, film capacitors are widely used as decoupling capacitors to suppress noise or transients. Film capacitors made with lower-cost plastics are used for non-critical applications which do not require ultra-stable characteristics over a wide temperature range, such as for smoothing or AC signal coupling. Polyester film (KT) capacitors of the "stacked" type are often used now instead of polystyrene capacitors (KS), which have become less available. 
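For the signal-coupling use just mentioned, the coupling capacitor and the input resistance of the following stage form a first-order high-pass filter with cut-off frequency fc = 1/(2πRC). A brief sketch with assumed values (the load resistance and the candidate capacitances are illustrative only):

```python
# Sketch: choosing a coupling (DC-blocking) capacitor. Together with the load
# resistance it forms a high-pass filter, f_c = 1 / (2 * pi * R * C).
# The resistance and capacitance values are assumptions.
from math import pi

R_load = 10e3                        # assumed input resistance: 10 kOhm
for C in (10e-9, 100e-9, 1e-6):      # candidate film capacitor values
    f_c = 1.0 / (2 * pi * R_load * C)
    print(f"C = {C * 1e9:7.0f} nF -> lower cut-off ~ {f_c:8.1f} Hz")
```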
Metallized film capacitors have self-healing properties, and small imperfections do not lead to the destruction of the component, which makes these capacitors suitable for RFI/EMI suppression capacitors with fault protection against electrical shock and flame propagation, although repeated corona discharges which self-heal can lead to significant loss of capacitance. PTFE film capacitors are used in applications that must withstand extremely high temperatures. such as in military equipment, in aerospace, in geological probes, or burn-in circuits. Safety and EMI/RFI suppression film capacitors Electromagnetic interference (EMI) or radio-frequency interference (RFI) suppression film capacitors, also known as "AC line filter safety capacitors" or "Safety capacitors", are used as crucial components to reduce or suppress electrical noise caused by the operation of electrical or electronic equipment, while also providing limited protection against electrical shocks. A suppression capacitor is an effective interference reduction component because its electrical impedance decreases with increasing frequency, so that at higher frequencies they short circuit electrical noise and transients between the lines, or to ground. They therefore prevent equipment and machinery (including motors, inverters, and electronic ballasts, as well as solid-state relay snubbers and spark quenchers) from sending and receiving electromagnetic and radio frequency interference as well as transients in across-the-line (X capacitors) and line-to-ground (Y capacitors) connections. X capacitors effectively absorb symmetrical, balanced, or differential interference. On the other hand, Y capacitors are connected in a line bypass between a line phase and a point of zero potential, to absorb asymmetrical, unbalanced, or common-mode interference. EMI/RFI suppression capacitors are designed and installed so that remaining interference or electrical noise does not exceed the limits of EMC directive EN 50081 Suppression components are connected directly to mains voltage semi-permanently for 10 to 20 years or more, and are therefore exposed to overvoltages and transients which could damage the capacitors. For this reason, suppression capacitors must comply with the safety and inflammability requirements of international safety standards such as the following: Europe: EN 60384-14, USA: UL 60384-14, UL 1283 Canada: CAN/CSA-E60384-14, CSA C22.2, No. 8 China: CQC (GB/T 6346.14-2015 or IEC 60384-14) RFI capacitors which fulfill all specified requirements are imprinted with the certification mark of various national safety standards agencies. For power line applications, special requirements are placed on the inflammability of the coating and the epoxy resin impregnating or coating the capacitor body. To receive safety approvals, X and Y powerline-rated capacitors are destructively tested to the point of failure. Even when exposed to large overvoltage surges, these safety-rated capacitors must fail in a fail-safe manner that will not endanger personnel or property. Most EMI/RFI suppression film capacitors are polyester (PET) or metallized polypropylene (PP) film capacitors. However, some types of metallized paper capacitors (MP) are still used for this application, because they still have some advantages in flame resistance. Some safety capacitors have built-in capacitor discharge resistors. 
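Two small calculations often associated with safety capacitors are the earth-leakage current that a line-to-ground (Y) capacitor passes at mains frequency, I = 2πf·C·V, and the time constant of a discharge resistor across an X capacitor. The sketch below uses assumed component values and an assumed mains voltage purely for illustration; it does not reproduce limits from any safety standard.

```python
# Sketch: mains-frequency leakage current of a Y capacitor and the discharge
# time constant of a bleeder resistor across an X capacitor.
# All component values and the mains voltage are assumptions.
from math import pi

V_mains = 230.0        # assumed RMS mains voltage
f_mains = 50.0         # assumed mains frequency in Hz

for C_y in (1e-9, 2.2e-9, 4.7e-9):     # candidate Y-capacitor values
    i_leak = 2 * pi * f_mains * C_y * V_mains
    print(f"Y cap {C_y * 1e9:.1f} nF -> leakage ~ {i_leak * 1e6:.0f} uA")

C_x = 0.47e-6          # assumed X-capacitor value
R_bleed = 1e6          # assumed discharge resistor: 1 MOhm
print(f"X cap discharge time constant: {C_x * R_bleed * 1e3:.0f} ms")
```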
Lighting ballasts A lighting ballast is a device to provide proper starting and operating electrical conditions to light one or more fluorescent lamps, while also limiting the amount of current. A familiar and widely used example is the traditional inductive ballast used in fluorescent lamps, to limit the current through the tube, which would otherwise rise to destructive levels due to the tube's negative resistance characteristic. A disadvantage of using an inductor is that the current is shifted out of phase with the voltage, producing a poor power factor. Modern electronic ballasts usually change the frequency of the power from a standard mains frequency of 50 or 60 Hz up to 40 kHz or higher, often using a Switched Mode Power Supply (SMPS) circuit topology with PFC. First the AC input power is rectified to DC, and then it is chopped at a high frequency to improve the power factor. In more expensive ballasts, a film capacitor is often paired with the inductor to correct the power factor. In the picture at right, the flat grey rectangular component in the middle of the ballast circuit is a polyester film capacitor used for PFC. Snubber / Damping capacitors Snubber capacitors are designed for the high peak current operation required for protection against transient voltages. Such voltages are caused by the high "di/dt" current slew rate generated in switching power electronics applications. Snubbers are energy-absorbing circuits used to eliminate voltage spikes caused by circuit inductance when a switch opens. The purpose of the snubber is to improve electromagnetic compatibility (EMC) by eliminating the voltage transient that occurs when a switch abruptly opens, by suppressing sparking of switch contacts (such as an automotive ignition coil with mechanical interrupter), or by limiting the voltage slew rate of semiconductor switches like thyristors, GTO thyristors, IGBTs and bipolar transistors. Snubber capacitors (or higher-power "damping capacitors") require a capacitor construction with very low self-inductance and very low ESR. These devices are also expected to be highly reliable because, if the snubber RC circuitry fails, a power semiconductor will be destroyed in most cases. Snubber circuits usually incorporate film capacitors, mostly polypropylene film capacitors. The most important criteria for this application are low self-inductance, low ESR, and very high peak current capability. Snubber capacitors sometimes have additional special construction features. The self-inductance is reduced by slimmer designs with narrower electrodes. By double-sided metallization or film/foil construction of the electrodes, the ESR can also be reduced, increasing the peak current capability. Specially widened terminals which can be mounted directly beneath semiconductor packages can help to increase current handling and decrease inductance. The most popular simple snubber circuit consists of a film capacitor and a resistor in series, connected in parallel with a semiconductor component to suppress or damp undesirable voltage spikes. The capacitor temporarily absorbs the inductive turn-off peak current, so that the resulting voltage spike is limited. The trend in modern semiconductor technology, however, is towards higher power applications, which increases the peak currents and switching speeds.
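The peak current a snubber capacitor must carry follows directly from i = C·dv/dt, which is why very low ESL/ESR construction and high peak-current ratings matter. A minimal sketch with illustrative numbers (a hypothetical 100 nF capacitor and a 1 kV/µs transient, neither taken from the text):

```python
def snubber_peak_current_a(capacitance_farad: float, dv_dt_v_per_s: float) -> float:
    """Peak current drawn by a capacitor clamping a voltage transient: i = C * dv/dt."""
    return capacitance_farad * dv_dt_v_per_s

# Illustrative: a 100 nF snubber capacitor across a switch seeing 1 kV/us.
print(snubber_peak_current_a(100e-9, 1e9))  # 100.0 A peak
```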
In this case, the boundary between a standard electronic film capacitor and a power capacitor is blurred, so that larger snubber capacitors belong more in the area of power systems, electrical installations and plants. The overlapping categories of film and power capacitors are visible when they are applied as snubber capacitors in the growing market for high power electronics with IGBTs and thyristors. Although these power capacitors use polypropylene film, like the smaller snubber film capacitors, they belong to the family of power capacitors and are called "damping" capacitors. Power film capacitors The relatively simple fabrication technique of winding allows film capacitors to attain very large sizes for applications in the high power range, as so-called "power capacitors". Although the materials and the construction of power capacitors are mostly similar to those of the smaller film capacitors, they are specified and marketed differently for historical reasons. The "film capacitors" were developed together with the growing market of broadcast and electronic equipment technology in the mid-20th century. These capacitors are standardized under the rules of IEC/EN 60384-1, "Capacitors for use in electronic equipment", and the different "film materials" have their own sub-standards, the IEC/EN 60384-n series. The "power capacitors" begin at a power handling capacity of approximately 200 volt-amperes, such as ballast capacitors in fluorescent lamps. The standardization of power capacitors follows the rules of IEC/EN 61071 and IEC/EN 60143-1, and these capacitors have their own sub-standards for various applications, such as railway applications. Power capacitors can be used for a wide variety of applications, even where extremely non-sinusoidal voltages and pulsed currents are present. Both AC and DC capacitors are available. AC capacitors serve as damping or snubbing capacitors when connected in series with a resistor, and are also specified for the damping of undesirable voltage spikes caused by the so-called charge carrier storage effect during switching of power semiconductors. In addition, AC capacitors are used in low-detuned or close-tuned filter circuits for filtering or absorbing harmonics. As pulse discharge capacitors, they are useful in applications with reversing voltages, such as in magnetizing equipment. The scope of application for DC capacitors is similarly diverse. Smoothing capacitors are used to reduce the AC component of fluctuating DC voltage (such as in power supplies for radio and television transmitters), and for high voltage testing equipment, DC controllers, measurement and control technology, and cascaded circuits for generation of high DC voltage. Supporting capacitors, DC-filter or buffer circuit capacitors are used for energy storage in intermediate DC circuits, such as in frequency converters for poly-phase drives, and in transistor and thyristor power converters. They must be able to absorb and release very high currents within short periods, the peak values of the currents being substantially greater than the RMS values. Surge (pulse) discharge capacitors are also capable of supplying or absorbing extremely short-duration current surges. They are usually operated in discharge applications with non-reversing voltages and at low repetition frequencies, such as in laser technology and lightning generators. Power capacitors can reach quite large physical dimensions.
Rectangular housings with internally interconnected individual capacitors can reach sizes of L×W×H = (350×200×1000) mm and above.
Advantages
Polypropylene film capacitors can qualify for Class 1 applications
Very low dissipation factors (tan δ), high quality factors (Q) and low inductance values (ESL)
No microphonics compared with ceramic capacitors
Metallized construction has self-healing properties
High rated voltages, up to the range of kV possible
Much higher ripple current, compared with electrolytic capacitors
Much lower aging, compared with electrolytic capacitors of similar values
High and very high surge current pulses possible
Disadvantages
Larger physical size compared to electrolytic capacitors
Limited number of types in surface-mount technology (SMT) packaging
Film/foil types have no self-healing capability (irreversible short circuit)
Possibly flammable under overload conditions
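For the pulse and surge discharge duties described above, a useful back-of-the-envelope figure is the stored energy E = ½CV², which bounds what a power capacitor can deliver per discharge. A minimal sketch with illustrative, hypothetical values (not taken from the text):

```python
def stored_energy_joule(capacitance_farad: float, voltage_v: float) -> float:
    """Energy stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farad * voltage_v ** 2

# Illustrative pulse-discharge power capacitor: 100 uF charged to 2 kV.
print(stored_energy_joule(100e-6, 2000.0))  # 200.0 J available per discharge
```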
Technology
Components
null
47817022
https://en.wikipedia.org/wiki/URL
URL
A uniform resource locator (URL), colloquially known as an address on the Web, is a reference to a resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI), although many people use the two terms interchangeably. URLs are most commonly used to reference web pages (HTTP/HTTPS) but are also used for file transfer (FTP), email (mailto), database access (JDBC), and many other applications. Most web browsers display the URL of a web page above the page in an address bar. A typical URL could have the form http://www.example.com/index.html, which indicates a protocol (http), a hostname (www.example.com), and a file name (index.html). History Uniform Resource Locators were defined in 1994 by Tim Berners-Lee, the inventor of the World Wide Web, and the URI working group of the Internet Engineering Task Force (IETF), as an outcome of collaboration started at the IETF Living Documents birds of a feather session in 1992. The format combines the pre-existing system of domain names (created in 1985) with file path syntax, where slashes are used to separate directory and file names. Conventions already existed where server names could be prefixed to complete file paths, preceded by a double slash (//). Berners-Lee later expressed regret at the use of dots to separate the parts of the domain name within URIs, wishing he had used slashes throughout, and also said that, given the colon following the first component of a URI, the two slashes before the domain name were unnecessary. Early WorldWideWeb collaborators including Berners-Lee originally proposed the use of UDIs: Universal Document Identifiers. An early (1993) draft of the HTML Specification referred to "Universal" Resource Locators. This was dropped some time between June 1994 () and October 1994 (draft-ietf-uri-url-08.txt). In his book Weaving the Web, Berners-Lee emphasizes his preference for the original inclusion of "universal" in the expansion rather than the word "uniform", to which it was later changed, and he gives a brief account of the contention that led to the change. Syntax Every HTTP URL conforms to the syntax of a generic URI. A web browser will usually dereference a URL by performing an HTTP request to the specified host, by default on port number 80. URLs using the https scheme require that requests and responses be made over a secure connection to the website. Internationalized URL Internet users are distributed throughout the world using a wide variety of languages and alphabets, and expect to be able to create URLs in their own local alphabets. An Internationalized Resource Identifier (IRI) is a form of URL that includes Unicode characters. All modern browsers support IRIs. The parts of the URL requiring special treatment for different alphabets are the domain name and path. The domain name in the IRI is known as an Internationalized Domain Name (IDN). Web and Internet software automatically convert the domain name into punycode usable by the Domain Name System; for example, the Chinese URL http://例子.卷筒纸 becomes http://xn--fsqu00a.xn--3lr804guic/. The xn-- indicates that the characters were not originally ASCII. The URL path name can also be specified by the user in the local writing system.
If not already encoded, it is converted to UTF-8, and any characters not part of the basic URL character set are escaped as hexadecimal using percent-encoding; for example, the Japanese URL http://example.com/引き割り.html becomes http://example.com/%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html. The target computer decodes the address and displays the page. Protocol-relative URLs Protocol-relative links (PRL), also known as protocol-relative URLs (PRURL), are URLs that have no protocol specified. For example, //example.com will use the protocol of the current page, typically HTTP or HTTPS.
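The percent-encoding and punycode conversions described above can be reproduced with Python's standard library. This is only an illustrative sketch: urllib.parse.quote performs the UTF-8 percent-encoding, while the built-in 'idna' codec implements the older IDNA 2003 rules, so its output for some domain names may differ from what modern browsers (which use newer IDNA/UTS #46 processing) produce.

```python
from urllib.parse import quote

# Percent-encode a non-ASCII path segment: each UTF-8 byte becomes %XX.
path = quote("引き割り.html")  # -> '%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html'

# Convert an internationalized domain name to its ASCII (punycode) form.
# Note: Python's built-in 'idna' codec follows the older IDNA 2003 rules;
# results for some inputs may differ from modern browser behavior.
domain = "例子.卷筒纸".encode("idna").decode("ascii")  # -> 'xn--fsqu00a.xn--3lr804guic'

print(f"http://{domain}/{path}")
```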
Technology
Internet
null
33466720
https://en.wikipedia.org/wiki/Cycling%20infrastructure
Cycling infrastructure
Cycling infrastructure is all infrastructure cyclists are allowed to use. Bikeways include bike paths, bike lanes, cycle tracks, rail trails and, where permitted, sidewalks. Roads used by motorists are also cycling infrastructure, except where cyclists are barred, such as on many freeways/motorways. It includes amenities such as bike racks for parking, shelters, service centers and specialized traffic signs and signals. The more cycling infrastructure there is, the more people get about by bicycle. Good road design, road maintenance and traffic management can make cycling safer and more useful. Settlements with a dense network of interconnected streets tend to be places where people get around by bike. Their cycling networks can give people direct, fast, easy and convenient routes. History The history of cycling infrastructure starts from shortly after the bike boom of the 1880s, when the first short stretches of dedicated bicycle infrastructure were built, through the rise of the automobile from the mid-20th century onwards and the concomitant decline of cycling as a means of transport, to cycling's comeback from the 1970s onwards. Bikeways A bikeway is a lane, route, way or path which in some manner is specifically designed and/or designated for bicycle travel. Bike lanes demarcated by a painted marking are quite common in many cities. Cycle tracks demarcated by barriers, bollards or boulevards are quite common in some European countries such as the Netherlands, Denmark and Germany. They are also increasingly common in major cities elsewhere, such as New York, Melbourne, Ottawa, Vancouver and San Francisco. Montreal and Davis, California, which have had segregated cycling facilities with barriers for several decades, are among the earliest examples in North America. Various guides exist to define the different types of bikeway infrastructure, including the UK Department for Transport manual The Geometric Design of Pedestrian, Cycle and Equestrian Routes, the Sustrans Design Manual, the UK Department for Transport Local Transport Note 2/08: Cycle Infrastructure Design, the Danish Road Authority guide Registration and classification of paths, the Dutch CROW, the American Association of State Highway and Transportation Officials (AASHTO) Guide to Bikeway Facilities, the Federal Highway Administration (FHWA) Manual on Uniform Traffic Control Devices (MUTCD), and the US National Association of City Transportation Officials (NACTO) Urban Bikeway Design Guide. In the Netherlands, the Tekenen voor de fiets design manual recommends a width of at least 2 metres, or 2.5 metres if used by more than 150 bicycles per hour. A minimum width of 2 metres is specified by the cities of Utrecht and 's-Hertogenbosch for new cycle lanes. The Netherlands also has protected intersections to protect cyclists crossing roads. Terms Some bikeways are separated from motor traffic by physical constraints (e.g. barriers, parking or bollards)—bicycle trail, cycle track—but others are partially separated only by painted markings—bike lane, buffered bike lane, and contraflow bike lane. Some share the roadway with motor vehicles—bicycle boulevard, sharrow, advisory bike lane—or are shared with pedestrians—shared use paths and greenways. Segregation The term bikeway is largely used in North America to describe all routes that have been designed or updated to encourage more cycling or make cycling safer.
In some jurisdictions such as the United Kingdom, segregated cycling facility is sometimes preferred to describe cycling infrastructure which has varying degrees of separation from motorized traffic, or which has excluded pedestrian traffic in the case of exclusive bike paths. There is no single usage of segregation; in some cases it can mean the exclusion of motor vehicles and in other cases the exclusion of pedestrians as well. Thus, it includes bike lanes with solid painted lines but not lanes with dotted lines and advisory bike lanes where motor vehicles are allowed to encroach on the lane. It includes cycle tracks that are physically distinct from the roadway and sidewalk (e.g. by barriers, parking or bollards). And it includes bike paths in their own right of way exclusive to cycling. Paths which are shared with pedestrians and other non-motorized traffic are not considered segregated and are typically called a shared use path or multi-use path in North America and a shared-use footway in the UK. Safety On major roads, segregated cycle tracks lead to safety improvements compared with cycling in traffic. There are concerns over the safety of cycle tracks and lanes at junctions due to collisions between turning motorists and cyclists, particularly where cycle tracks are two-way. The safety of cycle tracks at junctions can be improved with designs such as cycle path deflection (between 2 m and 5 m) and protected intersections. At multi-lane roundabouts, safety for cyclists is compromised. The installation of separated cycle tracks has been shown to improve safety at roundabouts. A Cochrane review of published evidence found that there was limited evidence to conclude whether cycling infrastructure improves cyclist safety. Legislation Different countries have different ways to legally define and enforce bikeways. Bikeway controversies Some detractors argue that one must be careful in interpreting the operation of dedicated or segregated bikeways/cycle facilities across different designs and contexts; what works for the Netherlands will not necessarily work elsewhere, or they claim that bikeways increase urban air pollution. Other transportation planners consider an incremental, piecemeal approach to bike infrastructure buildout ineffective and advocate for complete networks to be built in a single phase. Proponents point out that cycling infrastructure including dedicated bike lanes has been implemented in many cities; when well designed and well implemented they are popular and safe, and they are effective at relieving both congestion and air pollution. Bikeway selection Jurisdictions have guidelines around the selection of the right bikeway treatments in order to make routes more comfortable and safer for cycling. A study reviewing the safety of "road diets" (motor traffic lane restrictions) for bike lanes found, in summary, that crash frequencies at road diets were 6% lower in the period after installation, and that road diets do not affect crash severity or result in a significant change in crash types. This research was conducted by looking at areas scheduled for conversion before and after the road diet was performed, while also comparing similar areas that had not received any changes. It is noted that further research is recommended to confirm these findings.
Bikeway types Bikeways can fall into these main categories: separated in-roadway bikeways such as bike lanes and buffered bike lanes; physically separated in-roadway bikeways such as cycle tracks; right-of-way paths such as bike paths and shared use paths; and shared in-roadway bikeways such as bike boulevards, shared lane markings, and advisory bike lanes. The exact categorization changes depending on the jurisdiction and organization, while many just list the types by their commonly used names. Dedicated bikeways Sharing with motor traffic Cyclists are legally allowed to travel on many roadways in accordance with the rules of the road for drivers of vehicles. A bicycle boulevard or cycle street is a low-speed street which has been optimized for bicycle traffic. Bicycle boulevards discourage cut-through motor vehicle traffic but allow local motor vehicle traffic. They are designed to give priority to cyclists as through-going traffic. A shared lane marking, also known as a sharrow, is a street marking that indicates the preferred lateral position for cyclists (to avoid the door zone and other obstacles) where dedicated bike lanes are not available. A 2-1 road is a roadway striping configuration which provides for two-way motor vehicle and bicycle traffic using a central vehicular travel lane and "advisory" bike lanes on either side. The center lane is dedicated to, and shared by, motorists traveling in both directions. The center lane is narrower than two vehicular travel lanes and has no centerline; some are narrower than the width of a car. Cyclists are given preference in the bike lanes, but motorists can encroach into the bike lanes to pass other motor vehicles after yielding to cyclists. Advisory bike lanes are normally installed on low-volume streets. Advisory bike lanes have a number of names. The U.S. Federal Highway Administration calls them "Advisory Shoulders". In New Zealand, they are called 2-minus-1 roads. They are called Schutzstreifen (Germany), Suggestiestrook (Netherlands), and Suggestion Lanes (a literal English translation of Suggestiestrook). Bicycle highways Denmark and the Netherlands have pioneered the concept of "bicycle superhighways". The first Dutch route opened in 2004 between Breda and Etten-Leur; many others have been added since then. In 2017 several bicycle superhighways were opened in the Arnhem-Nijmegen region, with the RijnWaalpad as the best example of this new type of cycling infrastructure. The first Danish route, C99, opened in 2012 between the Vesterbro rail station in Copenhagen and Albertslund, a western suburb. The route cost 13.4 million Danish kroner and is 17.5 km long, built with few stops and new paths away from traffic. "Service stations" with air pumps are located at regular intervals, and where the route must cross streets, handholds and running boards are provided so cyclists can wait without having to put their feet on the ground. Similar projects have since been built in Germany, among other countries. The cost of building a bicycle superhighway depends on many things, but is usually between €300,000/km (for a wide dedicated cycle track) and €800,000/km (when complex civil engineering structures are needed). Cycling-friendly streetscape modifications There are various measures cities and regions often take on the roadway to make it more cycling-friendly and safer. Aspects of infrastructure may be viewed as either cyclist-hostile or cyclist-friendly.
However, scientific research indicates that different groups of cyclists show varying preferences as to which aspects of cycling infrastructure are most relevant when choosing a specific cycling route over another. Measures to encourage cycling include traffic calming; traffic reduction; junction treatment; traffic control systems that recognize cyclists and give them priority; exempting cyclists from banned turns and access restrictions; implementing contra-flow cycle lanes on one-way streets; implementing on-street parking restrictions; providing advanced stop lines/bypasses for cyclists at traffic signals; marking wide curb/kerb lanes; and marking shared bus/cycle lanes. The Colombian city of Bogotá converted some car lanes into bidirectional bike lanes during the coronavirus pandemic, adding 84 km of new bike lanes; the government intends to make these new bike lanes permanent. In the US, slow-street movements have been introduced by erecting makeshift barriers to slow traffic and allow bikers and walkers to safely share the road with motorists. Traffic reduction Removing traffic can be achieved by straightforward diversion or alternatively by reduction. Diversion involves routing through-traffic away from roads used by high numbers of cyclists and pedestrians. Examples of diversion include the construction of arterial bypasses and ring roads around urban centers. Indirect methods involve reducing the infrastructural capacity dedicated to moving motorized vehicles. This can involve reducing the number of road lanes, closing bridges to certain vehicle types and creating vehicle-restricted zones or environmental traffic cells. In the 1970s the Dutch city of Delft began restricting private car traffic from crossing the city center. Similarly, Groningen is divided into four zones that cannot be crossed by private motor traffic (private cars must use the ring road instead). Cyclists and other traffic can pass between the zones, and cycling accounts for more than 50% of trips in Groningen (which reputedly has the third-highest proportion of cycle traffic of any city). The Swedish city of Gothenburg uses a similar system of traffic cells. Another approach is to reduce the capacity to park cars. Starting in the 1970s, the city of Copenhagen, where 36% of trips are now made by bicycle, adopted a policy of reducing available car parking capacity by several percent per year. The city of Amsterdam, where around 40% of all trips are by bicycle, adopted similar parking reduction policies in the 1980s and 1990s. Direct traffic reduction methods can involve straightforward bans or more subtle methods like road pricing schemes or road diets. The London congestion charge reportedly resulted in a significant increase in cycle use within the affected area. Traffic calming Speed reduction has traditionally been attempted by statutory speed limits and enforcing the assured clear distance ahead rule. Recent implementations of shared space schemes have delivered significant traffic speed reductions. The reductions are sustainable, without the need for speed limits or speed limit enforcement. In Norrköping, Sweden, mean traffic speeds in 2006 had dropped from 21 to 16 km/h (13 to 10 mph) since the implementation of such a scheme. Even without shared street implementation, creating 30 km/h (20 mph) zones has been shown to reduce crash rates and increase numbers of cyclists and pedestrians. Other studies have revealed that lower speeds reduce community severance caused by high-speed roads.
Research has shown that there is more neighborhood interaction and community cohesion when speeds are reduced to 20 mph. One-way streets German research indicates that making one-way streets two-way for cyclists results in a reduction in the total number of collisions. In Belgium, all one-way streets in 50 km/h zones are by default two-way for cyclists. A Danish road directorate states that in town centers it is important to be able to cycle in both directions in all streets, and that in certain circumstances, two-way cycle traffic can be accommodated in an otherwise one-way street. Two-way cycling on one-way streets One-way street systems can be viewed either as a product of traffic management that focuses on keeping motorized vehicles moving regardless of the social and other impacts, as some cycling campaigners see them, or as a useful tool for traffic calming and for eliminating rat runs, in the view of UK traffic planners. One-way streets can disadvantage cyclists by increasing trip length, delays and the hazards associated with weaving maneuvers at junctions. In northern European countries such as the Netherlands, however, cyclists are frequently granted exemptions from one-way street restrictions, which improves cycling traffic flow while restricting motorized vehicles. German research indicates that making one-way streets two-way for cyclists results in a reduction in the total number of collisions. There are often restrictions on which one-way streets are good candidates for allowing two-way cycling traffic. In Belgium, road authorities in principle allow any one-way street in zones to be two-way for cyclists if the available lane is at least wide (area free from parking) and no specific local circumstances prevent it. Denmark, a country with high cycling levels, does not use one-way systems to improve traffic flow. Some commentators argue that the initial goal should be to dismantle large one-way street systems as a traffic calming/traffic reduction measure, followed by the provision of two-way cyclist access on any one-way streets that remain. Intersection and junction design In general, junction designs that favor higher-speed turning, weaving and merging movements by motorists tend to be hostile for cyclists. Free-flowing arrangements can be hazardous for cyclists and should be avoided. Features such as large entry curvature, slip roads and high-flow roundabouts are associated with increased risk of car–cyclist collisions. Cycling advocates argue for modifications and alternative junction types that resolve these issues, such as reducing kerb radii on street corners, eliminating slip roads and replacing large roundabouts with signalized intersections. Protected intersection Another approach, which the Netherlands innovated, is what is called in North America a protected intersection, which reconfigures intersections to reduce risk to cyclists as they cross or turn. Some American cities are starting to pilot protected intersections. Bike box A bike box or advanced stop line is a designated area at the head of a traffic lane at a signalized intersection that provides bicyclists with a safer and more visible way to get ahead of queuing traffic during the red signal phase. Roundabouts On large roundabouts of the design typically used in the UK and Ireland, cyclists have an injury accident rate that is 14–16 times that of motorists. Research indicates that excessive sightlines at uncontrolled intersections compound these effects.
In the UK, a survey of over 8,000 highly experienced and mainly adult male Cyclists Touring Club members found that 28% avoided roundabouts on their regular journey if at all possible. The Dutch CROW guidelines recommend roundabouts only for intersections with motorized traffic up to 1500 per hour. To accommodate greater volumes of traffic, they recommend traffic light intersections or grade separation for cyclists. Examples of grade separation for cyclists include tunnels, or more spectacularly, raised "floating" roundabouts for cyclists. Traffic signals/Traffic control systems How traffic signals are designed and implemented directly impacts cyclists. For instance, poorly adjusted vehicle detector systems, used to trigger signal changes, may not correctly detect cyclists. This can leave cyclists in the position of having to "run" red lights if no motorized vehicle arrives to trigger a signal change. Some cities use urban adaptive traffic control systems (UTCs), which use linked traffic signals to manage traffic in response to changes in demand. There is an argument that using a UTC system merely to provide for increased capacity for motor traffic will simply drive growth in such traffic. However, there are more direct negative impacts. For instance, where signals are arranged to provide motor traffic with so-called green waves, this can create "red waves" for other road users such as cyclists and public transport services. Traffic managers in Copenhagen have now turned this approach on its head and are linking cyclist-specific traffic signals on a major arterial bike lane to provide green waves for rush hour cycle-traffic. However, this would still not resolve the problem of red-waves for slow (old and young) and fast (above average fitness) cyclists. Cycling-specific measures that can be applied at traffic signals include the use of advanced stop lines and/or bypasses. In some cases cyclists might be given a free-turn or a signal bypass if turning into a road on the nearside. Signposting In many places worldwide special signposts for bicycles are used to indicate directions and distances to destinations for cyclists. Apart from signposting in and between urban areas, mountain pass cycling milestones have become an important service for bicycle tourists. They provide cyclists with information about their current position with regard to the summit of the mountain pass. Numbered-node cycle networks are increasingly used in Europe to give flexible, low-cost signage. Widening outside lanes One method for reducing potential friction between cyclists and motorized vehicles is to provide "wide kerb", or "nearside", lanes (UK terminology) or "wide outside through lane" (U.S. terminology). These extra-wide lanes increase the probability that motorists pass cyclists at a safe distance without having to change lanes. This is held to be particularly important on routes with a high proportion of wide vehicles such as buses or heavy goods vehicles (HGVs). They also provide more room for cyclists to filter past queues of cars in congested conditions and to safely overtake each other. Due to the tendency of all vehicle users to stay in the center of their lane, it would be necessary to sub-divide the cycle lane with a broken white line to facilitate safe overtaking. Overtaking is indispensable for cyclists, as speeds are not dependent on the legal speed limit, but on the rider's capability. 
The use of such lanes is specifically endorsed by Cycling: the way ahead for towns and cities, the European Commission policy document on cycle promotion. Shared space Shared space schemes extend this principle further by removing the reliance on lane markings altogether, and also removing road signs and signals, allowing all road users to use any part of the road, and giving all road users equal priority and equal responsibility for each other's safety. Experiences where these schemes are in use show that road users, particularly motorists, undirected by signs, kerbs, or road markings, reduce their speed and establish eye contact with other users. Results from the thousands of such implementations worldwide all show casualty reductions and most also show reduced journey times. After the partial conversion of London's Kensington High Street to shared space, accidents decreased by 44% (the London average was 17%). However, in July 2018, the UK 'paused' all further shared space schemes over fears that a scheme dependent on eye-contact between drivers and pedestrians was unavoidably dangerous to pedestrians with visual impairments. CFI argues for a marked lane width of . On undivided roads, width provides cyclists with adequate clearance from passing HGVs while being narrow enough to deter drivers from "doubling up" to form two lanes. This "doubling up" effect may be related to junctions. At non-junction locations, greater width might be preferable if this effect can be avoided. The European Commission specifically endorses wide lanes in its policy document on cycling promotion, Cycling: the way ahead for towns and cities. Shared bus and cycle lanes Shared bus and cycle lanes are also a method for providing a more comfortable and safer space for cyclists. Depending on the width of the lane, the speeds and number of buses, and other local factors, the safety and popularity of this arrangement vary. In the Netherlands mixed bus/cycle lanes are uncommon. According to the Sustainable Safety guidelines they would violate the principle of homogeneity and put road users of very different masses and speed behavior into the same lane, which is generally discouraged. Road surface Bicycle tires being narrow, road surface is more important than for other transport, for both comfort and safety. The type and placement of storm drains, manholes, surface markings, and the general road surface quality should all be taken into account by a bicycle transportation engineer. Drain grates, for example, must not catch wheels. Trip-end facilities Bicycle parking/storage arrangements As secure and convenient bicycle parking is a key factor in influencing a person's decision to cycle, decent parking infrastructure must be provided to encourage the uptake of cycling. Decent bicycle parking involves weather-proof infrastructure such as lockers, stands, staffed or unstaffed bicycle parks, as well as bike parking facilities within workplaces to facilitate bicycle commuting. It also will help if certain legal arrangements are put into place to enable legitimate ad hoc parking, for example to allow people to lock their bicycles to railings, signs and other street furniture when individual proper bike stands are unavailable. Other trip end facilities Some people need to wear special clothes such as business suits or uniforms in their daily work. 
In some cases the nature of the cycling infrastructure and the prevailing weather conditions may make it very hard to both cycle and maintain the work clothes in a presentable condition. It is argued that such workers can be encouraged to cycle by providing lockers, changing rooms and shower facilities where they can change before starting work. Theft reduction measures The theft of bicycles is one of the major problems that slow the development of urban cycling. Bicycle theft discourages regular cyclists from buying new bicycles, as well as putting off people who might want to invest in a bicycle. Several measures can help reduce bicycle theft:
Bicycle parking stations - buildings or structures designed for use as bicycle parking facilities, primarily for bicycle security
Bicycle registration to enable recovery if stolen
Danish bicycle VIN-system, a law requiring all bicycles in Denmark to have a vehicle identification number (VIN) with the bike's manufacturer code, a serial number, and a construction year code
Making cyclists aware of antitheft devices and their effective use
Mounting sting operations to catch thieves
Secure bicycle parking: offering safe bicycle parking facilities such as guarded bicycle parking (staffed or with camera surveillance) or bicycle lockers
Promoting devices to enable remote tracking of a bicycle's location
Targeting cycle thieves
Using folding bicycles which can be safely stored (for example) in cloakrooms or under desks.
Certain European countries apply such measures with success, such as the Netherlands or certain German cities using registration and recovery. Since mid-2004, France has instituted a system of registration, in some places allowing stolen bicycles to be put on file in partnership with the urban cyclists' associations. This approach has reputedly increased the stolen bicycle recovery rate to more than 40%. By comparison, before the commencement of registration, the recovery rate in France was about 2%. In some areas of the United Kingdom, bicycles fitted with location tracking devices are left poorly secured in theft hot-spots. When the bike is stolen, the police can locate it and arrest the thieves. This sometimes leads to the dismantling of organized bicycle theft rings, as bike theft otherwise generally receives a very low priority from the police. Bicycle lift Bicycle lifts are used to haul bikes up stairs and steep hills. They are used to improve accessibility and encourage casual cycling. Bike escalators are widely used in East Asia and are used in parts of Europe. Impact According to a 2019 study, protected and separated bike infrastructure is associated with better safety outcomes for all road users. A 2021 review of existing research found that closing car lanes and replacing them with bike lanes or pedestrian lanes had positive or non-significant economic effects. A 2021 case-control study of cities found that redistributing street space for cycling infrastructure—so-called "pop-up bike lanes" introduced during the COVID-19 pandemic—led to large additional increases in cycling. These may bring substantial environmental and health benefits, which contemporary decision-makers have pledged to strive for through goals such as the EU's target of reducing CO2 emissions by 55% by 2030, the climate change mitigation responsibilities of the Paris Agreement, and EU air quality rules. Integration with public transit Cycling is often integrated with other transport. For example, in the Netherlands and Denmark a large number of train journeys may start by bicycle.
In 1991, 44% of Dutch train travelers went to their local station by bicycle and 14% used a bicycle at their destinations. The key ingredients for this are claimed to be:
an efficient, attractive and affordable train service
secure bike parking at train stations
a quick and easy bicycle rental system for commuters, the OV-bicycle scheme, at train stations
a town planning policy that results in a sufficient proportion of the potential commuter population (e.g. 44%) living/working within a reasonable cycling distance of the train stations.
It has been argued in relation to this aspect of Dutch or Danish policy that ongoing investment in rail services is vital to maintaining their levels of cycle use. Cycling and public transport are well integrated in Japan. Starting in 1978, Japan expanded bicycle parking supply at railway stations from 598,000 spaces in 1977 to 2,382,000 spaces in 1987. As of 1987, Japanese provisions included 516 multi-story garages for bicycle parking. In some cities, bicycles may be carried on local trains, trams and buses so that they may be used at either end of the trip. The Rheinbahn transit company in Düsseldorf permits bicycle carriage on all its bus, tram and train services at any time of the day. In Munich, bicycles are allowed on the S-Bahn commuter trains outside of rush hours, and folding bikes are allowed on city buses. In Copenhagen, bicycles can be taken on the S-tog commuter trains at all times of day at no additional cost. In France, the prestigious TGV high-speed trains have even had some of their first-class capacity converted to store bicycles. There have also been schemes, such as in Victoria, British Columbia, Acadia, and Canberra, Australia, to provide bicycle carriage on buses using externally mounted bike carriers. In some Canadian cities, including Edmonton, Alberta, and Toronto, Ontario, buses on most city routes have externally mounted carriers for bicycles, and bikes are allowed on the light rail trains at no extra cost outside of rush hour. All public transit buses in Chicago and its suburbs allow up to two bikes at all times. The same is true of Grand River Transit buses in the Region of Waterloo, Ontario, Canada. Trains allow bikes with some restrictions. Where such services are not available, some cyclists get around this restriction by removing their pedals and loosening their handlebars so as to fit into a box, or by using folding bikes that can be brought onto the train or bus like a piece of luggage. The article on buses in Christchurch, New Zealand, lists 27 routes with bike racks. In the EU, regional train services must carry bikes, and from 2025 new and substantially upgraded trains are generally required to have space for at least 4 non-folding bikes; however, international services with countries outside the EU are exempt from these rules. In 2023 Eurostar cycle booking was described as “farcical”. Nevertheless, EU train operators are sometimes allowed to restrict bikes, for example on old rolling stock or during peak hours. UK provision for bikes on trains varies considerably, with some train operating companies being criticised, for example for only providing vertical storage, which can be difficult or impossible to use. A UK Department for Transport 2021 white paper said “Bringing a bike on board makes a train journey even more convenient, yet even as cycling has grown in popularity, the railways have reduced space available for bikes on trains.
Great British Railways will reverse that, increasing space on existing trains wherever practically possible, including on popular leisure routes.” A DfT train specification document issued in 2012 says “Provision must be made for an excess luggage storage area which, as a minimum, is capable of accommodating two bicycles or luggage up to a minimum total volume of 2m3”, with a bicycle being defined as a “Full size ‘road’ bicycle with 25inch frame”. Some UK train companies severely limit bikes; for example, GWR does not guarantee storage for bikes which have wheels with a rim diameter of more than 50 cm, which most bicycles do. Bikesharing systems A bicycle sharing system, public bicycle system, or bike share scheme is a service in which bicycles are made available for shared use to individuals on a very short-term basis. Bike share schemes allow people to borrow a bike from point "A" and return it at point "B". Many of the bicycle sharing systems are on a subscription basis. Examples of cycling infrastructure
Technology
Road infrastructure
null
21866506
https://en.wikipedia.org/wiki/Thermochronology
Thermochronology
Thermochronology is the study of the thermal evolution of a region of a planet. Thermochronologists use radiometric dating along with the closure temperature (the temperature of the mineral being studied at the time given by the recorded date) to understand the thermal history of a specific rock, mineral, or geologic unit. It is a subfield within geology, and is closely associated with geochronology. A typical thermochronological study will involve the dates of a number of rock samples from different areas in a region, often from a vertical transect along a steep canyon, cliff face, or slope. These samples are then dated. With some knowledge of the subsurface thermal structure, these dates are translated into depths and times at which that particular sample was at the mineral's closure temperature. If the rock is today at the surface, this process gives the exhumation rate of the rock. Common isotopic systems used for thermochronology include fission track dating in zircon, apatite, titanite, natural glasses, and other uranium-rich mineral grains. Others include potassium-argon and argon-argon dating in apatite, and (U-Th)/He dating in zircon and apatite. Radiometric Dating Radiometric dating is how geologists determine the age of a rock. In a closed system, the amount of radiogenic isotopes present in a sample is a direct function of time and the decay rate of the parent isotope. Therefore, to find the age of a sample, geologists find the ratio of daughter isotopes to remaining parent isotopes present in the mineral through methods such as mass spectrometry. From the measured ratio and the decay constant, the age can then be determined. Different isotopic systems can be analyzed in this way, each giving rise to a different dating method. For thermochronology, the ages associated with these isotopic ratios are directly linked with the sample's thermal history. At high temperatures, the rocks behave as if they are in an open system, because the daughter isotopes diffuse out of the mineral more rapidly. At low temperatures, however, the rocks behave as a closed system, meaning that all the products of decay are still found within the original host rock, so the resulting dates are more accurate. The same mineral can switch between these two systems of behavior, but not instantaneously. In order to switch over, the rock must first reach its closure temperature. The closure temperature is specific to each mineral and can be very useful if multiple minerals are found in a sample. This temperature depends on several factors, including grain size and shape, the cooling rate (usually assumed constant), and chemical composition. Types of Dating associated with Thermochronology Fission Track Dating Fission track dating is the method used in thermochronology to find the approximate age of several uranium-rich minerals, such as apatite. When nuclear fission of uranium-238 (238U) happens in inorganic materials, damage tracks are created. These are due to a fast charged particle, released from the decay of uranium, creating a thin trail of damage along its trajectory through the solid. To better study the fission tracks created, the natural damage tracks are enlarged by chemical etching so they can be viewed under ordinary optical microscopes. The age of the mineral is then determined by first knowing the spontaneous rate of fission decay, and then measuring the number of tracks accumulated over the mineral's lifetime as well as estimating the amount of uranium still present.
At higher temperatures, fission tracks are known to anneal. Therefore, exact dating of samples is very hard. An absolute age can only be determined if the sample has cooled rapidly and remained undisturbed at or close to the surface. The environmental conditions, such as pressure and temperature, and their effects on fission tracks at the atomic level still remain unclear. However, the stability of the fission tracks can generally be narrowed down to temperature and time. Approximate ages of minerals still reflect aspects of the thermal history of the sample, such as uplift and denudation. Potassium-Argon/Argon-Argon Dating Potassium-argon/argon-argon dating is applied in thermochronology in order to find the age of minerals such as apatite. Potassium-argon (K-Ar) dating is concerned with determining the amount of the product of radioactive decay of isotopic potassium (40K) into its decay product of isotopic argon (40Ar). Because the 40Ar is able to escape in liquids, such as molten rock, but accumulates when the rock solidifies, or recrystallizes, geologists are able to measure the time since recrystallization by looking at the ratio of the amount of 40Ar that has accumulated to the 40K remaining. The age can be found by knowing the half-life of potassium. Argon-argon dating uses the ratio of 40Ar to 39Ar as a proxy for 40K to find the date of a sample. This method has been adopted because it requires only a single measurement of argon isotopes, rather than separate potassium and argon analyses. To do this, the sample is irradiated with neutrons in a nuclear reactor in order to convert a proportion of the stable isotope 39K to radioactive 39Ar. To determine the age of the rock, this process is repeated with a standard sample of known age so that the ratios can be compared. (U-Th)/He Dating (U-Th)/He dating is used to measure the age of a sample by measuring the amount of radiogenic helium (4He) present as a result of the alpha decay of uranium and thorium. This helium is retained in the mineral once it cools below the closure temperature, and its abundance can therefore help constrain the thermal evolution of the mineral. As in fission track dating, the exact age of the sample is difficult to determine. If the temperature rises above the closure temperature, the decay product, helium, diffuses to the atmosphere and the dating clock is reset. Applications By determining the date and the corresponding temperature of a sample being studied, geologists are able to understand the structural evolution of the deposits. Thermochronology is used in a wide variety of subjects today, such as tectonic studies, exhumation of mountain belts, hydrothermal ore deposits, and even meteorites. Understanding the thermal history of an area, such as its exhumation rate, crystallization duration, and more, can be applicable in a wide variety of fields and help in understanding the history of the Earth and its thermal evolution.
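All of these thermochronometers rest on solving a closed-system ingrowth equation for time. As a hedged illustration for the (U-Th)/He system, the sketch below counts 8, 7 and 6 alpha particles per decay chain of 238U, 235U and 232Th respectively and solves for the age by bisection; the decay constants are standard published values, the measured amounts are hypothetical, and real analyses also apply corrections (for example for alpha ejection) that are ignored here.

```python
import math

# Standard decay constants (per year) and the conventional 238U/235U ratio.
L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11
U238_U235 = 137.88

def helium_produced(t_yr: float, u238: float, th232: float) -> float:
    """Radiogenic 4He (same molar units as the parents) produced after t years,
    counting 8, 7 and 6 alphas per 238U, 235U and 232Th decay chain."""
    u235 = u238 / U238_U235
    return (8 * u238 * (math.exp(L238 * t_yr) - 1)
            + 7 * u235 * (math.exp(L235 * t_yr) - 1)
            + 6 * th232 * (math.exp(L232 * t_yr) - 1))

def uth_he_age_yr(he: float, u238: float, th232: float) -> float:
    """Solve the He ingrowth equation for t by bisection (no alpha-ejection correction)."""
    lo, hi = 0.0, 4.6e9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if helium_produced(mid, u238, th232) < he:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical apatite measurement in arbitrary but consistent molar units.
print(f"{uth_he_age_yr(he=0.05, u238=10.0, th232=5.0):.2e} years")
```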
Physical sciences
Geochronology
Earth science
21869208
https://en.wikipedia.org/wiki/Mucoromycotina
Mucoromycotina
Mucoromycotina is a subphylum of uncertain placement in Fungi. It was considered part of the phylum Zygomycota, but recent phylogenetic studies have shown that Zygomycota was polyphyletic and thus split it into several groups; Mucoromycotina is now thought to be a paraphyletic grouping. Mucoromycotina is currently composed of 3 orders, 61 genera, and 325 species. Some common characteristics seen throughout the species include the development of coenocytic mycelium, saprotrophic lifestyles, and filamentous growth. History Zygomycete fungi were originally ascribed only to the phylum Zygomycota. Such classifications were based on physiological characteristics with little genetic support. A genetic study of zygomycete fungi performed in 2016 showed that further classification of the group was possible, thus splitting it into Zoopagomycota, Entomophthoromycota, Kickxellomycotina, and Mucoromycotina. The study placed these groups as sister to Dikarya, but without further research, their exact locations in Fungi remain unknown. Many of the questions regarding these groups stem from the difficulty of collecting and growing them in culture, so the current groupings are based on the few that have been successfully collected and which could undergo genomic testing with a certain level of accuracy. Taxonomy The exact placement of Mucoromycotina is currently unknown. It currently resides among the subphyla incertae sedis, alongside Zoopagomycota, Entomophthoromycota, and Kickxellomycotina, whose placements are also currently unknown. These groups originally comprised Zygomycota alongside others that were assigned to Glomeromycota, which was elevated to phylum in 2001. These groups are sister to Dikarya, which contains Ascomycota and Basidiomycota. Studies have currently divided Mucoromycotina into 3 orders: Endogonales, Mucorales, and Mortierellales. All three orders contain species that are saprotrophic, with others forming relationships with other organisms. There are still many questions regarding Mucoromycotina and the organisms that compose it, owing to limited collected samples. Orders Endogonales This order currently contains 2 families (Endogonaceae and Densosporaceae), 7 genera, and 40 species. Not much is known about this order, other than readily noticeable characteristics. They produce subterranean sporocarps, which are ingested by small mammals attracted by the fetid odor they produce. Cultured specimens have shown that they produce coenocytic mycelium, and can be saprotrophic or mycorrhizal. This order was first described in 1931 by Jacz. & P.A.Jacz., after being monographed in 1922 by Thaxter. Mucorales Often referred to as pin molds, members of this order produce sporangia held up on hyphae called sporangiophores. There are currently 13 families in this order, divided into 56 genera, and approximately 300 species. They can be parasitic or saprotrophic in nature and reproduce asexually. Much is known about this order since some of the species cause damage to stored food, with several others causing mycosis in immunocompromised individuals. The order was proposed in 1878 by van Tieghem, as the examined samples did not fit in with what was Entomophthorales at the time. Mortierellales Previously considered a family of Mucorales, it was suggested as its own order in 1998. At the time it contained only 2 genera, one of which remains. What is known is that species in this order can be parasitic or saprotrophic in nature.
Cultured specimens show that they produce a fine mycelium with branched sporangia, and produce a garlic-like odor. They are widespread, showing up in soil samples from many different locations. The most studied genus in this order is Mortierella, which contains species that cause crown rot in strawberries. There are currently 6 families and 13 described genera, with more than 100 species. Mortierella polycephala was the first species described, in 1863 by Coemans, and named after M. Du Mortier, the president of the Société de Botanique de Belgique. Dissophora decumbens, the second, was not described until 1914, and the most recent, Lobosporangium transversale, was described in 2004. Ecology The species described in this subphylum have evolved 3 main lifestyles: saprotrophic, mycorrhizal, or parasitic. Saprotrophic species are involved in the decomposition of organic matter, mycorrhizal species form symbiotic relationships with plants, and parasitic species form harmful symbiotic relationships with other organisms. Saprotrophs Saprotrophs break down decomposing matter into different components: proteins into amino acids, lipids into fatty acids and glycerol, and starches into disaccharides. The species responsible usually require excess water, oxygen, a pH of less than 7, and low temperatures. Parasitism Parasitic species seen in Mucorales and Mortierellales cause infections in crops and immune-compromised animals. A common infection of plants by some species in Mucorales is referred to as crown rot or stem rot; common symptoms are rotting near the soil line and rotting on one side or on lateral branches. Treatment is difficult if the disease is not caught in its early stages, and it usually results in the death of the plant. Crown rot is seen in cereal plants (wheat, barley), with experiments from 2015 showing crop losses of 0.01 t/ha or more per unit increase in crown rot index. In addition to cereal plants, crown rot is seen in strawberries and other such low-growing plants. Mycorrhizal Mycorrhizal, literally “fungus-root”, interactions are symbioses between fungi and plants. Such interactions are based on nutrient acquisition and sharing: the fungus increases the range over which nutrients are gathered and the plant provides materials that the fungus cannot produce. There are two main types of interactions: arbuscular endomycorrhizal and ectomycorrhizal. Arbuscular endomycorrhizal interactions are those in which the fungus is allowed to enter the plant and inhabit special cells. The fungi produce tree-like structures, called “arbuscules,” inside these cells. Ectomycorrhizal interactions are similar symbioses; however, the fungi are not allowed into any plant cells, though they may grow between them. Plant-microbe interactions Endogonales A new genus proposed in 2017, Jimgerdemannia, contains species with an ectomycorrhizal trophic mode. Further research is needed to understand these species. Several studies have observed fossils of some potential members forming mycorrhizal interactions with ancient plants. The genus Endogone is important in nutrient-deficient soils, such as sand dunes. The presence of species in this genus stabilizes the soil and provides some assistance to dune plants. Mucorales Some species in the genus Mucor are well known for causing crown rot in cereal plants and damage to stored foods. Mortierellales The majority of the species in this group are saprotrophic, and thus form no known relationships with plants.
They do, however, play a role in nutrient transfer through the breakdown of decaying organic matter. The few that are parasitic only parasitize animals, not plants. Evolution A genome study of Rhizophagus irregularis performed in 2013 supported the hypothesis that Glomeromycota was responsible for early plant-fungi symbiotic relationships. A paper released in 2015 suggests that a Mucoromycotina species formed a symbiotic relationship with liverworts during the Paleozoic era, which may have been the first plant-fungi symbiotic relationship. Phylogenetic studies have been unable to place Mucoromycotina in any definitive location within fungi; however, some research has suggested that the lineage is fairly old. Due to recent advancements allowing for better phylogenetic studies, species assigned to closely related groups are being reassigned to Mucoromycotina, one such species being Rhizophagus irregularis. Broader implications Phylogeny With the improvement of phylogenetic studies, the placement of several established groups in fungi has been called into question. There is some debate regarding the relationship between Mucoromycotina and Glomeromycota, with some species currently in Glomeromycota being moved to Mucoromycotina. Environment The genus Endogone in Endogonales contains species that grow in sand dunes, aiding the plants that grow in those nutrient-poor soils. The mycelium that is formed also plays a role in soil stabilization, preventing erosion. Other species produce fruiting bodies that are included in the diets of various small rodent species. Species found in Mortierella of Mortierellales have roles in the decomposition of organic matter. Some species are among the first to colonize new roots, and others share a relationship with spruce trees, though the exact nature of the relationship is unknown. Disease Crown rot Crown rot is a plant disease caused by species in Mucorales. The disease is characterized by rotting tissue at or near where the stem meets the soil. Treatment is difficult if the disease is not caught in its early stages, and infection usually results in the death of the plant. Crown rot is seen in cereal plants (wheat, barley), with experiments from 2015 showing crop losses of 0.01 t/ha or more per unit increase in crown rot index. In addition to cereal plants, crown rot is seen in strawberries and other low-growing plants. Zygomycosis Zygomycosis is a fungal infection seen in animals with compromised immune systems, meaning the host is already sick before the fungus invades and inhabits the body. Depending on the species involved, it is also referred to as mucormycosis. Uses A study examining the insecticidal properties of several fungal species, including Mortierella, focused on species isolated from Antarctica with the intention of identifying potentially useful adaptations. The Mortierella species examined was shown to have some insecticidal properties against waxmoth and housefly larvae. Further research is needed to determine the mechanism behind this and its potential usefulness. Problems A recurring problem in the study of this subphylum is the difficulty of culturing specimens. Many of the species identified and used in phylogenetic and other studies have been collected in the field, with few of them being cultured in labs. This limits the ability to produce extensive phylogenetic trees, contributing to the currently unknown placement of the subphylum within fungi.
Biology and health sciences
Basics
Plants
26257152
https://en.wikipedia.org/wiki/Steady-state%20model
Steady-state model
In cosmology, the steady-state model or steady state theory is an alternative to the Big Bang theory. In the steady-state model, the density of matter in the expanding universe remains unchanged due to a continuous creation of matter, thus adhering to the perfect cosmological principle, which says that the observable universe is always the same at any time and any place. From the 1940s to the 1960s, the astrophysical community was divided between supporters of the Big Bang theory and supporters of the steady-state theory. The steady-state model is now rejected by most cosmologists, astrophysicists, and astronomers. The observational evidence points to a hot Big Bang cosmology with a finite age of the universe, which the steady-state model does not predict. History Cosmological expansion was first observed by Edwin Hubble. Theoretical calculations also showed that the static universe, as modeled by Albert Einstein (1917), was unstable. The modern Big Bang theory, first advanced by Father Georges Lemaître, is one in which the universe has a finite age and has evolved over time through cooling, expansion, and the formation of structures through gravitational collapse. On the other hand, the steady-state model says that while the universe is expanding, it nevertheless does not change its appearance over time (the perfect cosmological principle). In other words, the universe has no beginning and no end. This required that matter be continually created in order to keep the universe's density from decreasing. Influential papers on the topic of a steady-state cosmology were published by Hermann Bondi, Thomas Gold, and Fred Hoyle in 1948. Similar models had been proposed earlier by William Duncan MacMillan, among others. It is now known that Albert Einstein considered a steady-state model of the expanding universe, as indicated in a 1931 manuscript, many years before Hoyle, Bondi and Gold. However, Einstein abandoned the idea. Observational tests Counts of radio sources Problems with the steady-state model began to emerge in the 1950s and 1960s – observations supported the idea that the universe was in fact changing. Bright radio sources (quasars and radio galaxies) were found only at large distances (and therefore could have existed only in the distant past, because of the finite travel time of light), not in closer galaxies. Whereas the Big Bang theory predicted as much, the steady-state model predicted that such objects would be found throughout the universe, including close to our own galaxy. By 1961, statistical tests based on radio-source surveys provided strong evidence against the steady-state model. Some proponents, such as Halton Arp, insisted that the radio data were suspect. X-ray background Gold and Hoyle (1959) considered that newly created matter exists in regions denser than the average density of the universe. This matter may then radiate and cool faster than the surrounding regions, resulting in a pressure gradient. This gradient would push matter into the over-dense region, producing a thermal instability and a large amount of plasma. However, Gould and Burbidge (1963) realized that the thermal bremsstrahlung radiation emitted by such a plasma would exceed the amount of observed X-rays. Therefore, in the steady-state cosmological model, thermal instability does not appear to be important in the formation of galaxy-sized masses.
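As a back-of-the-envelope illustration of the continuous-creation requirement described in the History section above (a sketch in modern notation; the figures are order-of-magnitude estimates and are not taken from the original papers): keeping the matter density constant while space expands at the Hubble rate H demands a definite creation rate per unit volume.

```latex
% Matter density \rho must stay constant while the scale factor a(t) grows,
% so the mass in a comoving volume V \propto a^3 must be continuously topped up:
\frac{d}{dt}\!\left(\rho\,a^{3}\right)
  = \dot{\rho}\,a^{3} + 3\rho\,a^{2}\dot{a}
  \;\stackrel{\dot{\rho}\,=\,0}{\Longrightarrow}\;
  \text{creation rate per unit volume} = 3H\rho ,
  \qquad H \equiv \frac{\dot{a}}{a}.
```

For present-day values of H and a density near the critical density, this works out to roughly one hydrogen atom per cubic metre per billion years, far too small a rate to be tested by direct laboratory observation.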
Cosmic microwave background In 1964 the cosmic microwave background radiation was discovered, as predicted by the Big Bang theory. The steady-state model attempted to explain the microwave background radiation as the result of light from ancient stars that has been scattered by galactic dust. However, the cosmic microwave background level is very even in all directions, making it difficult to explain how it could be generated by numerous point sources, and the microwave background radiation does not show the polarization characteristic of scattering. Furthermore, its spectrum is so close to that of an ideal black body that it could hardly be formed by the superposition of contributions from a multitude of dust clumps at different temperatures as well as at different redshifts. Steven Weinberg wrote about these difficulties in 1972. Since this discovery, the Big Bang theory has been considered to provide the best explanation of the origin of the universe. In most astrophysical publications, the Big Bang is implicitly accepted and is used as the basis of more complete theories. Quasi-steady state Quasi-steady-state cosmology (QSS) was proposed in 1993 by Fred Hoyle, Geoffrey Burbidge, and Jayant V. Narlikar as a new incarnation of the steady-state ideas, meant to explain additional features unaccounted for in the initial proposal. The model suggests pockets of creation occurring over time within the universe, sometimes referred to as minibangs, mini-creation events, or little bangs. After the observation of an accelerating universe, further modifications of the model were made. The Planck particle is a hypothetical black hole whose Schwarzschild radius is approximately the same as its Compton wavelength; the evaporation of such particles has been invoked as the source of light elements in an expanding steady-state universe. Astrophysicist and cosmologist Ned Wright has pointed out flaws in the model. These first comments were soon rebutted by the proponents. Wright and other mainstream cosmologists reviewing QSS have since pointed out new flaws and discrepancies with observations that remain unexplained by the proponents.
Physical sciences
Physical cosmology
Astronomy
26262533
https://en.wikipedia.org/wiki/Drafting%20machine
Drafting machine
A drafting machine is a tool used in technical drawing, consisting of a pair of scales mounted to form a right angle on an articulated protractor head that allows angular rotation. The protractor head (two scales and protractor mechanism) is able to move freely across the surface of the drawing board, sliding on two guides directly or indirectly anchored to the drawing board. These guides, which act separately, ensure the movement of the assembly in the horizontal or vertical direction of the drawing board, and can be locked independently of each other. The drafting machine was invented by Charles H. Little in 1901 (U.S. Patent No. 1,081,758), and he founded the Universal Drafting Machine Company in Cleveland, Ohio, to manufacture and sell the instrument. Drafting machines were present in the design offices of European companies from the 1920s. The Encyclopædia Britannica explicitly specifies 1930 as the year this tool was introduced, but an advertisement in "Memorie di architettura pratica" from 1913 places it nearly two decades earlier, at least in Italy. In older designs, the movement of the protractor head was ensured by a pantograph system that kept the head in the same angular position throughout its range of motion. The arms were balanced by a system of counterweights or springs. Typically, the machine is mounted on a drawing board with a hard and smooth surface, anchored to a base that allows it to be tilted and raised. Thus, a drawing can be produced in the most convenient way on a working surface that can be tilted at any angle from horizontal to vertical. There are special versions for double-A0-sized boards, for making large drawings, and copying boards with background illumination, each equipped with whatever is necessary for its specific purpose. With the drafting machine one can perform a series of drawing operations that otherwise could only be achieved with a much more laborious use of the classic ruler, set square, and protractor: for example, drawing parallel lines, orthogonal lines, and lines inclined at a preset angle, or measuring angles. With the development of computer-aided design (CAD), the use of drafting machines, especially in the professional sector, has drastically declined, supplanted first by pen plotters, and then by large-format inkjet printers.
Technology
Artist's and drafting tools
null
23371726
https://en.wikipedia.org/wiki/Lagrangian%20mechanics
Lagrangian mechanics
In physics, Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760 culminating in his 1788 grand opus, Mécanique analytique. Lagrangian mechanics describes a mechanical system as a pair consisting of a configuration space M and a smooth function within that space called a Lagrangian. For many systems, , where T and V are the kinetic and potential energy of the system, respectively. The stationary action principle requires that the action functional of the system derived from L must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations. Introduction Newton's laws and the concept of forces are the usual starting point for teaching about mechanical systems. This method works well for many problems, but for others the approach is nightmarishly complicated. For example, in calculation of the motion of a torus rolling on a horizontal surface with a pearl sliding inside, the time-varying constraint forces like the angular velocity of the torus, motion of the pearl in relation to the torus made it difficult to determine the motion of the torus with Newton's equations. Lagrangian mechanics adopts energy rather than force as its basic ingredient, leading to more abstract equations capable of tackling more complex problems. Particularly, Lagrange's approach was to set up independent generalized coordinates for the position and speed of every object, which allows the writing down of a general form of Lagrangian (total kinetic energy minus potential energy of the system) and summing this over all possible paths of motion of the particles yielded a formula for the 'action', which he minimized to give a generalized set of equations. This summed quantity is minimized along the path that the particle actually takes. This choice eliminates the need for the constraint force to enter into the resultant generalized system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment. For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle. For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so , and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles; a general point in space is written . The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position, thus In Newtonian mechanics, the equations of motion are given by Newton's laws. The second law "net force equals mass times acceleration", applies to each particle. For an N-particle system in 3 dimensions, there are 3N second-order ordinary differential equations in the positions of the particles to solve for. 
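For orientation, the quantities referred to above take the following standard textbook forms (the notation here is a common convention chosen for this sketch, not necessarily the one used in the cited sources):

```latex
L(\mathbf{q},\dot{\mathbf{q}},t) = T - V, \qquad
S[\mathbf{q}] = \int_{t_{1}}^{t_{2}} L(\mathbf{q},\dot{\mathbf{q}},t)\,dt, \qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_{j}} - \frac{\partial L}{\partial q_{j}} = 0 ,
\qquad
\mathbf{v}_{k} = \frac{d\mathbf{r}_{k}}{dt} .
```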
Lagrangian Instead of forces, Lagrangian mechanics uses the energies in the system. The central quantity of Lagrangian mechanics is the Lagrangian, a function which summarizes the dynamics of the entire system. Overall, the Lagrangian has units of energy, but no single expression for all physical systems. Any function which generates the correct equations of motion, in agreement with physical laws, can be taken as a Lagrangian. It is nevertheless possible to construct general expressions for large classes of applications. The non-relativistic Lagrangian for a system of particles in the absence of an electromagnetic field is given by where is the total kinetic energy of the system, equaling the sum Σ of the kinetic energies of the particles. Each particle labeled has mass and is the magnitude squared of its velocity, equivalent to the dot product of the velocity with itself. Kinetic energy is the energy of the system's motion and is a function only of the velocities vk, not the positions rk, nor time t, so V, the potential energy of the system, reflects the energy of interaction between the particles, i.e. how much energy any one particle has due to all the others, together with any external influences. For conservative forces (e.g. Newtonian gravity), it is a function of the position vectors of the particles only, so For those non-conservative forces which can be derived from an appropriate potential (e.g. electromagnetic potential), the velocities will appear also, If there is some external field or external driving force changing with time, the potential changes with time, so most generally As already noted, this form of L is applicable to many important classes of system, but not everywhere. For relativistic Lagrangian mechanics it must be replaced as a whole by a function consistent with special relativity (scalar under Lorentz transformations) or general relativity (4-scalar). Where a magnetic field is present, the expression for the potential energy needs restating. And for dissipative forces (e.g., friction), another function must be introduced alongside Lagrangian often referred to as a "Rayleigh dissipation function" to account for the loss of energy. One or more of the particles may each be subject to one or more holonomic constraints; such a constraint is described by an equation of the form If the number of constraints in the system is C, then each constraint has an equation ..., each of which could apply to any of the particles. If particle k is subject to constraint i, then At any instant of time, the coordinates of a constrained particle are linked together and not independent. The constraint equations determine the allowed paths the particles can move along, but not where they are or how fast they go at every instant of time. Nonholonomic constraints depend on the particle velocities, accelerations, or higher derivatives of position. Lagrangian mechanics can only be applied to systems whose constraints, if any, are all holonomic. Three examples of nonholonomic constraints are: when the constraint equations are non-integrable, when the constraints have inequalities, or when the constraints involve complicated non-conservative forces like friction. Nonholonomic constraints require special treatment, and one may have to revert to Newtonian mechanics or use other methods. If T or V or both depend explicitly on time due to time-varying constraints or external influences, the Lagrangian is explicitly time-dependent. 
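A compact sketch of the expressions described in this section, in one standard notation assumed here for concreteness:

```latex
% Kinetic energy, and the potential in the conservative and the general case:
T = \tfrac{1}{2}\sum_{k=1}^{N} m_{k}\,\mathbf{v}_{k}\!\cdot\!\mathbf{v}_{k},
\qquad
V = V(\mathbf{r}_{1},\dots,\mathbf{r}_{N}),
\qquad
V = V(\mathbf{r}_{1},\dots,\mathbf{r}_{N},\mathbf{v}_{1},\dots,\mathbf{v}_{N},t).

% Non-relativistic Lagrangian and holonomic constraint equations:
L = T - V ,
\qquad
f_{i}(\mathbf{r}_{1},\dots,\mathbf{r}_{N},t) = 0,
\quad i = 1,\dots,C .
```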
If neither the potential nor the kinetic energy depend on time, then the Lagrangian is explicitly independent of time. In either case, the Lagrangian always has implicit time dependence through the generalized coordinates. With these definitions, Lagrange's equations of the first kind are where k = 1, 2, ..., N labels the particles, there is a Lagrange multiplier λi for each constraint equation fi, and are each shorthands for a vector of partial derivatives with respect to the indicated variables (not a derivative with respect to the entire vector). Each overdot is a shorthand for a time derivative. This procedure does increase the number of equations to solve compared to Newton's laws, from 3N to , because there are 3N coupled second-order differential equations in the position coordinates and multipliers, plus C constraint equations. However, when solved alongside the position coordinates of the particles, the multipliers can yield information about the constraint forces. The coordinates do not need to be eliminated by solving the constraint equations. In the Lagrangian, the position coordinates and velocity components are all independent variables, and derivatives of the Lagrangian are taken with respect to these separately according to the usual differentiation rules (e.g. the partial derivative of L with respect to the z velocity component of particle 2, defined by , is just ; no awkward chain rules or total derivatives need to be used to relate the velocity component to the corresponding coordinate z2). In each constraint equation, one coordinate is redundant because it is determined from the other coordinates. The number of independent coordinates is therefore . We can transform each position vector to a common set of n generalized coordinates, conveniently written as an n-tuple , by expressing each position vector, and hence the position coordinates, as functions of the generalized coordinates and time: The vector q is a point in the configuration space of the system. The time derivatives of the generalized coordinates are called the generalized velocities, and for each particle the transformation of its velocity vector, the total derivative of its position with respect to time, is Given this vk, the kinetic energy in generalized coordinates depends on the generalized velocities, generalized coordinates, and time if the position vectors depend explicitly on time due to time-varying constraints, so With these definitions, the Euler–Lagrange equations, or Lagrange's equations of the second kind are mathematical results from the calculus of variations, which can also be used in mechanics. Substituting in the Lagrangian gives the equations of motion of the system. The number of equations has decreased compared to Newtonian mechanics, from 3N to coupled second-order differential equations in the generalized coordinates. These equations do not include constraint forces at all, only non-constraint forces need to be accounted for. Although the equations of motion include partial derivatives, the results of the partial derivatives are still ordinary differential equations in the position coordinates of the particles. The total time derivative denoted d/dt often involves implicit differentiation. Both equations are linear in the Lagrangian, but generally are nonlinear coupled equations in the coordinates. 
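The two sets of equations named above, written out in one common notation (an assumed convention, since conventions differ between texts):

```latex
% Lagrange's equations of the first kind (one vector equation per particle k):
\frac{\partial L}{\partial \mathbf{r}_{k}}
  - \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{r}}_{k}}
  + \sum_{i=1}^{C}\lambda_{i}\,\frac{\partial f_{i}}{\partial \mathbf{r}_{k}} = \mathbf{0},
\qquad k = 1,\dots,N .

% Generalized coordinates and velocities, with n = 3N - C:
\mathbf{r}_{k} = \mathbf{r}_{k}(\mathbf{q},t), \qquad
\mathbf{v}_{k} = \sum_{j=1}^{n}\frac{\partial \mathbf{r}_{k}}{\partial q_{j}}\,\dot{q}_{j}
  + \frac{\partial \mathbf{r}_{k}}{\partial t}.

% Lagrange's equations of the second kind (Euler-Lagrange equations):
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_{j}} = \frac{\partial L}{\partial q_{j}},
\qquad j = 1,\dots,n .
```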
From Newtonian to Lagrangian mechanics Newton's laws For simplicity, Newton's laws can be illustrated for one particle without much loss of generality (for a system of N particles, all of these equations apply to each particle in the system). The equation of motion for a particle of constant mass m is Newton's second law of 1687, in modern vector notation where a is its acceleration and F the resultant force acting on it. Where the mass is varying, the equation needs to be generalised to take the time derivative of the momentum. In three spatial dimensions, this is a system of three coupled second-order ordinary differential equations to solve, since there are three components in this vector equation. The solution is the position vector r of the particle at time t, subject to the initial conditions of r and v when Newton's laws are easy to use in Cartesian coordinates, but Cartesian coordinates are not always convenient, and for other coordinate systems the equations of motion can become complicated. In a set of curvilinear coordinates the law in tensor index notation is the "Lagrangian form" where Fa is the a-th contravariant component of the resultant force acting on the particle, Γabc are the Christoffel symbols of the second kind, is the kinetic energy of the particle, and gbc the covariant components of the metric tensor of the curvilinear coordinate system. All the indices a, b, c, each take the values 1, 2, 3. Curvilinear coordinates are not the same as generalized coordinates. It may seem like an overcomplication to cast Newton's law in this form, but there are advantages. The acceleration components in terms of the Christoffel symbols can be avoided by evaluating derivatives of the kinetic energy instead. If there is no resultant force acting on the particle, it does not accelerate, but moves with constant velocity in a straight line. Mathematically, the solutions of the differential equation are geodesics, the curves of extremal length between two points in space (these may end up being minimal, that is the shortest paths, but not necessarily). In flat 3D real space the geodesics are simply straight lines. So for a free particle, Newton's second law coincides with the geodesic equation and states that free particles follow geodesics, the extremal trajectories it can move along. If the particle is subject to forces the particle accelerates due to forces acting on it and deviates away from the geodesics it would follow if free. With appropriate extensions of the quantities given here in flat 3D space to 4D curved spacetime, the above form of Newton's law also carries over to Einstein's general relativity, in which case free particles follow geodesics in curved spacetime that are no longer "straight lines" in the ordinary sense. However, we still need to know the total resultant force F acting on the particle, which in turn requires the resultant non-constraint force N plus the resultant constraint force C, The constraint forces can be complicated, since they generally depend on time. Also, if there are constraints, the curvilinear coordinates are not independent but related by one or more constraint equations. The constraint forces can either be eliminated from the equations of motion, so only the non-constraint forces remain, or included by including the constraint equations in the equations of motion. 
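A sketch of the relations this section refers to, using the symbol ξ^a for the curvilinear coordinates (the symbol is an assumption made here; the relations themselves are the standard ones):

```latex
% Newton's second law, and its "Lagrangian form" in curvilinear coordinates:
\mathbf{F} = m\mathbf{a} = \frac{d\mathbf{p}}{dt},
\qquad
F^{a} = m\left(\ddot{\xi}^{a} + \Gamma^{a}{}_{bc}\,\dot{\xi}^{b}\dot{\xi}^{c}\right),
\qquad
T = \tfrac{1}{2}\,m\,g_{bc}\,\dot{\xi}^{b}\dot{\xi}^{c}.

% With no resultant force the motion is a geodesic,
% and in general the force splits into constraint and non-constraint parts:
\ddot{\xi}^{a} + \Gamma^{a}{}_{bc}\,\dot{\xi}^{b}\dot{\xi}^{c} = 0 ,
\qquad
\mathbf{F} = \mathbf{C} + \mathbf{N}.
```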
D'Alembert's principle A fundamental result in analytical mechanics is D'Alembert's principle, introduced in 1708 by Jacques Bernoulli to understand static equilibrium, and developed by D'Alembert in 1743 to solve dynamical problems. The principle asserts for N particles the virtual work, i.e. the work along a virtual displacement, δrk, is zero: The virtual displacements, δrk, are by definition infinitesimal changes in the configuration of the system consistent with the constraint forces acting on the system at an instant of time, i.e. in such a way that the constraint forces maintain the constrained motion. They are not the same as the actual displacements in the system, which are caused by the resultant constraint and non-constraint forces acting on the particle to accelerate and move it. Virtual work is the work done along a virtual displacement for any force (constraint or non-constraint). Since the constraint forces act perpendicular to the motion of each particle in the system to maintain the constraints, the total virtual work by the constraint forces acting on the system is zero: so that Thus D'Alembert's principle allows us to concentrate on only the applied non-constraint forces, and exclude the constraint forces in the equations of motion. The form shown is also independent of the choice of coordinates. However, it cannot be readily used to set up the equations of motion in an arbitrary coordinate system since the displacements δrk might be connected by a constraint equation, which prevents us from setting the N individual summands to 0. We will therefore seek a system of mutually independent coordinates for which the total sum will be 0 if and only if the individual summands are 0. Setting each of the summands to 0 will eventually give us our separated equations of motion. Equations of motion from D'Alembert's principle If there are constraints on particle k, then since the coordinates of the position are linked together by a constraint equation, so are those of the virtual displacements . Since the generalized coordinates are independent, we can avoid the complications with the δrk by converting to virtual displacements in the generalized coordinates. These are related in the same form as a total differential, There is no partial time derivative with respect to time multiplied by a time increment, since this is a virtual displacement, one along the constraints in an instant of time. The first term in D'Alembert's principle above is the virtual work done by the non-constraint forces Nk along the virtual displacements δrk, and can without loss of generality be converted into the generalized analogues by the definition of generalized forces so that This is half of the conversion to generalized coordinates. It remains to convert the acceleration term into generalized coordinates, which is not immediately obvious. Recalling the Lagrange form of Newton's second law, the partial derivatives of the kinetic energy with respect to the generalized coordinates and velocities can be found to give the desired result: Now D'Alembert's principle is in the generalized coordinates as required, and since these virtual displacements δqj are independent and nonzero, the coefficients can be equated to zero, resulting in Lagrange's equations or the generalized equations of motion, These equations are equivalent to Newton's laws for the non-constraint forces. 
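The steps summarized above can be written compactly as follows (a sketch in one standard notation):

```latex
% D'Alembert's principle, and the vanishing virtual work of constraint forces:
\sum_{k=1}^{N}\left(\mathbf{N}_{k} + \mathbf{C}_{k} - m_{k}\mathbf{a}_{k}\right)\cdot\delta\mathbf{r}_{k} = 0,
\qquad
\sum_{k}\mathbf{C}_{k}\cdot\delta\mathbf{r}_{k} = 0
\;\Longrightarrow\;
\sum_{k}\left(\mathbf{N}_{k} - m_{k}\mathbf{a}_{k}\right)\cdot\delta\mathbf{r}_{k} = 0 .

% Virtual displacements and generalized forces in generalized coordinates:
\delta\mathbf{r}_{k} = \sum_{j=1}^{n}\frac{\partial\mathbf{r}_{k}}{\partial q_{j}}\,\delta q_{j},
\qquad
Q_{j} = \sum_{k=1}^{N}\mathbf{N}_{k}\cdot\frac{\partial\mathbf{r}_{k}}{\partial q_{j}},

% The resulting generalized equations of motion:
\frac{d}{dt}\frac{\partial T}{\partial \dot{q}_{j}} - \frac{\partial T}{\partial q_{j}} = Q_{j}.
```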
The generalized forces in this equation are derived from the non-constraint forces only – the constraint forces have been excluded from D'Alembert's principle and do not need to be found. The generalized forces may be non-conservative, provided they satisfy D'Alembert's principle. Euler–Lagrange equations and Hamilton's principle For a non-conservative force which depends on velocity, it may be possible to find a potential energy function V that depends on positions and velocities. If the generalized forces Qi can be derived from a potential V such that equating to Lagrange's equations and defining the Lagrangian as obtains Lagrange's equations of the second kind or the Euler–Lagrange equations of motion However, the Euler–Lagrange equations can only account for non-conservative forces if a potential can be found as shown. This may not always be possible for non-conservative forces, and Lagrange's equations do not involve any potential, only generalized forces; therefore they are more general than the Euler–Lagrange equations. The Euler–Lagrange equations also follow from the calculus of variations. The variation of the Lagrangian is which has a form similar to the total differential of L, but the virtual displacements and their time derivatives replace differentials, and there is no time increment in accordance with the definition of the virtual displacements. An integration by parts with respect to time can transfer the time derivative of δqj to the ∂L/∂(dqj/dt), in the process exchanging d(δqj)/dt for δqj, allowing the independent virtual displacements to be factorized from the derivatives of the Lagrangian, Now, if the condition holds for all j, the terms not integrated are zero. If in addition the entire time integral of δL is zero, then because the δqj are independent, and the only way for a definite integral to be zero is if the integrand equals zero, each of the coefficients of δqj must also be zero. Then we obtain the equations of motion. This can be summarized by Hamilton's principle: The time integral of the Lagrangian is another quantity called the action, defined as which is a functional; it takes in the Lagrangian function for all times between t1 and t2 and returns a scalar value. Its dimensions are the same as , ·, or ·. With this definition Hamilton's principle is Instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action, with the end points of the path in configuration space held fixed at the initial and final times. Hamilton's principle is one of several action principles. Historically, the idea of finding the shortest path a particle can follow subject to a force motivated the first applications of the calculus of variations to mechanical problems, such as the Brachistochrone problem solved by Jean Bernoulli in 1696, as well as Leibniz, Daniel Bernoulli, L'Hôpital around the same time, and Newton the following year. Newton himself was thinking along the lines of the variational calculus, but did not publish. These ideas in turn lead to the variational principles of mechanics, of Fermat, Maupertuis, Euler, Hamilton, and others. Hamilton's principle can be applied to nonholonomic constraints if the constraint equations can be put into a certain form, a linear combination of first order differentials in the coordinates. The resulting constraint equation can be rearranged into first order differential equation. This will not be given here. 
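In symbols, the condition on the generalized forces and the variational statement described above read as follows (standard forms, with the usual fixed-endpoint convention assumed):

```latex
% Generalized forces derivable from a (possibly velocity-dependent) potential:
Q_{j} = \frac{d}{dt}\frac{\partial V}{\partial \dot{q}_{j}} - \frac{\partial V}{\partial q_{j}}
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_{j}} - \frac{\partial L}{\partial q_{j}} = 0,
\qquad L = T - V .

% Hamilton's principle: the action is stationary with fixed endpoints.
S[\mathbf{q}] = \int_{t_{1}}^{t_{2}} L(\mathbf{q},\dot{\mathbf{q}},t)\,dt,
\qquad
\delta S = 0,
\qquad
\delta q_{j}(t_{1}) = \delta q_{j}(t_{2}) = 0 .
```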
Lagrange multipliers and constraints The Lagrangian L can be varied in the Cartesian rk coordinates, for N particles, Hamilton's principle is still valid even if the coordinates L is expressed in are not independent, here rk, but the constraints are still assumed to be holonomic. As always the end points are fixed for all k. What cannot be done is to simply equate the coefficients of δrk to zero because the δrk are not independent. Instead, the method of Lagrange multipliers can be used to include the constraints. Multiplying each constraint equation by a Lagrange multiplier λi for i = 1, 2, ..., C, and adding the results to the original Lagrangian, gives the new Lagrangian The Lagrange multipliers are arbitrary functions of time t, but not functions of the coordinates rk, so the multipliers are on equal footing with the position coordinates. Varying this new Lagrangian and integrating with respect to time gives The introduced multipliers can be found so that the coefficients of δrk are zero, even though the rk are not independent. The equations of motion follow. From the preceding analysis, obtaining the solution to this integral is equivalent to the statement which are Lagrange's equations of the first kind. Also, the λi Euler-Lagrange equations for the new Lagrangian return the constraint equations For the case of a conservative force given by the gradient of some potential energy V, a function of the rk coordinates only, substituting the Lagrangian gives and identifying the derivatives of kinetic energy as the (negative of the) resultant force, and the derivatives of the potential equaling the non-constraint force, it follows the constraint forces are thus giving the constraint forces explicitly in terms of the constraint equations and the Lagrange multipliers. Properties of the Lagrangian Non-uniqueness The Lagrangian of a given system is not unique. A Lagrangian L can be multiplied by a nonzero constant a and shifted by an arbitrary constant b, and the new Lagrangian will describe the same motion as L. If one restricts as above to trajectories q over a given time interval } and fixed end points and , then two Lagrangians describing the same system can differ by the "total time derivative" of a function : where means Both Lagrangians L and L′ produce the same equations of motion since the corresponding actions S and S′ are related via with the last two components and independent of q. Invariance under point transformations Given a set of generalized coordinates q, if we change these variables to a new set of generalized coordinates Q according to a point transformation which is invertible as , the new Lagrangian L′ is a function of the new coordinates and by the chain rule for partial differentiation, Lagrange's equations are invariant under this transformation; This may simplify the equations of motion. Cyclic coordinates and conserved momenta An important property of the Lagrangian is that conserved quantities can easily be read off from it. The generalized momentum "canonically conjugate to" the coordinate qi is defined by If the Lagrangian L does not depend on some coordinate qi, it follows immediately from the Euler–Lagrange equations that and integrating shows the corresponding generalized momentum equals a constant, a conserved quantity. This is a special case of Noether's theorem. Such coordinates are called "cyclic" or "ignorable". 
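A sketch of the constructions in this section, in one common notation assumed here:

```latex
% Augmented Lagrangian with multipliers, and the constraint forces it yields:
L' = L + \sum_{i=1}^{C}\lambda_{i}(t)\,f_{i}(\mathbf{r}_{1},\dots,\mathbf{r}_{N},t),
\qquad
\mathbf{C}_{k} = \sum_{i=1}^{C}\lambda_{i}\,\frac{\partial f_{i}}{\partial \mathbf{r}_{k}} .

% Non-uniqueness: adding a total time derivative leaves the dynamics unchanged.
L' = L + \frac{dF(\mathbf{q},t)}{dt} .

% Conjugate momentum and the cyclic-coordinate conservation law:
p_{j} \equiv \frac{\partial L}{\partial \dot{q}_{j}},
\qquad
\frac{\partial L}{\partial q_{j}} = 0
\;\Longrightarrow\;
\dot{p}_{j} = 0 .
```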
For example, a system may have a Lagrangian where r and z are lengths along straight lines, s is an arc length along some curve, and θ and φ are angles. Notice z, s, and φ are all absent in the Lagrangian even though their velocities are not. Then the momenta are all conserved quantities. The units and nature of each generalized momentum will depend on the corresponding coordinate; in this case pz is a translational momentum in the z direction, ps is also a translational momentum along the curve s is measured, and pφ is an angular momentum in the plane the angle φ is measured in. However complicated the motion of the system is, all the coordinates and velocities will vary in such a way that these momenta are conserved. Energy Given a Lagrangian the Hamiltonian of the corresponding mechanical system is, by definition, This quantity will be equivalent to energy if the generalized coordinates are natural coordinates, i.e., they have no explicit time dependence when expressing position vector: . From: where is a symmetric matrix that is defined for the derivation. Invariance under coordinate transformations At every time instant t, the energy is invariant under configuration space coordinate changes , i.e. (using natural coordinates) Besides this result, the proof below shows that, under such change of coordinates, the derivatives change as coefficients of a linear form. Conservation In Lagrangian mechanics, the system is closed if and only if its Lagrangian does not explicitly depend on time. The energy conservation law states that the energy of a closed system is an integral of motion. More precisely, let be an extremal. (In other words, satisfies the Euler–Lagrange equations). Taking the total time-derivative of L along this extremal and using the EL equations leads to If the Lagrangian L does not explicitly depend on time, then , then H does not vary with time evolution of particle, indeed, an integral of motion, meaning that Hence, if the chosen coordinates were natural coordinates, the energy is conserved. Kinetic and potential energies Under all these circumstances, the constant is the total energy of the system. The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant. This is a valuable simplification, since the energy E is a constant of integration that counts as an arbitrary constant for the problem, and it may be possible to integrate the velocities from this energy relation to solve for the coordinates. Mechanical similarity If the potential energy is a homogeneous function of the coordinates and independent of time, and all position vectors are scaled by the same nonzero constant α, , so that and time is scaled by a factor β, t′ = βt, then the velocities vk are scaled by a factor of α/β and the kinetic energy T by (α/β)2. The entire Lagrangian has been scaled by the same factor if Since the lengths and times have been scaled, the trajectories of the particles in the system follow geometrically similar paths differing in size. The length l traversed in time t in the original trajectory corresponds to a new length l′ traversed in time t′ in the new trajectory, given by the ratios Interacting particles For a given system, if two subsystems A and B are non-interacting, the Lagrangian L of the overall system is the sum of the Lagrangians LA and LB for the subsystems: If they do interact this is not possible. 
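In formulas, the energy function and the scaling relation described above are (standard results; the homogeneity degree is written d here to avoid clashing with the particle number N used earlier):

```latex
% Energy function (Hamiltonian) and its time dependence along solutions;
% H is conserved whenever L has no explicit time dependence, and with
% natural coordinates and L = T - V it equals T + V.
H = \sum_{j} \dot{q}_{j}\,\frac{\partial L}{\partial \dot{q}_{j}} - L ,
\qquad
\frac{dH}{dt} = -\frac{\partial L}{\partial t}.

% Mechanical similarity for a potential homogeneous of degree d,
% V(\alpha\mathbf{r}) = \alpha^{d} V(\mathbf{r}):
\frac{t'}{t} = \left(\frac{l'}{l}\right)^{1 - d/2}.
% e.g. d = -1 (Newtonian gravity) gives t^2 \propto l^3, Kepler's third law;
% d = 2 (harmonic oscillator) gives a period independent of amplitude.
```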
In some situations, it may be possible to separate the Lagrangian of the system L into the sum of non-interacting Lagrangians, plus another Lagrangian LAB containing information about the interaction, This may be physically motivated by taking the non-interacting Lagrangians to be kinetic energies only, while the interaction Lagrangian is the system's total potential energy. Also, in the limiting case of negligible interaction, LAB tends to zero reducing to the non-interacting case above. The extension to more than two non-interacting subsystems is straightforward – the overall Lagrangian is the sum of the separate Lagrangians for each subsystem. If there are interactions, then interaction Lagrangians may be added. Consequences of singular Lagrangians From the Euler-Lagrange equations, it follows that: where the matrix is defined as . If the matrix is non-singular, the above equations can be solved to represent as a function of . If the matrix is non-invertible, it would not be possible to represent all 's as a function of but also, the Hamiltonian equations of motions will not take the standard form. Examples The following examples apply Lagrange's equations of the second kind to mechanical problems. Conservative force A particle of mass m moves under the influence of a conservative force derived from the gradient ∇ of a scalar potential, If there are more particles, in accordance with the above results, the total kinetic energy is a sum over all the particle kinetic energies, and the potential is a function of all the coordinates. Cartesian coordinates The Lagrangian of the particle can be written The equations of motion for the particle are found by applying the Euler–Lagrange equation, for the x coordinate with derivatives hence and similarly for the y and z coordinates. Collecting the equations in vector form we find which is Newton's second law of motion for a particle subject to a conservative force. Polar coordinates in 2D and 3D Using the spherical coordinates as commonly used in physics (ISO 80000-2:2019 convention), where r is the radial distance to origin, θ is polar angle (also known as colatitude, zenith angle, normal angle, or inclination angle), and φ is the azimuthal angle, the Lagrangian for a central potential is So, in spherical coordinates, the Euler–Lagrange equations are The φ coordinate is cyclic since it does not appear in the Lagrangian, so the conserved momentum in the system is the angular momentum in which r, θ and dφ/dt can all vary with time, but only in such a way that pφ is constant. The Lagrangian in two-dimensional polar coordinates is recovered by fixing θ to the constant value π/2. Pendulum on a movable support Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the -direction. Let be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle from the vertical. The coordinates and velocity components of the pendulum bob are The generalized coordinates can be taken to be and . The kinetic energy of the system is then and the potential energy is giving the Lagrangian Since x is absent from the Lagrangian, it is a cyclic coordinate. 
The conserved momentum is and the Lagrange equation for the support coordinate is The Lagrange equation for the angle θ is and simplifying These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases, the correctness of this system can be verified: For example, should give the equations of motion for a simple pendulum that is at rest in some inertial frame, while should give the equations for a pendulum in a constantly accelerating system, etc. Furthermore, it is trivial to obtain the results numerically, given suitable starting conditions and a chosen time step, by stepping through the results iteratively. Two-body central force problem Two bodies of masses and with position vectors and are in orbit about each other due to an attractive central potential . We may write down the Lagrangian in terms of the position coordinates as they are, but it is an established procedure to convert the two-body problem into a one-body problem as follows. Introduce the Jacobi coordinates; the separation of the bodies and the location of the center of mass . The Lagrangian is then where is the total mass, is the reduced mass, and the potential of the radial force, which depends only on the magnitude of the separation . The Lagrangian splits into a center-of-mass term and a relative motion term . The Euler–Lagrange equation for is simply which states the center of mass moves in a straight line at constant velocity. Since the relative motion only depends on the magnitude of the separation, it is ideal to use polar coordinates and take , so is a cyclic coordinate with the corresponding conserved (angular) momentum The radial coordinate and angular velocity can vary with time, but only in such a way that is constant. The Lagrange equation for is This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. Eliminating the angular velocity from this radial equation, which is the equation of motion for a one-dimensional problem in which a particle of mass is subjected to the inward central force and a second outward force, called in this context the (Lagrangian) centrifugal force (see centrifugal force#Other uses of the term): Of course, if one remains entirely within the one-dimensional formulation, enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated. If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says: "Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." 
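For reference, the two-body reduction described above uses these standard expressions (notation assumed here):

```latex
% Lagrangian in Jacobi coordinates: center of mass R and separation r.
L = \tfrac{1}{2}M\dot{\mathbf{R}}^{2}
  + \tfrac{1}{2}\mu\left(\dot{r}^{2} + r^{2}\dot{\theta}^{2}\right) - V(r),
\qquad
M = m_{1} + m_{2},
\quad
\mu = \frac{m_{1}m_{2}}{m_{1} + m_{2}}.

% Conserved angular momentum and the effective one-dimensional radial equation;
% the last term is the "centrifugal" contribution discussed in the text.
\ell = \mu r^{2}\dot{\theta} = \text{const},
\qquad
\mu\ddot{r} = -\frac{dV}{dr} + \frac{\ell^{2}}{\mu r^{3}} .
```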
In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for a curved motion. This viewpoint, that fictitious forces originate in the choice of coordinates, often is expressed by users of the Lagrangian method. This view arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates. For example, see for a comparison of Lagrangians in an inertial and in a noninertial frame of reference.
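The "pendulum on a movable support" example above lends itself to a quick numerical check. The sketch below integrates the equations of motion obtained from that Lagrangian; the masses, length, time step, and initial conditions are arbitrary illustrative choices, not values from the article.

```python
import math

# Pendulum of mass m and length l hanging from a support (cart) of mass M
# that slides freely along x.  Generalized coordinates: x (support) and
# theta (pendulum angle from the vertical).  The Euler-Lagrange equations
# reduce to the coupled system
#   (M + m) x'' + m l (theta'' cos(theta) - theta'^2 sin(theta)) = 0
#   x'' cos(theta) + l theta'' + g sin(theta) = 0
# which is solved below for the accelerations x'' and theta''.

M, m, l, g = 2.0, 0.5, 1.0, 9.81   # illustrative values

def accelerations(x, theta, xdot, thetadot):
    s, c = math.sin(theta), math.cos(theta)
    denom = M + m * s * s
    xdd = m * s * (l * thetadot**2 + g * c) / denom
    thdd = -((M + m) * g * s + m * l * thetadot**2 * s * c) / (l * denom)
    return xdd, thdd

def rk4_step(state, dt):
    # Classical fourth-order Runge-Kutta step for the state (x, theta, x', theta').
    def f(s):
        x, th, xd, thd = s
        xdd, thdd = accelerations(x, th, xd, thd)
        return (xd, thd, xdd, thdd)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def energy(state):
    # T + V for this Lagrangian (kinetic plus gravitational potential energy).
    x, th, xd, thd = state
    T = 0.5 * (M + m) * xd**2 + 0.5 * m * (l**2 * thd**2 + 2 * l * xd * thd * math.cos(th))
    V = -m * g * l * math.cos(th)
    return T + V

state = (0.0, 0.6, 0.0, 0.0)       # start at rest, pendulum displaced by 0.6 rad
dt, steps = 1e-3, 5000
E0 = energy(state)
for _ in range(steps):
    state = rk4_step(state, dt)

# p_x = (M+m) x' + m l theta' cos(theta) is the momentum conjugate to the
# cyclic coordinate x; it and the total energy should stay numerically constant.
p_x = (M + m) * state[2] + m * l * state[3] * math.cos(state[1])
print("p_x =", p_x, " energy drift =", energy(state) - E0)
```

Running it, the conjugate momentum p_x stays at its initial value (zero here) and the energy drift remains small, exactly as the cyclic-coordinate and energy-conservation arguments above predict.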
Physical sciences
Classical mechanics
null
35038133
https://en.wikipedia.org/wiki/Pathogen
Pathogen
In biology, a pathogen (from the Greek words for "suffering, passion" and "producer of"), in the oldest and broadest sense, is any organism or agent that can produce disease. A pathogen may also be referred to as an infectious agent, or simply a germ. The term pathogen came into use in the 1880s. Typically, the term pathogen is used to describe an infectious microorganism or agent, such as a virus, bacterium, protozoan, prion, viroid, or fungus. Small animals, such as helminths and insects, can also cause or transmit disease. However, these animals are usually referred to as parasites rather than pathogens. The scientific study of microscopic organisms, including microscopic pathogenic organisms, is called microbiology, while parasitology refers to the scientific study of parasites and the organisms that host them. There are several pathways through which pathogens can invade a host. The principal pathways have different episodic time frames, but soil has the longest or most persistent potential for harboring a pathogen. Not all diseases are caused by pathogens; other causes include exposure to pollutants, as in black lung from coal dust, genetic disorders like sickle cell disease, and autoimmune diseases like lupus. Diseases in humans that are caused by infectious agents are known as pathogenic diseases. Pathogenicity Pathogenicity is the potential disease-causing capacity of pathogens, involving a combination of infectivity (the pathogen's ability to infect hosts) and virulence (the severity of host disease). Koch's postulates are used to establish causal relationships between microbial pathogens and diseases. Whereas meningitis can be caused by a variety of bacterial, viral, fungal, and parasitic pathogens, cholera is only caused by some strains of Vibrio cholerae. Additionally, some pathogens may only cause disease in hosts with an immunodeficiency. These opportunistic infections often involve hospital-acquired infections among patients already combating another condition. Infectivity involves pathogen transmission through direct contact with the bodily fluids or airborne droplets of infected hosts, indirect contact involving contaminated areas or items, or transfer by living vectors like mosquitoes and ticks. The basic reproduction number of an infection is the expected number of subsequent cases it is likely to cause through transmission. Virulence involves pathogens extracting host nutrients for their survival and evading host immune systems by producing microbial toxins and causing immunosuppression. Optimal virulence describes a theorized equilibrium between a pathogen spreading to additional hosts to parasitize their resources and lowering its virulence to keep hosts alive long enough for vertical transmission to their offspring. Types Algae Algae include single-celled eukaryotes that are generally non-pathogenic. Green algae from the genus Prototheca lack chlorophyll and are known to cause the disease protothecosis in humans, dogs, cats, and cattle, typically involving the soil-associated species Prototheca wickerhamii. Bacteria Bacteria are single-celled prokaryotes that range in size from 0.15 to 700 μm. While the vast majority are either harmless or beneficial to their hosts, such as members of the human gut microbiome that support digestion, a small percentage are pathogenic and cause infectious diseases. Bacterial virulence factors include adherence factors to attach to host cells, invasion factors supporting entry into host cells, capsules to prevent opsonization and phagocytosis, toxins, and siderophores to acquire iron.
The bacterial disease tuberculosis, primarily caused by Mycobacterium tuberculosis, has one of the highest disease burdens, killing 1.6 million people in 2021, mostly in Africa and Southeast Asia. Bacterial pneumonia is primarily caused by Streptococcus pneumoniae, Staphylococcus aureus, Klebsiella pneumoniae, and Haemophilus influenzae. Foodborne illnesses typically involve Campylobacter, Clostridium perfringens, Escherichia coli, Listeria monocytogenes, and Salmonella. Other infectious diseases caused by pathogenic bacteria include tetanus, typhoid fever, diphtheria, and leprosy. Fungi Fungi are eukaryotic organisms that can function as pathogens. There are approximately 300 known fungi that are pathogenic to humans, including Candida albicans, which is the most common cause of thrush, and Cryptococcus neoformans, which can cause a severe form of meningitis. Typical fungal spores are 4.7 μm long or smaller. Prions Prions are misfolded proteins that transmit their abnormal folding pattern to other copies of the protein without using nucleic acids. Besides obtaining prions from others, these misfolded proteins arise from genetic differences, either due to family history or sporadic mutations. Plants uptake prions from contaminated soil and transport them into their stem and leaves, potentially transmitting the prions to herbivorous animals. Additionally, wood, rocks, plastic, glass, cement, stainless steel, and aluminum have been shown binding, retaining, and releasing prions, showcasing that the proteins resist environmental degradation. Prions are best known for causing transmissible spongiform encephalopathy (TSE) diseases like Creutzfeldt–Jakob disease (CJD), variant Creutzfeldt–Jakob disease (vCJD), Gerstmann–Sträussler–Scheinker syndrome (GSS), fatal familial insomnia (FFI), and kuru in humans. While prions are typically viewed as pathogens that cause protein amyloid fibers to accumulate into neurodegenerative plaques, Susan Lindquist led research showing that yeast use prions to pass on evolutionarily beneficial traits. Viroids Not to be confused with virusoids or viruses, viroids are the smallest known infectious pathogens. Viroids are small single-stranded, circular RNA that are only known to cause plant diseases, such as the potato spindle tuber viroid that affects various agricultural crops. Viroid RNA is not protected by a protein coat, and it does not encode any proteins, only acting as a ribozyme to catalyze other biochemical reactions. Viruses Viruses are generally between 20–200 nm in diameter. For survival and replication, viruses inject their genome into host cells, insert those genes into the host genome, and hijack the host's machinery to produce hundreds of new viruses until the cell bursts open to release them for additional infections. The lytic cycle describes this active state of rapidly killing hosts, while the lysogenic cycle describes potentially hundreds of years of dormancy while integrated in the host genome. 
Alongside the taxonomy organized by the International Committee on Taxonomy of Viruses (ICTV), the Baltimore classification separates viruses by seven classes of mRNA production: I: dsDNA viruses (e.g., Adenoviruses, Herpesviruses, and Poxviruses) cause herpes, chickenpox, and smallpox II: ssDNA viruses (+ strand or "sense") DNA (e.g., Parvoviruses) include parvovirus B19 III: dsRNA viruses (e.g., Reoviruses) include rotaviruses IV: (+)ssRNA viruses (+ strand or sense) RNA (e.g., Coronaviruses, Picornaviruses, and Togaviruses) cause COVID-19, dengue fever, Hepatitis A, Hepatitis C, rubella, and yellow fever V: (−)ssRNA viruses (− strand or antisense) RNA (e.g., Orthomyxoviruses and Rhabdoviruses) cause ebola, influenza, measles, mumps, and rabies VI: ssRNA-RT viruses (+ strand or sense) RNA with DNA intermediate in life-cycle (e.g., Retroviruses) cause HIV/AIDS VII: dsDNA-RT viruses DNA with RNA intermediate in life-cycle (e.g., Hepadnaviruses) cause Hepatitis B Other parasites Protozoans are single-celled eukaryotes that feed on microorganisms and organic tissues. Many protozoans act as pathogenic parasites to cause diseases like malaria, amoebiasis, giardiasis, toxoplasmosis, cryptosporidiosis, trichomoniasis, Chagas disease, leishmaniasis, African trypanosomiasis (sleeping sickness), Acanthamoeba keratitis, and primary amoebic meningoencephalitis (naegleriasis). Parasitic worms (helminths) are macroparasites that can be seen by the naked eye. Worms live and feed in their living host, acquiring nutrients and shelter in the digestive tract or bloodstream of their host. They also manipulate the host's immune system by secreting immunomodulatory products which allows them to live in their host for years. Helminthiasis is the generalized term for parasitic worm infections, which typically involve roundworms, tapeworms, and flatworms. Pathogen hosts Bacteria While bacteria are typically viewed as pathogens, they serve as hosts to bacteriophage viruses (commonly known as phages). The bacteriophage life cycle involves the viruses injecting their genome into bacterial cells, inserting those genes into the bacterial genome, and hijacking the bacteria's machinery to produce hundreds of new phages until the cell bursts open to release them for additional infections. Typically, bacteriophages are only capable of infecting a specific species or strain. Streptococcus pyogenes uses a Cas9 nuclease to cleave foreign DNA matching the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) associated with bacteriophages, removing the viral genes to avoid infection. This mechanism has been modified for artificial CRISPR gene editing. Plants Plants can play host to a wide range of pathogen types, including viruses, bacteria, fungi, nematodes, and even other plants. Notable plant viruses include the papaya ringspot virus, which has caused millions of dollars of damage to farmers in Hawaii and Southeast Asia, and the tobacco mosaic virus which caused scientist Martinus Beijerinck to coin the term "virus" in 1898. Bacterial plant pathogens cause leaf spots, blight, and rot in many plant species. The most common bacterial pathogens for plants are Pseudomonas syringae and Ralstonia solanacearum, which cause leaf browning and other issues in potatoes, tomatoes, and bananas. Fungi are another major pathogen type for plants. They can cause a wide variety of issues such as shorter plant height, growths or pits on tree trunks, root or seed rot, and leaf spots. 
Common and serious plant fungi include the rice blast fungus, Dutch elm disease, chestnut blight and the black knot and brown rot diseases of cherries, plums, and peaches. It is estimated that pathogenic fungi alone cause up to a 65% reduction in crop yield. Overall, plants have a wide array of pathogens and it has been estimated that only 3% of the disease caused by plant pathogens can be managed. Animals Animals often get infected with many of the same or similar pathogens as humans including prions, viruses, bacteria, and fungi. While wild animals often get illnesses, the larger danger is for livestock animals. It is estimated that in rural settings, 90% or more of livestock deaths can be attributed to pathogens. Animal transmissible spongiform encephalopathy (TSEs) involving prions include bovine spongiform encephalopathy (mad cow disease), chronic wasting disease, scrapie, transmissible mink encephalopathy, feline spongiform encephalopathy, and ungulate spongiform encephalopathy. Other animal diseases include a variety of immunodeficiency disorders caused by viruses related to human immunodeficiency virus (HIV), such as BIV and FIV. Humans Humans can be infected with many types of pathogens, including prions, viruses, bacteria, and fungi, causing symptoms like sneezing, coughing, fever, vomiting, and potentially lethal organ failure. While some symptoms are caused by the pathogenic infection, others are caused by the immune system's efforts to kill the pathogen, such as feverishly high body temperatures meant to denature pathogenic cells. Treatment Prions Despite many attempts, no therapy has been shown to halt the progression of prion diseases. Viruses A variety of prevention and treatment options exist for some viral pathogens. Vaccines are one common and effective preventive measure against a variety of viral pathogens. Vaccines prime the immune system of the host, so that when the potential host encounters the virus in the wild, the immune system can defend against infection quickly. Vaccines designed against viruses include annual influenza vaccines and the two-dose MMR vaccine against measles, mumps, and rubella. Vaccines are not available against the viruses responsible for HIV/AIDS, dengue, and chikungunya. Treatment of viral infections often involves treating the symptoms of the infection, rather than providing medication to combat the viral pathogen itself. Treating the symptoms of a viral infection gives the host immune system time to develop antibodies against the viral pathogen. However, for HIV, highly active antiretroviral therapy (HAART) is conducted to prevent the viral disease from progressing into AIDS as immune cells are lost. Bacteria Much like viral pathogens, infection by certain bacterial pathogens can be prevented via vaccines. Vaccines against bacterial pathogens include the anthrax vaccine and pneumococcal vaccine. Many other bacterial pathogens lack vaccines as a preventive measure, but infection by these bacteria can often be treated or prevented with antibiotics. Common antibiotics include amoxicillin, ciprofloxacin, and doxycycline. Each antibiotic has different bacteria that it is effective against and has different mechanisms to kill that bacteria. For example, doxycycline inhibits the synthesis of new proteins in both gram-negative and gram-positive bacteria, which makes it a broad-spectrum antibiotic capable of killing most bacterial species. 
Due to misuse of antibiotics, such as prescriptions ended prematurely that expose bacteria to evolutionary pressure under sublethal doses, some bacterial pathogens have developed antibiotic resistance. For example, a genetically distinct strain of Staphylococcus aureus called MRSA is resistant to the commonly prescribed beta-lactam antibiotics. A 2013 report from the Centers for Disease Control and Prevention (CDC) estimated that in the United States, at least 2 million people get an antibiotic-resistant bacterial infection annually, with at least 23,000 of those patients dying from the infection. Because antibiotics are indispensable for combating bacteria, new antibiotics are continually needed for medical care. One target for new antimicrobial medications involves inhibiting DNA methyltransferases, as these proteins control the levels of expression for other genes, such as those encoding virulence factors. Fungi Infection by fungal pathogens is treated with anti-fungal medication. Athlete's foot, jock itch, and ringworm are fungal skin infections that are treated with topical anti-fungal medications like clotrimazole. Infections involving the yeast species Candida albicans cause oral thrush and vaginal yeast infections. These internal infections can be treated either with anti-fungal creams or with oral medication. Common anti-fungal drugs for internal infections include the echinocandin family of drugs and fluconazole. Algae While algae are commonly not thought of as pathogens, the genus Prototheca causes disease in humans. Treatment for protothecosis is currently under investigation, and clinical treatment is not yet standardized. Sexual interactions Many pathogens are capable of sexual interaction. Among pathogenic bacteria, sexual interaction occurs between cells of the same species by the process of genetic transformation. Transformation involves the transfer of DNA from a donor cell to a recipient cell and the integration of the donor DNA into the recipient genome through genetic recombination. The bacterial pathogens Helicobacter pylori, Haemophilus influenzae, Legionella pneumophila, Neisseria gonorrhoeae, and Streptococcus pneumoniae frequently undergo transformation to modify their genome for additional traits and evasion of host immune cells. Eukaryotic pathogens are often capable of sexual interaction by a process involving meiosis and fertilization. Meiosis involves the intimate pairing of homologous chromosomes and recombination between them. Examples of eukaryotic pathogens capable of sex include the protozoan parasites Plasmodium falciparum, Toxoplasma gondii, Trypanosoma brucei, and Giardia intestinalis, and the fungi Aspergillus fumigatus, Candida albicans, and Cryptococcus neoformans. Viruses may also undergo sexual interaction when two or more viral genomes enter the same host cell. This process involves pairing of homologous genomes and recombination between them by a process referred to as multiplicity reactivation. The herpes simplex virus, human immunodeficiency virus, and vaccinia virus undergo this form of sexual interaction. These processes of sexual recombination between homologous genomes support the repair of genetic damage caused by environmental stressors and host immune systems.
Biology and health sciences
Infectious disease
null
47839266
https://en.wikipedia.org/wiki/Sichuan%20pepper
Sichuan pepper
Sichuan pepper (, also known as Sichuanese pepper, Szechuan pepper, Chinese prickly ash, Chinese pepper, Mountain pepper, and mala pepper, is a spice commonly used in Sichuan cuisine in China, Bhutan and in northeast India. It is called mejenga in Assam, India. It is called Thingey (ཐིང༌ངེ༌) in Bhutan and is used in preparing Ezay (a side dish similar to chutney), to add spiciness to rice porridge (ཐུགཔ་), Ba-thup and noodle (buckwheat noodles similar to Soba) and other snacks. It is extensively used in preparing Blood sausage throughout Bhutan, Tibet and China. Despite its name, Sichuan pepper is not closely related to black pepper or chili peppers. It is made from a plant of the genus Zanthoxylum in the family Rutaceae, which includes citrus and rue. When eaten, Sichuan pepper produces a tingling, numbing effect due to the presence of hydroxy-alpha sanshool. The spice has the effect of transforming other flavors tasted together or shortly after. It is used in Sichuan dishes such as mapo doufu and Chongqing hot pot, and is often added to chili peppers to create a flavor known as málà (; ). In Nepal, Timur or Timut pepper is a commonly used spice often confused with Sichuan pepper because they look similar and share some characteristics. Species and cultivars Sichuan peppers have been used for culinary and medicinal purposes in China for centuries with numerous Zanthoxylum species called (). Commonly used sichuan peppers in China include (), or red Sichuan peppercorns, which are harvested from Zanthoxylum bungeanum, and () or (), green Sichuan peppercorns, harvested from Zanthoxylum armatum. Fresh green Sichuan peppercorns are also known as (). Red Sichuan pepper is typically characterized as stronger-tasting, while green Sichuan pepper is milder but fragrant and has a stronger numbing effect. Over the years, Chinese farmers have cultivated multiple strains of these two varieties. Zanthoxylum simulans, known as Chinese-pepper or flatspine prickly-ash, is the source of another red Sichuan peppercorn. Zanthoxylum armatum is found throughout the Himalayas, from Kashmir to Bhutan, as well as in Taiwan, Nepal, China, Philippines, Malaysia, Japan, and Pakistan, and is known by a variety of regional names, including () in Nepali and Hindko, () in Tibetan and in Bhutan. Other Zanthoxylum spices Zanthoxylum gilletii is an African species of Zanthoxylum used to produce spice uzazi. Similarly, other Zanthoxylum species are harvested for spice and seasoning production in a number of cultures and culinary traditions. These spices include andaliman, chopi, sancho, sanshō, teppal, and tirphal. Culinary uses Sichuan pepper is an important spice in Chinese, Nepali, Kashmiri, north east Indian, Tibetan, and Bhutanese cookery of the Himalayas. Sichuan pepper has a citrus-like flavor and induces a tingling numbness in the mouth, akin to a 50-hertz vibration, due to the presence of hydroxy-alpha sanshool. Food historian Harold McGee describes the effect of sanshools thus: Chinese cuisine Whole, green, freshly picked Sichuan pepper may be used in cooking, but dried Sichuan pepper is more commonly used. Once dried, the shiny black seeds inside the husk are discarded, along with any stems; the husk is what we know as Sichuan pepper or peppercorn. The peppercorn may be used whole or finely ground, as it is in five-spice powder. Ma la sauce (; ), common in Sichuan cooking, is a combination of Sichuan pepper and chili pepper, and it is a key ingredient in Chongqing hot pot. 
Sichuan pepper is also available as an oil (, marketed as either "Sichuan pepper oil", "Bunge prickly ash oil", or "huajiao oil"). Sichuan pepper infused oil can be used in dressing, dipping sauces, or any dish in which the flavor of the peppercorn is desired without the texture of the peppercorns themselves. () is a mixture of salt and Sichuan pepper, toasted and browned in a wok, and served as a condiment to accompany chicken, duck, and pork dishes. The leaves of the sichuan pepper tree are also used in soups and fried foods. Other regions One Himalayan specialty is the momo, a dumpling stuffed with vegetables, cottage cheese, or minced yak or beef, and flavored with Sichuan pepper, garlic, ginger, and onion. In Nepal, the mala flavor is known as (). In Korean cuisine, is often used to accompany fish soups such as chueo-tang. In Indonesian Batak cuisine, andaliman is ground and mixed with chilies and seasonings into a green sambal or chili paste. Arsik is a typical Indonesian dish containing andaliman. Medicinal uses In Traditional Chinese medicine, Zanthoxylum bungeanum has been used as a herbal remedy. It is listed in the Pharmacopoeia of the People's Republic of China and is prescribed for ailments as various as abdominal pains, toothache, and eczema. However, Sichuan pepper has no indications or accepted case for use in evidence-based medicine. Research has revealed that Z. bungeanum can have analgesic, anti-inflammatory, antibacterial, and antioxidant effects in model animals and cell cultures. In rabbits, Z. armatum was experimentally investigated for its potential use in treating gastrointestinal, respiratory, and cardiovascular disorders. Phytochemistry Important compounds of various Zanthoxylum species include: Zanthoxylum fagara (Central & Southern Africa, South America) — alkaloids, coumarins (Phytochemistry, 27, 3933, 1988) Zanthoxylum simulans (Taiwan) — Mostly beta-myrcene, limonene, 1,8-cineole, Z-beta-ocimene (J. Agri. & Food Chem., 44, 1096, 1996) Zanthoxylum armatum (Nepal) — linalool (50%), limonene, methyl cinnamate, cineole Zanthoxylum rhetsa (India) — Sabinene, limonene, pinenes, para-cymene, terpinenes, 4-terpineol, alpha-terpineol. (Zeitschrift f. Lebensmitteluntersuchung und -forschung A, 206, 228, 1998) Zanthoxylum piperitum (Japan [leaves]) — citronellal, citronellol, Z-3-hexenal (Bioscience, Biotechnology, and Biochemistry, 61, 491, 1997) Zanthoxylum acanthopodium (Indonesia) — citronellal, limonene Historical US import ban From 1968 to 2005, the United States Food and Drug Administration banned the importation of Sichuan peppercorns because they were found to be capable of carrying citrus canker (as the tree is in the same family, Rutaceae, as the genus Citrus). This bacterial disease, which is very difficult to control, could potentially harm the foliage and fruit of citrus crops in the U.S. The import ban was only loosely enforced until 2002. In 2005, the USDA and FDA allowed imports, provided the peppercorns were heated for ten minutes to approximately to kill any canker bacteria. Starting in 2007, the USDA no longer required peppercorns to be heated, fully ending the import ban on peppercorns.
Biology and health sciences
Herbs and spices
Plants
47840567
https://en.wikipedia.org/wiki/Allium
Allium
Allium is a large genus of monocotyledonous flowering plants with around 1000 accepted species, making Allium the largest genus in the family Amaryllidaceae and amongst the largest plant genera in the world. Many of the species are edible, and some have a long history of cultivation and human consumption as a vegetable including the onion, garlic, scallions, shallots, leeks, and chives, with onions being the second most grown vegetable globally after tomatoes as of 2023. Allium species occur in temperate climates of the Northern Hemisphere, except for a few species occurring in Chile (such as A. juncifolium), Brazil (A. sellovianum), and tropical Africa (A. spathaceum). They vary in height between . The flowers form an umbel at the top of a leafless stalk. The bulbs vary in size between species, from small (around 2–3 mm in diameter) to rather large (8–10 cm). Some species (such as Welsh onion A. fistulosum and leeks (A. ampeloprasum)) develop thickened leaf-bases rather than forming bulbs as such. Carl Linnaeus first described the genus Allium in 1753. The generic name Allium is the Latin word for garlic, and the type species for the genus is Allium sativum which means "cultivated garlic". The decision to include a species in the genus Allium is taxonomically difficult, and species boundaries are unclear. Estimates of the number of species are as low as 260, and as high as 979. In the APG III classification system, Allium is placed in the family Amaryllidaceae, subfamily Allioideae (formerly the family Alliaceae). In some of the older classification systems, Allium was placed in Liliaceae. Molecular phylogenetic studies have shown this circumscription of Liliaceae is not monophyletic. Various Allium species have been cultivated from the earliest times. About a dozen species are economically important as crops, or garden vegetables, and an increasing number of species are important as ornamental plants. Plants of the genus produce chemical compounds, mostly derived from cysteine sulfoxides, that give them a characteristic onion or garlic taste and odor. Many are used as food plants, though not all members of the genus are equally flavorful. In most cases, both bulb and leaves are edible. The characteristic Allium flavor depends on the sulfate content of the soil the plant grows in. In the rare occurrence of sulfur-free growth conditions, all Allium species completely lose their usual pungency. Description The genus Allium (alliums) is characterised by herbaceous geophyte perennials with true bulbs, some of which are borne on rhizomes, and an onion or garlic odor and flavor. The bulbs are solitary or clustered and tunicate and the plants are perennialized by the bulbs reforming annually from the base of the old bulbs, or are produced on the ends of rhizomes or, in a few species, at the ends of stolons. A small number of species have tuberous roots. The bulbs' outer coats are commonly brown or grey, with a smooth texture, and are fibrous, or with cellular reticulation. The inner coats of the bulbs are membranous. Many alliums have basal leaves that commonly wither away from the tips downward before or while the plants flower, but some species have persistent foliage. Plants produce from one to 12 leaves, most species having linear, channeled or flat leaf blades. The leaf blades are straight or variously coiled, but some species have broad leaves, including A. victorialis and A. tricoccum. The leaves are sessile, and very rarely narrowed into a petiole. 
The flowers, which are produced on scapes, are erect or in some species pendent, having six petal-like tepals produced in two whorls. The flowers have one style and six epipetalous stamens; the anthers and pollen can vary in color depending on the species. The ovaries are superior, and three-lobed with three locules. The fruits are capsules that open longitudinally along the capsule wall between the partitions of the locule. The seeds are black, and have a rounded shape. The terete or flattened flowering scapes are normally persistent. The inflorescences are umbels, in which the outside flowers bloom first and flowering progresses to the inside. Some species produce bulbils within the umbels, and in some species, such as Allium paradoxum, the bulbils replace some or all the flowers. The umbels are subtended by noticeable spathe bracts, which are commonly fused and normally have around three veins. Some bulbous alliums increase by forming little bulbs or "offsets" around the old one, as well as by seed. Several species can form many bulbils in the flowerhead; in the so-called "tree onion" or Egyptian onion (A. × proliferum) the bulbils are few, but large enough to be pickled. Many of the species of Allium have been used as food items throughout their ranges. There are several unrelated species that are somewhat similar in appearance to alliums but are poisonous (e.g. in North America, death camas, Toxicoscordion venenosum), but none of these has the distinctive scent of onions or garlic. Taxonomy With over 850 species, Allium is the sole genus in the Allieae, one of four tribes of subfamily Allioideae (Amaryllidaceae). New species continue to be described, and Allium is one of the largest monocotyledonous genera, but the precise taxonomy of Allium is poorly understood, with incorrect descriptions being widespread. The difficulties arise from the fact that the genus displays considerable polymorphism and has adapted to a wide variety of habitats. Furthermore, traditional classifications had been based on homoplasious characteristics (the independent evolution of similar features in species of different lineages). However, the genus has been shown to be monophyletic, containing three major clades, although some proposed subgenera are not. Some progress is being made using molecular phylogenetic methods, and the internal transcribed spacer (ITS) region, including the 5.8S rDNA and the two spacers ITS1 and ITS2, is one of the more commonly used markers in the study of the differentiation of the Allium species. Allium includes a number of taxonomic groupings previously considered separate genera (Caloscordum Herb., Milula Prain, and Nectaroscordum Lindl.). Allium spicatum had been treated by many authors as Milula spicata, the only species in the monospecific genus Milula. In 2000, it was shown to be embedded in Allium. Phylogeny History When Linnaeus formally described the genus Allium in his Species Plantarum (1753), there were thirty species with this name. He placed Allium in a grouping he referred to as Hexandria monogynia (i.e. six stamens and one pistil) containing 51 genera in all. Subdivision Linnaeus originally grouped his 30 species into three alliances, e.g. Foliis caulinis planis. Since then, many attempts have been made to divide the growing number of recognised species into infrageneric subgroupings, initially as sections, and then as subgenera further divided into sections. For a brief history, see Li et al. (2010). The modern era of phylogenetic analysis dates to 1996. 
In 2006 Friesen, Fritsch, and Blattner described a new classification with 15 subgenera, 56 sections, and about 780 species based on the nuclear ribosomal gene internal transcribed spacers. Some of the subgenera correspond to the once separate genera (Caloscordum, Milula, Nectaroscordum) included in the Gilliesieae. The terminology has varied with some authors subdividing subgenera into Sections and others Alliances. The term Alliance has also been used for subgroupings within species, e.g. Allium nigrum, and for subsections. Subsequent molecular phylogenetic studies have shown the 2006 classification is a considerable improvement over previous classifications, but some of its subgenera and sections are probably not monophyletic. Meanwhile, the number of new species continued to increase, reaching 800 by 2009, and the pace of discovery has not decreased. Detailed studies have focused on a number of subgenera, including Amerallium. Amerallium is strongly supported as monophyletic. Subgenus Melanocrommyum has also been the subject of considerable study (see below), while work on subgenus Allium has focussed on section Allium, including Allium ampeloprasum, although sampling was not sufficient to test the monophyly of the section. The major evolutionary lineages or lines correspond to the three major clades. Line one (the oldest) with three subgenera is predominantly bulbous, the second, with five subgenera and the third with seven subgenera contain both bulbous and rhizomatous taxa. Evolutionary lines and subgenera The three evolutionary lineages and 15 subgenera here represent the classification schemes of Friesen et al. (2006) and Li (2010), and subsequent additional species and revisions. Evolutionary lines and subgenera (number of sections/number of species) First evolutionary line (3 subgenera) Nectaroscordum (Lindl.) Asch. et Graebn Type: Allium siculum (1/3) Mediterranean bells, Sicilian honey garlic Microscordum (Maxim.) N. Friesen Type: Allium monanthum (1/1) Amerallium Traub Type: Allium canadense (12/135) Second evolutionary line (5 subgenera) Caloscordum (Herb.) R. M. Fritsch Type: Allium neriniflorum (1/3) Anguinum (G. Don ex Koch) N. Friesen Type: Allium victorialis (1/12) Porphyroprason (Ekberg) R. M. Fritsch Type: Allium oreophilum (1/1) Vvedenskya (Kamelin) R. M. Fritsch Type: Allium kujukense (1/1) Melanocrommyum (Webb et Berthel.) Rouy Type: Allium nigrum (20/160) Third evolutionary line (7 subgenera) Butomissa (Salisb.) N. Friesen Type: Allium ramosum (2/4) fragrant garlic Cyathophora R. M. Fritsch Type: Allium cyathophorum (3/5) Rhizirideum (G. Don ex Koch) Wendelbo s.s Type: Allium senescens (5/37) Allium L. Type: Allium sativum (15/300) Reticulatobulbosa (Kamelin) N. Friesen Type: Allium lineare (5/80) Polyprason Radic Type: Allium moschatum (4/50) Cepa (Mill.) Radic ́ Type: Allium cepa (5/30) onion, garden onion, bulb onion, common onion First evolutionary line Although this lineage consists of three subgenera, nearly all the species are attributed to subgenus Amerallium, the third largest subgenus of Allium. The lineage is considered to represent the most ancient line within Allium, and to be the only lineage that is purely bulbous, the other two having both bulbous and rhizomatous taxa. Within the lineage Amerallium is a sister group to the other two subgenera (Microscordum+Nectaroscordum). 
Second evolutionary line Nearly all the species in this lineage of five subgenera are accounted for by subgenus Melanocrommyum, which is most closely associated with subgenera Vvedenskya and Porphyroprason, phylogenetically. These three genera are late-branching whereas the remaining two subgenera, Caloscordum and Anguinum, are early branching. Third evolutionary line The third evolutionary line contains the greatest number of sections (seven), and also the largest subgenus of the genus Allium: subgenus Allium, which includes the type species of the genus, Allium sativum. This subgenus also contains the majority of the species in its lineage. Within the lineage, the phylogeny is complex. Two small subgenera, Butomissa and Cyathophora form a sister clade to the remaining five subgenera, with Butomissa as the first branching group. Amongst the remaining five subgenera, Rhizirideum forms a medium-sized subgenus that is the sister to the other four, larger, subgenera. This line may not be monophyletic. Proposed infrageneric groups Names from Allium sect. Acanthoprason Wendelbo Allium subsect. Acuminatae Ownbey ex Traub Allium sect. Amerallium Traub Allium sect. Anguinum G. Don Allium sect. Brevispatha Vals. Allium sect. Briseis Stearn Allium sect. Bromatorrhiza Ekberg Allium sect. Caloscordum Baker Allium subsect. Campanulatae Ownbey ex Traub Allium sect. Caulorhizideum Traub Allium subsect. Cepa Stearn Allium subsect. Cernuae Rchb. Allium sect. Codonoprasum Ekberg Allium sect. Falcatifolia N. Friesen Allium subsect. Falcifoliae Ownbey ex Traub Allium sect. Halpostemon Boiss. Allium sect. Haneltia F.O. Khass. Allium sect. Lophioprason Traub. Allium subg. Melanocrommyon (Webb & Berthel.) Rouy Allium subsect. Mexicana Traub Allium sect. Molium G. Don ex W.D.J. Koch Allium sect. Multicaulea F.O. Khass. & Yengal. Allium sect. Oreiprason F. Herm. Allium sect. Petroprason F. Herm. Allium subg. Polyprason Radic Allium sect. Porrum G. Don Allium sect. Rhiziridium G. Don ex W.D.J. Koch Allium sect. Rhophetoprason Traub Allium subsect. Sanbornae Ownbey ex Traub Allium sect. Schoenoprasum Dumort. Allium sect. Scorodon Allium sect. Unicaulea F.O. Khass. Etymology Some sources refer to Greek ἀλέω (aleo, to avoid) due to the odor of garlic. Distribution and habitat The majority of Allium species are native to the Northern Hemisphere, being spread throughout the holarctic region, from dry subtropics to the boreal zone, predominantly in Asia. Of the latter, 138 species occur in China, about a sixth of all Allium species, representing five subgenera. A few species are native to Africa and Central and South America. A single known exception, Allium dregeanum occurs in the Southern Hemisphere (South Africa). There are two centres of diversity, a major one from the Mediterranean Basin to Central Asia and Pakistan, while a minor one is found in western North America. The genus is especially diverse in the eastern Mediterranean. Ecology Species grow in various conditions from dry, well-drained mineral-based soils to moist, organic soils; most grow in sunny locations, but a number also grow in forests (e.g., A. ursinum), or even in swamps or water. Various Allium species are used as food plants by the larvae of the leek moth and onion fly as well as other Lepidoptera including cabbage moth, common swift moth (recorded on garlic), garden dart moth, large yellow underwing moth, nutmeg moth, setaceous Hebrew character moth, turnip moth and Schinia rosea, a moth that feeds exclusively on Allium species. 
Genetics The genus Allium has very large variation between species in their genome size that is not accompanied by changes in ploidy level. This remarkable variation was noted in the discussion of the evolution of junk DNA and resulted in the Onion Test, a "reality check for anyone who thinks they have come up with a universal function for junk DNA". Genome sizes vary between 7.5 Gb in A. schoenoprasum and 30.9 Gb in A. ursinum, both of which are diploid. Telomere The unusual telomeric repeat sequence of Allium cepa was discovered and cytologically validated to be CTCGGTTATGGG. A bioinformatics method for detecting this unique telomere sequence was demonstrated using SERF de novo genome analysis (an illustrative motif-counting sketch follows at the end of this article). Cultivation Many Allium species have been harvested through human history, but only about a dozen are still economically important today as crops or garden vegetables. Ornamental Many Allium species and hybrids are cultivated as ornamentals. These include A. cristophii and A. giganteum, which are used as border plants for their ornamental flowers and their "architectural" qualities. Several hybrids have been bred, or selected, with rich purple flowers. A. hollandicum 'Purple Sensation' is one of the most popular and has been given an Award of Garden Merit (H4). These ornamental onions produce spherical umbels on single stalks in spring and summer, in a wide variety of sizes and colours, ranging from white (Allium 'Mont Blanc') and blue (A. caeruleum) to yellow (A. flavum) and purple (A. giganteum). By contrast, other species (such as invasive A. triquetrum and A. ursinum) can become troublesome garden weeds. The following cultivars, of uncertain or mixed parentage, have gained the Royal Horticultural Society's Award of Garden Merit: 'Ambassador' 'Beau Regard' 'Gladiator' 'Globemaster' 'Michael H. Hoog' (A. rosenorum) 'Round 'n' Purple' 'Universe' Toxicity Dogs and cats are very susceptible to poisoning after the consumption of certain species. Even cattle have suffered onion toxicosis. Vegetables of the Allium genus can cause digestive disorders for human beings. Uses The genus includes many economically important species. These include onions (A. cepa), French shallots (A. oschaninii), leeks (A. ampeloprasum), garlic (A. sativum), and herbs such as scallions (various Allium species) and chives (A. schoenoprasum). Some have been used as traditional medicines. This genus also includes species that are abundantly gathered from the wild such as wild garlic (Allium ursinum) in Europe and ramps (Allium tricoccum) in North America.
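The following minimal sketch is not the SERF pipeline mentioned above; it is only an illustration, under stated assumptions, of what detecting the reported Allium cepa telomeric repeat can mean in practice: counting occurrences of the CTCGGTTATGGG motif, and of its reverse complement, in a DNA sequence. The function names and the example sequence are hypothetical and assume sequences containing only the characters A, C, G, T, and N.

# Illustrative sketch only (Python): count the reported A. cepa telomeric
# repeat, CTCGGTTATGGG, on both strands of a DNA sequence.
MOTIF = "CTCGGTTATGGG"

def reverse_complement(seq):
    # Map each base to its complement and reverse the order.
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}
    return "".join(pairs[base] for base in reversed(seq.upper()))

def count_motif(seq, motif=MOTIF):
    # Count (possibly overlapping) occurrences of the motif and of its
    # reverse complement, so that both strands are covered.
    seq = seq.upper()
    total = 0
    for m in (motif, reverse_complement(motif)):
        start = seq.find(m)
        while start != -1:
            total += 1
            start = seq.find(m, start + 1)
    return total

# Hypothetical example: a sequence end carrying three tandem copies of the repeat.
example = MOTIF * 3 + "ACGTACGT"
print(count_motif(example))  # prints 3

In a real analysis one would scan assembled contig ends or raw reads rather than a toy string, but the core operation, locating tandem copies of the repeat motif, is the same.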
Biology and health sciences
Asparagales
Plants
47840723
https://en.wikipedia.org/wiki/Leek
Leek
A leek is a vegetable, a cultivar of Allium ampeloprasum, the broadleaf wild leek (syn. Allium porrum). The edible part of the plant is a bundle of leaf sheaths that is sometimes erroneously called a stem or stalk. The genus Allium also contains the onion, garlic, shallot, scallion, chives, and Chinese onion. Three closely related vegetables, elephant garlic, kurrat and Persian leek or tareh, are also cultivars of A. ampeloprasum, although different in their culinary uses. Etymology Historically, many scientific names were used for leeks, but they are now all treated as cultivars of A. ampeloprasum. The name leek developed from the Old English word , from which the modern English name for garlic also derives. means 'onion' in Old English and has cognates in other Germanic languages: Danish 'onion', Icelandic 'onion', Norwegian 'onion', Swedish 'onion', German 'leek', Dutch 'Allium (any plant of this genus)'. Cultivation Leeks must be grown in soil that is loose and drained well; they can be grown in the same regions where onions can be grown. Leeks may be seeded directly, but are more typically sown at high density in seed-beds before being transplanted into the field. This happens at 12 weeks, when they have reached the thickness of a pencil. The optimum temperature for growth is around . Leeks are more cold-tolerant than other cultivated Allium species and can be produced year-round in Europe. They tolerate standing in the field for an extended harvest, which takes place up to 6 months from planting. Pests and diseases Leeks suffer from insect pests, including the thrips species Thrips tabaci and the leek moth. Leeks are also susceptible to leek rust (Puccinia allii). Damage from thrips is greatest when under water stress in hot, dry weather. In these conditions, insect reproduction occurs quickly while plant growth is slowed. Thrips can be controlled by chemical pesticides and by intercropping with legumes or other plants. Varieties Leek cultivars may be treated as a single cultivar group, e.g., as A. ampeloprasum 'Leek Group'. The cultivars can be subdivided in several ways, but the most common types are "summer leeks", intended for harvest in the season when planted, and overwintering leeks, meant to be harvested in the spring of the year following planting. Summer leek types are generally smaller than overwintering types; overwintering types are generally more strongly flavored. Cultivars include 'King Richard' and 'Tadorna Blue'. Culinary use Leeks have a mild, onion-like taste. In its raw state, the vegetable is crunchy and firm. The edible portions of the leek are the white base of the leaves (above the roots and stem base), the light green parts, and to a lesser extent, the dark green parts of the leaves. The dark green portion is usually discarded because it has a tough texture, but it can be sautéed or more commonly added to stock for flavor. A few leaves are sometimes tied with twine and other herbs to form a bouquet garni. Leeks are typically chopped into slices 5–10 mm thick. The slices tend to fall apart due to the layered structure of the leek. The different ways of preparing the vegetable are: Boiling turns it soft and mild in taste. Whole boiled leeks, served cold with vinaigrette, are popular in France, where leeks are nicknamed asperges du pauvre 'poor man's asparagus'. Frying leaves it crunchier and preserves the taste. Raw leeks can be used in salads, doing especially well when they are the prime ingredient. 
In Turkish cuisine, leeks are chopped into thick slices, then boiled and separated into leaves, and finally filled with a filling usually containing rice, herbs (generally parsley and dill), onion, and black pepper. For sarma with olive oil, currants, pine nuts, and cinnamon are added, and for sarma with meat, minced meat is added to the filling. In Turkey, especially zeytinyağlı pırasa (leek with olive oil), ekşili pırasa (sour leek), etli pırasa (leek with meat), pırasa musakka (leek musakka), pırasalı börek (börek with leek), and pırasa köftesi (leek meatballs) are also cooked. Papet Vaudois consists of boiled leeks and potatoes. It is the emblematic dish of the Canton of Vaud. Keftikas de Prasa, or leek patties, are a staple of Sephardic Jewish cuisine and are served on holidays such as Rosh HaShana and Passover. Leeks are an ingredient of cock-a-leekie soup, leek and potato soup, and vichyssoise, as well as plain leek soup. Because of their symbolism in Wales (see below), they have come to be used extensively in that country's cuisine. Elsewhere in Britain, leeks have come back in favor only in the last 50 years, having been overlooked for several centuries. Nutrition Raw leek (bulb and lower leaves) is 83% water, 14% carbohydrates, 1% protein, and contains negligible fat (table). A reference amount supplies of food energy and is a rich source (20% or more of the Daily Value, DV) of vitamin K (45% DV) and manganese (23% DV). It is a moderate source (10–19% DV) of vitamin B6, folate, vitamin C, and iron (table). Historical consumption The Hebrew Bible talks of , identified by commentators as leek, and says it is abundant in Egypt. Dried specimens from archaeological sites in ancient Egypt, as well as wall carvings and drawings, indicate that the leek was a part of the Egyptian diet from at least the second millennium BCE. Texts also show that it was grown in Mesopotamia from the beginning of the second-millennium BCE. Leeks were eaten in ancient Rome and regarded as superior to garlic and onions. The 1st century CE cookbook Apicius contains four recipes involving leeks. Raw leek was the favorite vegetable of the Emperor Nero, who consumed it in soup or oil, believing it beneficial to the quality of his voice. This earned him the nickname "Porrophagus" or "Leek Eater". Cultural significance The leek is one of the national emblems of Wales, and it or the daffodil (in Welsh, the daffodil is known as "Peter's leek", Cenhinen Bedr) is worn on St. David's Day. According to one Welsh myth, King Cadwaladr of Gwynedd ordered his soldiers to identify themselves by wearing the vegetable on their helmets in an ancient battle against the Saxons that took place in a leek field. The Elizabethan poet Michael Drayton stated, in contrast, that the tradition was a tribute to Saint David, who ate only leeks when he was fasting. The leek () has been known to be a symbol of Wales for a long time; Shakespeare, for example, refers to the custom of wearing a leek as an "ancient tradition" in Henry V (). In the play, Henry V tells the Welsh officer Fluellen that he, too, is wearing a leek "for I am Welsh, you know, good countryman." The 1985 and 1990 British one pound coins bear the design of a leek in a coronet, representing Wales. One version of the 2013 British one pound coin shows a leek with a daffodil. 
Alongside the other national floral emblems of countries currently and formerly in the Commonwealth or part of the United Kingdom (including the English Tudor Rose, Scottish thistle, Irish shamrock, Canadian maple leaf, and Indian lotus), the Welsh leek appeared on the coronation gown of Elizabeth II. Norman Hartnell designed it; when Hartnell asked if he could exchange the leek for the more aesthetically pleasing Welsh daffodil, he was told no. Perhaps the most visible use of the leek, however, is as the cap badge of the Welsh Guards, a battalion within the Household Division of the British Army. In Romania, the leek is also widely considered a symbol of Oltenia, a historical region in the country's southwestern part. Gallery
Biology and health sciences
Monocots
null
23396816
https://en.wikipedia.org/wiki/Pound%20per%20square%20inch
Pound per square inch
The pound per square inch (abbreviation: psi) or, more accurately, pound-force per square inch (symbol: lbf/in2), is a unit of measurement of pressure or of stress based on avoirdupois units and used primarily in the United States. It is the pressure resulting from a force with magnitude of one pound-force applied to an area of one square inch. In SI units, 1 psi is approximately . The pound per square inch absolute (psia) is used to make it clear that the pressure is relative to a vacuum rather than the ambient atmospheric pressure. Since atmospheric pressure at sea level is around , this will be added to any pressure reading made in air at sea level. The converse is pound per square inch gauge (psig), indicating that the pressure is relative to atmospheric pressure. For example, a bicycle tire pumped up to 65 psig in a local atmospheric pressure at sea level (14.7 psi) will have a pressure of 79.7 psia (14.7 psi + 65 psi). When gauge pressure is referenced to something other than ambient atmospheric pressure, then the unit is pound per square inch differential (psid). Multiples The kilopound per square inch (ksi) is a scaled unit derived from psi, equivalent to a thousand psi (1000 lbf/in2). ksi are not widely used for gas pressures. They are mostly used in materials science, where the tensile strength of a material is measured as a large number of psi. The conversion in SI units is 1 ksi = 6.895 MPa, or 1 MPa = 0.145 ksi. The megapound per square inch (Mpsi) is another multiple equal to a million psi. It is used in mechanics for the elastic modulus of materials, especially for metals. The conversion in SI units is 1 Mpsi = 6.895 GPa, or 1 GPa = 0.145 Mpsi. Magnitude Inch of water: 0.036 psid Blood pressure – clinically normal human blood pressure (120/80 millimetre of mercury (mmHg): 2.32 psig/1.55 psig Natural gas residential piped in for consumer appliance; 4–6 psig. Boost pressure provided by an automotive turbocharger (common): 6–15 psig NFL football: 12.5–13.5 psig Atmospheric pressure at sea level (standard): 14.7 psia Automobile tire overpressure (common): 32 psig Bicycle tire overpressure (common): 65 psig Workshop or garage air tools: 90 psig Railway air brakes or road brakes reservoir overpressure (common): 90–120 psig Road racing bicycle tire overpressure: 120 psig Steam locomotive fire tube boiler (UK, 20th century): 150–280 psig Union Pacific Big Boy steam locomotive boiler: 300 psig US Navy steam boiler pressure: 800 psi Natural gas pipelines: 800–1,000 psig Full SCBA (self-contained breathing apparatus) for IDLH (non-fire) atmospheres: 2,216 psig Nuclear reactor primary loop: 2300 psi Full SCUBA (self-contained underwater breathing apparatus) tank overpressure (common): 3,000 psig Full SCBA (self-contained breathing apparatus) for interior firefighting operations: 4,500 psig Airbus A380 hydraulic system: 5,000 psig Land Rover Td5 diesel engine fuel injection pressure: 22,500 psi Ultimate tensile strength of ASTM A36 steel: 58,000 psi Water jet cutter: 40,000–100,000 psig Conversions The conversions to and from SI are computed from exact definitions but result in a repeating decimal. As the pascal is a very small unit relative to industrial pressures, the kilopascal is commonly used. 1000 kPa ≈ 145 lbf/in2. Approximate conversions (rounded to some arbitrary number of digits, except when denoted by "≡") are shown in the following table.
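The conversion table itself is not reproduced above; as a rough illustration of the relationships the article states (1 psi ≈ 6.895 kPa, 1 ksi ≈ 6.895 MPa, and absolute pressure = gauge pressure + atmospheric pressure), the following sketch may help. It is an assumption-laden example rather than a standard library: the constants and function names are chosen here for illustration, with sea-level atmospheric pressure taken as about 14.7 psi as stated above.

# Illustrative sketch only (Python): unit relationships described in the article.
PSI_TO_PA = 6894.757    # one pound-force per square inch in pascals (approximate)
ATM_PSI = 14.7          # sea-level atmospheric pressure in psi (approximate, as above)

def psi_to_kpa(psi):
    # 1 psi is about 6.895 kPa.
    return psi * PSI_TO_PA / 1000.0

def psig_to_psia(gauge_psi, atmospheric_psi=ATM_PSI):
    # Absolute pressure = gauge pressure + ambient atmospheric pressure.
    return gauge_psi + atmospheric_psi

def ksi_to_mpa(ksi):
    # 1 ksi = 1000 psi, so the numeric factor to MPa is the same ~6.895.
    return ksi * PSI_TO_PA / 1000.0

print(round(psig_to_psia(65), 1))   # bicycle tire: 65 psig -> about 79.7 psia
print(round(psi_to_kpa(1), 3))      # 1 psi -> about 6.895 kPa
print(round(1000 / psi_to_kpa(1)))  # 1000 kPa -> about 145 psi
print(round(ksi_to_mpa(1), 3))      # 1 ksi -> about 6.895 MPa

Used this way, the bicycle-tire example given in the text (65 psig at sea level) reproduces the 79.7 psia figure stated above.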
Physical sciences
Pressure
Basics and measurement
47862520
https://en.wikipedia.org/wiki/Solanaceae
Solanaceae
The Solanaceae (), or the nightshades, is a family of flowering plants that ranges from annual and perennial herbs to vines, lianas, epiphytes, shrubs, and trees, and includes a number of agricultural crops, medicinal plants, spices, weeds, and ornamentals. Many members of the family contain potent alkaloids, and some are highly toxic, but many—including tomatoes, potatoes, eggplant, bell, and chili peppers—are used as food. The family belongs to the order Solanales, in the asterid group and class Magnoliopsida (dicotyledons). The Solanaceae consists of about 98 genera and some 2,700 species, with a great diversity of habitats, morphology and ecology. The name Solanaceae derives from the genus Solanum. The etymology of the Latin word is unclear. The name may come from a perceived resemblance of certain solanaceous flowers to the sun and its rays. At least one species of Solanum is known as the "sunberry". Alternatively, the name could originate from the Latin verb solare, meaning "to soothe", presumably referring to the soothing pharmacological properties of some of the psychoactive species of the family. The Solanaceae family includes a number of commonly collected or cultivated species. The most economically important genus of the family is Solanum, which contains the potato (S. tuberosum, in fact, another common name of the family is the "potato family"), the tomato (S. lycopersicum), and the eggplant or aubergine (S. melongena). Another important genus, Capsicum, produces both chili peppers and bell peppers. The genus Physalis produces the so-called groundcherries, as well as the tomatillo (Physalis philadelphica) and Physalis peruviana (Cape gooseberry). Alkekengi officinarum (Chinese Lantern) was previously included in the genus Physalis (as Physalis alkekengi), until molecular and genetic evidence placed it as the type species of a new genus. The genus Lycium contains the boxthorns and the goji berry, Lycium barbarum. Nicotiana contains, among other species, tobacco. Some other important members of Solanaceae include a number of ornamental plants such as Petunia, Browallia, and Lycianthes, and sources of psychoactive alkaloids, Datura, Mandragora (mandrake), and Atropa belladonna (deadly nightshade). Certain species are widely known for their medicinal uses, their psychotropic effects, or for being poisonous. This family has a worldwide distribution, being present on all continents except Antarctica. The greatest diversity in species is found in South America and Central America. In 2017, scientists reported on their discovery and analysis of a fossil species belonging to the living genus Physalis, Physalis infinemundi, found in the Patagonian region of Argentina, dated to 52 million years ago. The finding has pushed back the earliest appearance of the plant family Solanaceae. Most of the economically important genera are contained in the subfamily Solanoideae, with the exceptions of tobacco (Nicotiana tabacum, Nicotianoideae) and petunia (Petunia × hybrida, Petunioideae). Many of the Solanaceae, such as tobacco and petunia, are used as model organisms in the investigation of fundamental biological questions at the cellular, molecular, and genetic levels. Description Plants in the Solanaceae can take the form of herbs, shrubs, trees, vines and lianas, and sometimes epiphytes. They can be annuals, biennials, or perennials, upright or decumbent. Some have subterranean tubers. They do not have laticifers, nor latex, nor coloured saps. 
They can have a basal or terminal group of leaves or neither of these types. The leaves are generally alternate or alternate to opposed (that is, alternate at the base of the plant and opposed towards the inflorescence). The leaves can be herbaceous, leathery, or transformed into spines. The leaves are generally petiolate or subsessile, rarely sessile. They are frequently inodorous, but some are aromatic or fetid. The foliar lamina can be either simple or compound, and the latter can be either pinnatifid or ternate. The leaves have reticulated venation and lack a basal meristem. The laminae are generally dorsiventral and lack secretory cavities. The stomata are generally confined to one of a leaf's two sides; they are rarely found on both sides. The flowers are generally hermaphrodites, although some are monoecious, andromonoecious, or dioecious species (such as some Solanum or Symonanthus). Pollination is entomophilous. The flowers can be solitary or grouped into terminal, cymose, or axillary inflorescences. The flowers are medium-sized, fragrant (Nicotiana), fetid (Anthocercis), or inodorous. The flowers are usually actinomorphic, slightly zygomorphic, or markedly zygomorphic (for example, in flowers with a bilabial corolla in Schizanthus species). The irregularities in symmetry can be due to the androecium, to the perianth, or both at the same time. In the great majority of species, the flowers have a differentiated perianth with a calyx and corolla (with five sepals and five petals, respectively) an androecium with five stamens and two carpels forming a gynoecium with a superior ovary (they are therefore referred to as pentamers and tetracyclic). The stamens are epipetalous and are typically present in multiples of four or five, most commonly four or eight. They usually have a hypogynous disk. The calyx is gamosepalous (as the sepals are joined forming a tube), with the (4)5(6) segments equal, it has five lobes, with the lobes shorter than the tube, it is persistent and often accrescent. The corolla usually has five petals that are also joined forming a tube. Flower shapes are typically rotate (wheel-shaped, spreading in one plane, with a short tube) or tubular (elongated cylindrical tube), campanulated, or funnel-shaped. The androecium has (2)(4)5(6) free stamens within its opposite sepals (they alternate with the petals). They are usually fertile or, in some cases (for example in Salpiglossideae) they have staminodes. In the latter case, there is usually either one staminode (Salpiglossis) or three (Schizanthus). The anthers touch on their upper end forming a ring, or they are completely free, dorsifixed, or basifixed with poricide dehiscence or through small longitudinal cracks. The stamen's filament can be filiform or flat. The stamens can be inserted inside the coralline tube or exserted. The plants demonstrate simultaneous microsporogenesis, the microspores are tetrad, tetrahedral, or isobilateral. The pollen grains are bicellular at the moment of dehiscence, usually open and angular. The gynoecium is bicarpelar (rarely three- or five-locular) with a superior ovary and two locules, which may be secondarily divided by false septa, as is the case for Nicandreae and Datureae. The gynoecium is located in an oblique position relative to the flower's median plane. They have one style and one stigma; the latter is simple or bilobate. Each locule has one to 50 ovules that are anatropous or hemianatropous with axillar placentation. 
The development of the embryo sack can be the same as for Polygonum or Allium species. The embryo sack's nuclear poles become fused before fertilization. The three antipodes are usually ephemeral or persistent as in the case of Atropa. The fruit can be a berry as in the case of the tomato or wolfberry, or a dehiscent capsule as in Datura, or a drupe. The fruit has axial placentation. The capsules are normally septicidal or rarely loculicidal or valvate. The seeds are usually endospermic, oily (rarely starchy), and without obvious hairs. The seeds of most Solanaceae are round and flat, about in diameter. The embryo can be straight or curved, and has two cotyledons. Most species in the Solanaceae have 2n=24 chromosomes, but the number may be a higher multiple of 12 due to polyploidy. Wild potatoes, of which there are about 200, are predominantly diploid (2 × 12 = 24 chromosomes), but triploid (3 × 12 = 36 chromosomes), tetraploid (4 × 12 = 48 chromosomes), pentaploid (5 × 12 = 60) and even hexaploid (6 × 12 = 72 chromosome) species or populations exist. The cultivated species Solanum tuberosum has 4 × 12 = 48 chromosomes. Some Capsicum species have 2 × 12 = 24 chromosomes, while others have 26 chromosomes. Diversity of characteristics Despite the previous description, the Solanaceae exhibit a large morphological variability, even in their reproductive characteristics. Examples of this diversity include: The number of carpels that form the gynoecium In general, the Solanaceae have a gynoecium (the female part of the flower) formed of two carpels. However, Melananthus has a monocarpelar gynoecium, there are three or four carpels in Capsicum, three to five in Nicandra, some species of Jaborosa and Trianaea and four carpels in Iochroma umbellatum. The number of locules in the ovary The number of locules in the ovary is usually the same as the number of carpels. However, some species occur in which the numbers are not the same due to the existence of false septa (internal walls that subdivide each locule), such as in Datura and some members of the Lycieae (the genera Grabowskia and Vassobia). Type of ovules and their number The ovules are generally inverted, folded sharply backwards (anatropous), but some genera have ovules that are rotated at right angles to their stalk (campilotropous) as in Phrodus, Grabowskia or Vassobia), or are partially inverted (hemitropous as in Cestrum, Capsicum, Schizanthus and Lycium). The number of ovules per locule also varies from a few (two pairs in each locule in Grabowskia, one pair in each locule in Lycium) and very occasionally only one ovule is in each locule as for example in Melananthus. The type of fruit The fruits of the great majority of the Solanaceae are berries or capsules (including pyxidia) and less often drupes. Berries are common in the subfamilies Cestroideae, Solanoideae (with the exception of Datura, Oryctus, Grabowskia and the tribe Hyoscyameae) and the tribe Juanulloideae (with the exception of Markea). Capsules are characteristic of the subfamilies Cestroideae (with the exception of Cestrum) and Schizanthoideae, the tribes Salpiglossoideae and Anthocercidoideae, and the genus Datura. The tribe Hyoscyameae has pyxidia. Drupes are typical of the Lycieae tribe and in Iochrominae. 
Taxonomy The following taxonomic synopsis of the Solanaceae, including subfamilies, tribes and genera, is based on the most recent molecular phylogenetics studies of the family: Subdivision Cestroideae (Browallioideae) This subfamily is characterised by the presence of pericyclic fibres, an androecium with four or five stamens, frequently didynamous. The basic chromosome numbers are highly variable, from x=7 to x=13. The subfamily consists of eight genera (divided into three tribes) and about 195 species distributed throughout the Americas. The genus Cestrum is the most important, as it contains 175 of the 195 species in the subfamily. The Cestreae tribe is unusual because it includes taxa with long chromosomes (from 7.21 to 11.511 μm in length), when the rest of the family generally possesses short chromosomes (for example between 1.5 and 3.52 μm in the Nicotianoideae). Browallieae Hunz. Browallia L., genus with six species distributed throughout the Neotropical realm to Arizona in the United States Streptosolen Miers, monotypic genus native to the Andes Cestreae tribe Don, three genera of woody plants, generally shrubs Cestrum L., some 175 species distributed throughout the Neotropical realm Sessea Ruiz & Pav., 19 species from the Andes Vestia Willd., monotypic genus from Chile Salpiglossideae tribe (Benth.) Hunz. Reyesia Gay, four species, three confined to northern Chile and one in both northern Chile and northern Argentina Salpiglossis Ruiz & Pav., three species, two originating from southern South America and one from Mexico Goetzeoideae This subfamily is characterized by the presence of drupes as fruit and seeds with curved embryos and large fleshy cotyledons. The basic chromosome number is x=13. It includes four genera and five species distributed throughout the Greater Antilles. Some authors suggest their molecular data indicate the monotypic genera Tsoala Bosser & D'Arcy should be included in this subfamily, endemic to Madagascar, and Metternichia to the southeast of Brazil. Goetzeaceae Airy Shaw is considered as a synonym of this subfamily. Coeloneurum Radlk., monotypic genus endemic to Hispaniola Espadaea Rchb., monotypic, from Cuba Goetzea Wydler, includes two species from the Antilles Henoonia Griseb., monotypic, originating in Cuba Nicotianoideae Anthocercideae G.Don: This tribe, endemic to Australia, contains 31 species in seven genera. Molecular phylogenetic studies of the tribe indicate it is the sister of Nicotiana, and the genera Anthocercis, Anthotroche, Grammosolen, and Symonanthus are monophyletic. Some characteristics are also thought to be derived from within the tribe, such as the unilocular stamens with semicircular opercula, bracteolate flowers, and berries as fruit. Anthocercis Labill., 10 species, Australia Anthotroche Endl., four species, Australia Crenidium Haegi, monotypic genus, Australia Cyphanthera Miers, 9 species, Australia Duboisia R.Br., four species, Australia Grammosolen Haegi, two species, Australia Symonanthus Haegi, two species, Australia Nicotianeae tribe Dum. Nicotiana L., genus widely distributed, with 52 American species, 23 Australian, and one African Petunioideae Molecular phylogenetics indicates that Petunioideae is the sister clade of the subfamilies with chromosome number x=12 (Solanoideae and Nicotianoideae). They contain calistegins, alkaloids similar to the tropanes. The androecium is formed of four stamens (rarely five), usually with two different lengths. The basic chromosome number of this subfamily can be x=7, 8, 9 or 11. 
It consists of 13 genera and some 160 species distributed throughout Central and South America. Molecular data suggest the genera originated in Patagonia. Benthamiella, Combera, and Pantacantha form a clade that can be categorized as a tribe (Benthamielleae) that should be in the subfamily Goetzeoideae. Benthamiella Speg., 12 species native to Patagonia Bouchetia Dunal, three neotropical species Brunfelsia L., around 45 species from the neotropics Calibrachoa Cerv. ex La Llave & Lex., consists of 32 species from the neotropics. The morphological data suggest this genus should be included within the Petunia. However, the molecular and cytogenetic data indicate both should be kept separate. In fact, Calibrachoa has a basic chromosome number x=9, while that of Petunia is x=7 Combera Sandw., two species from Patagonia Fabiana Ruiz & Pav., 15 species native to the Andes Hunzikeria D'Arcy, three species from the southwest United States and Mexico Leptoglossis Benth., seven species from western South America Nierembergia Ruiz & Pav., 21 species from South America Pantacantha Speg., monospecific genus from Patagonia Petunia (Juss.) Wijsman, 18 species from South America Plowmania Hunz. & Subils, monotypic genus from Mexico and Guatemala Schizanthoideae The Schizanthoideae include annual and biennial plants with tropane alkaloids, without pericyclic fibres, with characteristic hair and pollen grains. The flowers are zygomorphic. The androecium has two stamens and three staminodes, anther dehiscence is explosive. In terms of fruit type, the Schizanthoidae retain the plesiomorphic fruit form of the family Solanaceae, capsules, which rely on an anemochorous, abiotic form of dispersal. This is present in Schizanthoidae due both to the genetic constraints of early divergence (see below) as well as Schizanthus evolution and presence in open habitats. The embryo is curved. The basic chromosome number is x=10. Schizanthus is a somewhat atypical genus among the Solanaceae due to its strongly zygomorphic flowers and basic chromosome number. Morphological and molecular data suggest Schizanthus is a sister genus to the other Solanaceae and diverged early from the rest, probably in the late Cretaceous or in the early Cenozoic, 50 million years ago. The great diversity of flower types within Schizanthus has been the product of the species' adaptation to the different types of pollinators that existed in the Mediterranean, high alpine, and desert ecosystems then present in Chile and adjacent areas of Argentina. Schizanthus Ruiz & Pav., 12 species originating from Chile Schwenckioideae Annual plants with pericyclic fibres, their flowers are zygomorphic, the androecium has four didynamous stamens or three staminodes; the embryo is straight and short. The basic chromosome number is x=12. It includes four genera and some 30 species distributed throughout South America. Heteranthia Nees & Mart., one species from Brazil Melananthus Walp., five species from Brazil, Cuba, and Guatemala Protoschwenkia Soler , monotypic genus from Bolivia and Brazil, some molecular phylogenetic studies have suggested this genus has an uncertain taxonomic position within the subfamily Schwenckia L., 22 species distributed throughout the neotropical regions of America Solanoideae Capsiceae Dumort Capsicum L. 
includes 40 accepted neotropical species Lycianthes (Dunal) Hassler, some 200 species distributed throughout America and Asia Datureae G.Don, two genera are perfectly differentiated at both the morphological and molecular levels, Brugmansia includes tree species, while Datura contains herbs or shrubs, the latter genus can be divided into three sections: Stramonium, Dutra and Ceratocaulis. The monotypic genus Trompettia has recently been created to accommodate the Bolivian shrub formerly known as Iochroma cardenasianum – now known to belong to Datureae and not Physaleae as previously thought. Brugmansia Persoon, six species from the Andes Datura L., 12 neotropical species Trompettia J.Dupin, Single species from Andean Bolivia Hyoscyameae Endl. Anisodus Link, four species from China, India and the Himalayas Atropa L., six species from Europe to North Africa and the Himalayas Atropanthe Pascher, monotypic genus from China Hyoscyamus L., 31 accepted species distributed from the Mediterranean to China Physochlaina G.Don, 6 accepted Euro-Asiatic species Przewalskia Maxim., 2 species from China Scopolia Jacq., disjunct distribution with two European species and two from East Asia Jaboroseae Miers Jaborosa Juss., genus that includes 23 species from South America Solandreae Miers Subtribe Juanulloinae consists 10 genera of trees and epiphytic shrubs with a neotropical distribution . Some of these genera (Dyssochroma, Merinthopodium and Trianaea) show a clear dependency on various species of bats both for pollination and dispersion of seeds. Doselia , four species from Colombia and Ecuador Dyssochroma Miers, two species from the south of Brazil Hawkesiophyton Hunz. two species from South America Juanulloa Ruiz & Pav., 11 species from South and Central America Markea Rich., 9 species from South and Central America Merinthopodium J. Donn. Sm. three species originating from South America Poortmannia Drake, one species, from Colombia, Ecuador and Peru (South America) Schultesianthus Hunz., eight neotropical species Trianaea Planch. & Linden, six South American species Subtribe Solandrinae, a monotypical subtribe, differs from Juanulloinae in that its embryos have incumbent cotyledons and semi-inferior ovaries. Solandra Sw., 10 species from the neotropical regions of America Lycieae Hunz. has a single genus of woody plants, which grow in arid or semiarid climates. The cosmopolitan genus Lycium has much morphological variability. Molecular phylogenetic studies suggest both Grabowskia and Phrodus are included in Lycium, and this genus, along with Nolana and Sclerophylax, form a clade (Lyciina), which currently lacks a taxonomic category. The red fleshy berries dispersed by birds are the main type of fruit in Lycium. The different types of fruit in this genus have evolved from the type of berry just mentioned to a drupe with a reduced number of seeds. Lycium L., 101 cosmopolitan species Mandragoreae (Wettst.) Hunz. & Barboza tribe does not have a defined systematic position according to molecular phylogenetic studies. Mandragora L., two species from Eurasia Nicandreae Wettst. is a tribe with two South American genera. Molecular phylogenetic studies indicate the genera are not interrelated nor are they related with other genera of the family, so their taxonomic position is uncertain. Exodeconus Raf., six species from western South America Nicandra Adans, one species distributed throughout neotropical regions Nolaneae Rchb. 
are mostly herbs and small shrubs with succulent leaves; they have very beautiful flowers that range from white to various shades of blue, and their fruit is a schizocarp that splits into several nutlets. Nolana L., 89 species distributed throughout western South America Physaleae Miers is a large tribe that is the sister of Capsiceae. Subtribe Iochrominae (Miers) Hunz., a clade within the Physaleae tribe, contains 37 species, mainly distributed in the Andes, assigned to six genera. The members of this subtribe are characterized by being woody shrubs or small trees with attractive tubular or rotate flowers. They also possess great floral diversity, encompassing every flower type present in the family. Their flowers can be red, orange, yellow, green, blue, purple, or white. The corolla can be tubular to rotate, with a variation of up to eight times in the length of the tube between the various species. Acnistus Schott, one species distributed throughout the neotropics Dunalia Kunth., five species from the Andes Eriolarynx Hunz., three species from Argentina and Bolivia Iochroma Benth., 24 species from the Andes Saracha Ruiz & Pav., eight species from the Andes Vassobia Rusby, two South American species Physalinae (Miers) Hunz., a monophyletic subtribe, contains 10 genera and includes herbs or woody shrubs with yellow, white, or purple solitary axillary flowers pollinated by bees. Once pollination occurs, the corolla falls and the calyx expands until it entirely covers the developing fruit (the calyx is called accrescent). In many species, the calyx turns yellow or orange on maturity. The berries contain many greenish to yellow-orange seeds, often with red or purple highlights. Alkekengi Mill., monotypic genus; a Far East species formerly included in genus Physalis (Physalis alkekengi L.) Brachistus Miers, three species from Mexico and Central America Calliphysalis, monotypic genus from the southeastern United States Capsicophysalis, monotypic genus from Mexico and Guatemala Cataracta, monotypic genus from Mexico Chamaesaracha (A.Gray) Benth. & Hook., has 10 species from Mexico and Central America Darcyanthus, genus with just 1 species originating in Bolivia and Peru Leucophysalis Rydberg, includes 3 species from the southwest of the United States and Mexico Oryctes S. Watson, monotypic genus from the southwest of the United States Physalis L., the largest genus of the subtribe, with 94 species distributed through the tropical regions of the Americas and with 1 species in China Quincula Raf. with just 1 species from the southwest of the United States and from Mexico Schraderanthus, monotypic genus from Mexico and Guatemala Trozelia Raf. with 2 species from Ecuador and Peru Tzeltalia, genus segregated from Physalis, with 2 species distributed throughout Mexico and Guatemala Witheringia L'Heritier, genus with 15 species from neotropical regions Subtribe Salpichroinae is a subtribe of Physaleae that includes 16 American species distributed in two genera: Nectouxia Kunth., monotypic genus that is endemic to Mexico Salpichroa Miers, genus with 15 species from the Andes and other regions of South America Subtribe Withaninae is a subtribe of Physaleae with a broad distribution, including 9 genera: Athenaea Sendtn., which includes 14 species from South America Cuatresia Hunz., with 11 neotropical species. Molecular studies indicate that this genus, along with Deprea and Larnax, has an uncertain taxonomic position Deprea Raf., with 55 neotropical species Discopodium Hochst.
with 2 species in tropical Africa Mellissia Hook. f., monotypic genus from Saint Helena with the common name Saint Helena boxwood (genus recently subsumed in Withania) Nothocestrum A.Gray with 4 species from Hawaii Tubocapsicum (Wettst.) Makino, with just one species endemic to China Withania Pauq., with 10 species native to the Canary Islands, Africa and Nepal Tribe Solaneae. The genera Cyphomandra Sendtn., Normania Lowe, Triguera Cav. and Lycopersicum Mill. have been transferred to Solanum. The tribe is therefore composed of two genera: Jaltomata Schltdl., which contains 50 neotropical species Solanum L., the largest genus in the family and one of the broadest of the angiosperms, with 1,328 species distributed across the whole world Incertae sedis The following genera have not yet been placed in any of the recognized subfamilies within the solanaceas (incertae sedis) Atrichodendron, monotypic genus from Vietnam Duckeodendron Kuhlmann, monotypic genus from the Amazon rainforest Latua, monotypic genus native to Chile Genera and distribution of species The Solanaceae contain 98 genera and some 2,700 species. Despite this immense richness of species, they are not uniformly distributed between the genera. The eight most important genera contain more than 60% of the species. Solanum – the genus that typifies the family – includes nearly 50% of the total species of the Solanaceae. Etymology The name "Solanaceae" comes to international scientific vocabulary from Neo-Latin, from Solanum, the type genus, + -aceae, a standardized suffix for plant family names in modern taxonomy. The genus name comes from the Classical Latin word solanum, referring to nightshades (especially Solanum nigrum), "probably from sol, 'sun', + -anum, neuter of -anus." Distribution and habitat Even though members of the Solanaceae are found on all continents except Antarctica, the greatest variety of species is found in Central America and South America. Centers of diversity also occur in Australia and Africa. Solanaceae occupy a great number of different ecosystems, from deserts to rainforests, and are often found in the secondary vegetation that colonizes disturbed areas. In general, plants in this family are of tropical and temperate distribution. Ecology The potato tuber moth (Phthorimaea operculella) is an oligophagous insect that prefers to feed on plants of the family Solanaceae, especially the potato plant (Solanum tuberosum). Female P. operculella use the leaves to lay their eggs, and the hatched larvae eat away at the mesophyll of the leaf. After feeding on the foliage, the larvae delve down and feed on the tubers and roots of the plant. Alkaloids Alkaloids are nitrogenous organic substances produced by plants as secondary metabolites, and they have an intense physiological action on animals even at low doses. Solanaceae are known for having a diverse range of alkaloids. To humans, these alkaloids can be desirable, toxic, or both. The tropanes are the most well-known of the alkaloids found in the Solanaceae. The plants that contain these substances have been used for centuries as poisons. However, despite being recognized as poisons, many of these substances have invaluable pharmaceutical properties. Many species contain a variety of alkaloids that can be more or less active or poisonous, such as scopolamine, atropine, hyoscyamine, and nicotine.
They are found in plants such as henbane (Hyoscyamus albus), belladonna (Atropa belladonna), jimson weed (Datura stramonium), mandrake (Mandragora autumnalis), tobacco, and others. Some of the main types of alkaloids are: Solanine: A toxic glycoalkaloid with a bitter taste, it has the formula C45H73NO15. It is formed by the alkaloid solanidine with a carbohydrate side chain. It is found in leaves, fruit, and tubers of various Solanaceae such as the potato and tomato. Its production is thought to be an adaptive defence strategy against herbivores. Substance intoxication from solanine is characterized by gastrointestinal disorders (diarrhoea, vomiting, abdominal pain) and neurological disorders (hallucinations and headache). The median lethal dose is between 2 and 5 mg/kg of body weight. Symptoms manifest 8 to 12 hours after ingestion. The amount of these glycoalkaloids in potatoes, for example, varies significantly depending on environmental conditions during their cultivation, the length of storage, and the variety. The average glycoalkaloid concentration is 0.075 mg/g of potato. Solanine has occasionally been responsible for poisonings in people who ate berries from species such as Solanum nigrum or Solanum dulcamara, or green potatoes. Tropanes: The term "tropane" comes from a genus in which they are found, Atropa (the belladonna genus). Atropa is named after the Greek Fate, Atropos, who cut the thread of life. This nomenclature reflects its toxicity and lethality. They are bicyclic organic nitrogen compounds (IUPAC nomenclature: 8-methyl-8-azabicyclo[3.2.1]octane), with the chemical formula of C8H15N. These alkaloids include, among others, atropine, cocaine, scopolamine, and hyoscyamine. They are found in various species, such as mandrake (Mandragora officinarum and M. autumnalis ), black henbane or stinking nightshade (Hyoscyamus niger), belladonna (Atropa belladonna), jimson weed or devil's snare (Datura stramonium) and Brugmansia , as well as many others in the family Solanaceae. Pharmacologically, they are the most powerful known anticholinergics in existence, meaning they inhibit the neurological signals transmitted by the endogenous neurotransmitter, acetylcholine. More commonly, they can halt many types of allergic reactions. Symptoms of overdose may include dry mouth, dilated pupils, ataxia, urinary retention, hallucinations, convulsions, coma, and death. Atropine, a commonly used ophthalmological agent, dilates the pupils and thus facilitates examination of the interior of the eye. In fact, juice from the berries of A. belladonna were used by Italian courtesans during the Renaissance to exaggerate the size of their eyes by causing the dilation of their pupils ("bella donna" means "pretty woman" in Italian). Despite the extreme toxicity of the tropanes, they are useful drugs when administered in extremely small dosages. They can reverse cholinergic poisoning, which can be caused by overexposure to organophosphate insecticides and chemical warfare agents such as sarin and VX. Scopolamine (found in Hyoscyamus muticus and Scopolia carniolica), is used as an antiemetic against motion sickness or for people suffering from nausea as a result of receiving chemotherapy. Scopolamine and hyoscyamine are the most widely used tropane alkaloids in pharmacology and medicine due to their effects on the parasympathetic nervous system. Atropine has a stimulant effect on the central nervous system and heart, whereas scopolamine has a sedative effect. 
These alkaloids cannot be substituted by any other class of compounds, so they are still in demand. This is one of the reasons for the development of an active field of research into the metabolism of the alkaloids, the enzymes involved, and the genes that produce them. Hyoscyamine 6-β-hydroxylase, for example, catalyses the hydroxylation of hyoscyamine that leads to the production of scopolamine at the end of the tropane's biosynthetic pathway. This enzyme has been isolated and the corresponding gene cloned from three species: H. niger, A. belladonna and B. candida. Nicotine: Nicotine (IUPAC nomenclature (S)-3-(1-methylpyrrolidin-2-yl) pyridine) is a pyrrolidine alkaloid produced in large quantities in the tobacco plant (Nicotiana tabacum). Edible Solanaceae such as eggplants, tomatoes, potatoes, and peppers also contain nicotine, but at concentrations 100,000 to 1,000,000 times less than tobacco. Nicotine's function in a plant is to act as a defense against herbivores, as it is a very effective neurotoxin, in particular against insects. In fact, nicotine has been used for many years as an insecticide, though its use is currently being replaced by synthetic molecules derived from its structure. At low concentrations, nicotine acts as a stimulant in mammals, which causes the dependency in smokers. Like the tropanes, it acts on cholinergic neurons, but with the opposite effect (it is an agonist as opposed to an antagonist). It has a higher specificity for nicotinic acetylcholine receptors than other ACh proteins. Capsaicin: Capsaicin (IUPAC nomenclature 8-methyl-N-vanillyl-trans-6-nonenamide) is structurally different from nicotine and the tropanes. It is found in species of the genus Capsicum, which includes chilis and habaneros and it is the active ingredient that determines the Scoville rating of these spices. The compound is not noticeably toxic to humans. However, it stimulates specific pain receptors in the majority of mammals, specifically those related to the perception of heat in the oral mucosa and other epithelial tissues. When capsaicin comes into contact with these mucosae, it causes a burning sensation little different from a burn caused by fire. Capsaicin affects only mammals, not birds. Pepper seeds can survive the digestive tracts of birds; their fruit becomes brightly coloured once its seeds are mature enough to germinate, thereby attracting the attention of birds that then distribute the seeds. Capsaicin extract is used to make pepper spray, a useful deterrent against aggressive mammals. Economic importance The family Solanaceae contains such important food species as the potato (Solanum tuberosum), the tomato (Solanum lycopersicum), the pepper (Capsicum annuum) and the aubergine or eggplant (Solanum melongena). Nicotiana tabacum, originally from South America, is now cultivated throughout the world to produce tobacco. Many solanaceas are important weeds in various parts of the world. Their importance lies in the fact that they can host pathogens or diseases of the cultivated plants, therefore their presence increases the loss of yield or the quality of the harvested product. An example of this can be seen with Acnistus arborescens and Browalia americana that host thrips, which cause damage to associated cultivated plants, and certain species of Datura that play host to various types of virus that are later transmitted to cultivated solanaceas. 
Some weed species, such as Solanum mauritianum in South Africa, represent such serious ecological and economic problems that studies are being carried out with the objective of developing a biological control through the use of insects. A wide variety of plant species and their cultivars belonging to the Solanaceae are grown as ornamental trees, shrubs, annuals and herbaceous perennials. Examples include Brugmansia × candida ("angel's trumpet"), grown for its large pendulous trumpet-shaped flowers, or Brunfelsia latifolia, whose flowers are very fragrant and change colour from violet to white over a period of 3 days. Other shrub species that are grown for their attractive flowers are Lycianthes rantonnetii (Blue Potato Bush or Paraguay Nightshade), with violet-blue flowers, and Nicotiana glauca ("Tree Tobacco"). Other solanaceous species and genera that are grown as ornamentals are the petunia (Petunia × hybrida), Lycium, Solanum, Cestrum, Calibrachoa × hybrida and Solandra. There is even a hybrid between Petunia and Calibrachoa (which constitutes a new nothogenus called × Petchoa G. Boker & J. Shaw) that is being sold as an ornamental. Many other species, in particular those that produce alkaloids, are used in pharmacology and medicine (Nicotiana, Hyoscyamus, and Datura). Solanaceae and the genome Many of the species belonging to this family, among them tobacco and the tomato, are model organisms that are used for research into fundamental biological questions. One of the aspects of the solanaceas' genomics is an international project that is trying to understand how the same collection of genes and proteins can give rise to a group of organisms that are so morphologically and ecologically different. The first objective of this project was to sequence the genome of the tomato. In order to achieve this, each of the 12 chromosomes of the tomato's haploid genome was assigned to different sequencing centres in different countries. Thus, chromosomes 1 and 10 were sequenced in the United States, 3 and 11 in China, 2 in Korea, 4 in Britain, 5 in India, 7 in France, 8 in Japan, 9 in Spain and 12 in Italy. The sequencing of the mitochondrial genome was carried out in Argentina, and the chloroplast genome was sequenced in the European Union.
Biology and health sciences
Solanales
null
47862531
https://en.wikipedia.org/wiki/Cayenne%20pepper
Cayenne pepper
The cayenne pepper is a type of Capsicum annuum. It is usually a hot chili pepper used to flavor dishes. Cayenne peppers are a group of tapering, 10 to 25 cm long, generally skinny, mostly red-colored peppers, often with a curved tip and somewhat rippled skin, which hang from the bush as opposed to growing upright. Most varieties are generally rated at 30,000 to 50,000 Scoville units. The fruits are generally dried and ground to make the powdered spice of the same name. However, cayenne powder may be a blend of different types of peppers, quite often not containing cayenne peppers, and may or may not contain the seeds. Cayenne is used in cooking spicy dishes either as a powder or in its whole form. It is also used as an herbal supplement. Etymology The word cayenne is thought to be a corruption of the word kyynha, meaning "capsicum" in the Old Tupi language once spoken in Brazil. The town Cayenne in French Guiana is related to the name, and may have been named for the pepper or the Cayenne River. English botanist Nicholas Culpeper used the phrase "cayenne pepper" in 1652, while the city was only renamed as Cayenne in 1777. Taxonomy The cayenne pepper is a type of Capsicum annuum, as are bell peppers, jalapeños, pimientos, and many others. The genus Capsicum is in the nightshade family (Solanaceae). Cayenne peppers are often said to belong to the frutescens variety, but frutescens peppers are now defined as peppers whose fruit grows upright on the bush (such as tabasco peppers); thus, what are known in English as cayenne peppers are by definition not frutescens. Culpeper, in his Complete Herbal from 1653, mentions cayenne pepper as a synonym for what he calls "pepper (guinea)", although he refers to Capsicum peppers in general in his entry. By the end of the 19th century, "Guinea pepper" had come to mean bird's eye chili or piri-piri. In the 19th century, modern cayenne peppers were classified as C. longum; this name was later synonymised with C. frutescens. Cayenne powder, however, has generally been made from the bird's eye peppers, in the 19th century classified as C. minimum. Varieties Cayenne peppers are tapering, 10 to 25 cm long, generally skinny, mostly red-colored peppers, often with a curved tip and somewhat rippled skin, which hang from the bush as opposed to growing upright. There are many specific cultivars, such as Cow-horn, Cayenne Sweet, Cayenne Buist's Yellow, Golden Cayenne, Cayenne Carolina, Cayenne Indonesian, Joe's Long, Cayenne Large Red Thick, Cayenne Long Thick Red, Ring of Fire, Cayenne Passion, Cayenne Thomas Jefferson, Cayenne Iberian, Cayenne Turkish, Egyptian Cayenne, Cayenne Violet, or Numex Las Cruces Cayenne. Although most modern cayenne peppers are colored red, yellow and purple varieties exist, and in the 19th century yellow varieties were common. Most types are hot, although a number of mild variants exist. Most varieties are generally rated at 30,000 to 50,000 Scoville units, although some are rated at 20,000 or less. In cuisine Cayenne powder may be a blend of different types of chili peppers. It is used in its fresh form, or as dried powder, on seafood, all types of egg dishes (devilled eggs, omelettes, soufflés), meats and stews, casseroles, cheese dishes, hot sauces, and curries. In North America, the primary cultivar in crushed red pepper is cayenne. It is also used in some varieties of hot sauce in North America, such as Frank's RedHot, Texas Pete and Crystal.
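The Scoville figures quoted above can be translated into a rough capsaicinoid concentration using the commonly cited approximation that pure capsaicin rates about 16 million Scoville heat units, i.e. roughly 16 SHU per ppm of capsaicinoids. The short Python sketch below is illustrative only; the conversion factor is an assumption taken from that general rule of thumb, not from this article.

# Rough Scoville-to-capsaicinoid conversion (illustrative sketch).
# Assumption: ~16 SHU per ppm of capsaicinoids (pure capsaicin is commonly
# cited at about 16,000,000 SHU); this factor is not taken from the article.
SHU_PER_PPM = 16.0

def shu_to_ppm(shu: float) -> float:
    """Approximate capsaicinoid concentration (ppm) implied by a SHU rating."""
    return shu / SHU_PER_PPM

if __name__ == "__main__":
    for shu in (30_000, 50_000):  # typical cayenne range quoted above
        print(f"{shu:,} SHU is roughly {shu_to_ppm(shu):,.0f} ppm capsaicinoids")

On that rule of thumb, the 30,000 to 50,000 SHU range quoted for cayenne would correspond very roughly to 1,900 to 3,100 ppm of capsaicinoids.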
Biology and health sciences
Botanical fruits used as culinary vegetables
Plants
47862537
https://en.wikipedia.org/wiki/Habanero
Habanero
The habanero is a pungent cultivar of Capsicum chinense chili pepper. Unripe habaneros are green, and they color as they mature. The most common color variants are orange and red, but the fruit may also be white, brown, yellow, green, or purple. Typically, a ripe habanero is a few centimetres long. Habanero chilis are very hot, rated 100,000–350,000 on the Scoville scale. The habanero's heat, flavor, and floral aroma make it a common ingredient in hot sauces and other spicy foods. Name The habanero is named after the Cuban city of La Habana, known in English as Havana, because it used to feature heavily in trading there. (Despite the name, habaneros and other spicy-hot ingredients are rarely used in traditional Cuban cooking.) In English, it is sometimes incorrectly spelled habañero and pronounced accordingly, the tilde being added as a hyperforeignism patterned after jalapeño. Origin and use The habanero chili comes from the Amazon, from which it spread, reaching Mexico. Today, the largest producer of the habanero pepper is the Yucatán Peninsula, in Mexico. Habaneros are an integral part of Yucatecan food, accompanying most dishes, either in natural form or as a purée or salsa. Other modern producers include Belize, Panama, Costa Rica, Colombia, Ecuador, and parts of the United States, including Texas, Idaho, and California. The habanero chili was disseminated by Spanish colonists to other areas of the world, to the point that 18th-century taxonomists mistook China for its place of origin and called it Capsicum chinense ("the Chinese pepper"). The Scotch bonnet is often compared to the habanero, since they are two varieties of the same species, but they have different pod types. Both the Scotch bonnet and the habanero have thin, waxy flesh. They have a similar heat level and flavor. Both varieties average around the same level of pungency, but the actual degree varies greatly from one fruit to another according to genetics, growing methods, climate, and plant stress. In 1999, the habanero was listed by Guinness World Records as the world's hottest chili, but it has since been displaced by other peppers. The heat of the habanero does not take effect immediately, but sets in over a period of a few minutes and lasts up to an hour in the mouth. The heat can sometimes be felt in the esophagus some hours after consumption. The Trinidad moruga scorpion has since been identified as a native Capsicum chinense subspecies even hotter than the habanero. Breeders constantly crossbreed subspecies to attempt to create cultivars that will break the record on the Scoville scale. One example is the Carolina Reaper, supposedly a cross between a bhut jolokia pepper and a particularly pungent red habanero. Cultivation Habaneros thrive in hot weather. Like all peppers, the habanero does well in an area with good morning sun and in soil with a pH level around 5 to 6 (slightly acidic). Habaneros which are watered daily produce more vegetative growth but the same number of fruit, with lower concentrations of capsaicin, as compared to plants which are watered only when dry (every seven days). Overly moist soil and roots will produce bitter-tasting peppers. Daily watering during flowering and early setting of fruit helps prevent flowers and immature fruit from dropping, but flower dropping rates often reach 90% even in ideal conditions. The habanero is a perennial flowering plant, meaning that with proper care and growing conditions, it can produce flowers (and thus fruit) for many years.
Habanero bushes are good candidates for a container garden. In temperate climates, though, it is treated as an annual, dying each winter and being replaced the next spring. In tropical and subtropical regions, the habanero, like other chiles, will produce year round. As long as conditions are favorable, the plant will set fruit continuously. Cultivars Several growers have attempted to selectively breed habanero plants to produce hotter, heavier, and larger peppers. Most habaneros rate between 200,000 and 300,000 on the Scoville scale. In 2004, researchers in Texas created a mild version of the habanero, but retained the traditional aroma and flavor. The milder version was obtained by crossing the Yucatán habanero pepper with a heatless habanero from Bolivia over several generations. Breeder Michael Mazourek used a mutation discovered by the Chile Pepper Institute to create a heatless version labeled the 'Habanada' bred in 2007 and released in 2014. Black habanero is an alternative name often used to describe the dark brown variety of habanero chilis, which are slightly smaller and more spherical. Some seeds have been found which are thought to be over 7,000 years old. The black habanero has an exotic and unusual taste, and is hotter than a regular habanero with a rating between 425,000 and 577,000 Scoville units. Small slivers used in cooking can have a dramatic effect on the overall dish. Black habaneros take considerably longer to grow than other habanero chili varieties. Caribbean Red, a cultivar within the habanero family, has a citrusy and slightly smoky flavor, with a Scoville rating ranging from 300,000 to 445,000 Scoville units. Gallery
Biology and health sciences
Botanical fruits used as culinary vegetables
Plants
47863693
https://en.wikipedia.org/wiki/Pigeon%20pea
Pigeon pea
The pigeon pea (Cajanus cajan) or toor dal is a perennial legume from the family Fabaceae native to the Eastern Hemisphere. The pigeon pea is widely cultivated in tropical and semitropical regions around the world, being commonly consumed in South Asia, Southeast Asia, Africa, Latin America and the Caribbean. Etymology and other names Scientific epithet The scientific name for the genus Cajanus and the species cajan derive from the Malay word katjang (modern spelling: kacang) meaning legume in reference to the bean of the plant. Common English names In English they are commonly referred to as pigeon pea which originates from the historical utilization of the pulse as pigeon fodder in Barbados. The term Congo pea and Angola pea developed due to the presence of its cultivation in Africa and the association of its utilization with those of African descent. The names no-eye pea and red gram both refer to the characteristics of the seed, with no-eye pea in reference to the lack of a hilum blotch on most varieties, unlike the black-eyed pea, and red gram in reference to the red color of most Indian varieties and gram simply referring to the plant being a legume. Internationally Africa In Benin the pigeon pea is locally known as klouékoun in Fon, otinin in Ede and eklui in Adja. In Cape Verde they are called Fixon Kongu in Cape Verdean creole. In Comoros and Mauritius they are known as embrevade or ambrebdade in Comorian and Morisyen, respectively, in return originating from the Malagasy term for the plant amberivatry. In Ghana they are known as aduwa or adowa in Dagbani. In Kenya and Tanzania they are known as mbaazi in Swahili. In Malawi they are called nandolo in Chichewa. In Nigeria pigeon peas are called fiofio or mgbụmgbụ in Igbo, waken-masar "Egyptian bean" or waken-turawa "foreigner bean" in Hausa, and òtílí in Yoruba. In Sudan they are known as adaseya, adasy and adasia. Asia In India the plant is known by various different names such as Assamese: ৰহৰ মাহ (rohor mah), মিৰি মাহ (miri-mah) Bengali: অড়হর (arahar) Gujarati: તુવેર (tuver) Hindi: अरहर (arhar), तुवर (tuvar) Kannada: ತೊಗರಿ ಬೆಳೆ (togari bele), ತೊಗರಿ ಕಾಳು (togari kalu) Konkani: तोरी (tori) Malayalam: ആഢകി (adhaki), തുവര (tuvara) Manipuri: মাইৰোংবী (mairongbi) Marathi: तूर (tur) Nepali: रहर (rahar) Oriya: ହରଡ଼ (harada), କାକ୍ଷୀ (kakhyi), ତୁବର (tubara) Punjabi: ਦਿੰਗੇਰ (dinger) Tamil: ஆடகி (adhaki), இருப்புலி (iruppuli), காய்ச்சி (kaycci) and துவரை (tuvarai) Telugu: కంది (kandi), Tibetan: tu ba ri Urdu: ارهر (arhar), توأر (tuar). In Persian,it is known as شاخول (shakhul) and is popular in dishes. In the Philippines they are known as Kadios in Filipino and Kadyos in Tagalog. The Americas In Latin America, they are known as guandul or gandul in Spanish, and feijão andu or gandu in Portuguese all of which derive from Kikongo wandu or from Kimbundu oanda; both names referring to the same plant. In the Anglophone regions of the Caribbean, like Jamaica, they are known as Gungo peas, coming from the more archaic English name for the plant congo pea, given to the plant because of its popularity and relation to Sub-Saharan Africa. In Francophone regions of the Caribbean they are known as pois d' angole, pwa di bwa in Antillean creole and pwa kongo in Haitian creole. In Suriname they are known as wandoe or gele pesi, the former of which is derived from the same source as its Spanish and Portuguese counterparts, the latter of which literally translates to 'yellow pea' from Dutch and Sranan Tongo. 
Oceania In Hawaii they are known as pi pokoliko 'Puerto Rican pea' or pi nunu 'pigeon pea' in the Hawaiian language. History and origin Origin The closest relatives to the cultivated pigeon pea are Cajanus cajanifolia, Cajanus scarabaeoides and Cajanus kerstingii, the first two native to India and the latter to West Africa. Much debate exists over the geographical origin of the species, with some groups claiming an origin around the Nile river and Western Africa, and others an Indian origin. Two epicenters of genetic diversity exist, in Africa and in India, but India is considered to be its primary center of origin, with West Africa considered a second major center of origin. History Its cultivation has been documented in peninsular India, where its presumptive closest wild relative Cajanus cajanifolia occurs in tropical deciduous woodlands, by at least 2,800 BCE. Archaeological finds of pigeon pea cultivation dating to about the 14th century BC have also been found at the Neolithic site of Sanganakallu in Bellary and its border area Tuljapur (where the cultivation of African domesticated plants like pearl millet, finger millet, and Lablab has also been uncovered), as well as in Gopalpur and other South Indian states. From India it may have made its way to North-East Africa via Trans-Oceanic Bronze Age trade that allowed cross-cultural exchange of resources and agricultural products. The earliest evidence of pigeon peas in Africa was found in Ancient Egypt, with seeds present in Egyptian tombs dating back to around 2,200 BCE. From eastern Africa, cultivation spread further west and south through the continent, and by means of the Trans-Atlantic slave trade it reached the Americas around the 17th century. Pigeon peas were introduced to Hawaii in 1824 by James Macrae, with a few specimens becoming naturalized on the islands, but they would not gain much popularity until later. In the early 20th century, Filipinos and Puerto Ricans began to emigrate from the American-administered Philippines and Puerto Rico to Hawaii to work on sugarcane plantations, beginning in 1906 and 1901, respectively. Pigeon peas are said to have been popularized there by the Puerto Rican community, and by the First World War their cultivation began to expand on the islands, where they are still cultivated and consumed by locals. Nutrition Pigeon peas contain high levels of protein and the important amino acids methionine, lysine, and tryptophan. The methionine + cystine combination is the only limiting amino acid combination in mature pigeon pea seeds. In contrast to the mature seeds, the immature seeds are generally lower in all nutritional values; however, they contain a significant amount of vitamin C (39 mg per 100 g serving) and have a slightly higher fat content. Research has shown that the protein content of the immature seeds is of a higher quality. Cultivation Pigeon peas can be of a perennial variety, in which the crop can last three to five years (although the seed yield drops considerably after the first two years), or an annual variety more suitable for seed production. Global production World production of pigeon peas is estimated at 4.49 million tons. About 63% of this production comes from India. The total number of hectares grown to pigeon pea is estimated at 5.4 million. India accounts for 72% of the area grown to pigeon pea, or 3.9 million hectares.
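Taken together, the production and area figures just quoted imply fairly modest average yields, which can be checked with a quick back-of-the-envelope division. The short Python sketch below simply reuses the tonnages and areas given above; the function name and rounding are illustrative only.

# Back-of-the-envelope average yields implied by the figures quoted above.
def implied_yield_t_per_ha(tonnes: float, hectares: float) -> float:
    """Average yield in tonnes per hectare."""
    return tonnes / hectares

world_yield = implied_yield_t_per_ha(4_490_000, 5_400_000)         # ~0.83 t/ha worldwide
india_yield = implied_yield_t_per_ha(0.63 * 4_490_000, 3_900_000)  # ~0.73 t/ha for India
print(f"World: {world_yield:.2f} t/ha; India: {india_yield:.2f} t/ha")

Both numbers come out below one tonne per hectare, consistent with the remark later in this article that present yield levels are low.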
Africa is the secondary centre of diversity and at present it contributes about 21% of global production, with 1.05 million tons. Malawi, Tanzania, Kenya, Mozambique and Uganda are the major producers in Africa. The pigeon pea is an important legume crop of rainfed agriculture in the semiarid tropics. The Indian subcontinent, Africa and Central America, in that order, are the world's three main pigeon pea-producing regions. Pigeon peas are cultivated in more than 25 tropical and subtropical countries, either as a sole crop or intermixed with cereals, such as sorghum (Sorghum bicolor), pearl millet (Pennisetum glaucum), or maize (Zea mays), or with other legumes, such as peanuts (Arachis hypogaea). Being a legume capable of symbiosis with rhizobia, the pigeon pea and its associated bacteria enrich soils through symbiotic nitrogen fixation. The crop is cultivated on marginal land by resource-poor farmers, who commonly grow traditional medium- and long-duration (5–11 months) landraces. Short-duration pigeon peas (3–4 months) suitable for multiple cropping have recently been developed. Traditionally, the use of such inputs as fertilizers, weeding, irrigation, and pesticides is minimal, so present yield levels are low. Greater attention is now being given to managing the crop because it is in high demand at remunerative prices. Pigeon peas are very drought-resistant and can be grown in areas with less than 650 mm annual rainfall. With the maize crop failing three out of five years in drought-prone areas of Kenya, a consortium led by the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) aimed to promote the pigeon pea as a drought-resistant, nutritious alternative crop. Nitrogen Fixation Legumes, which provide highly nutritious products and contribute to soil fertility through biological nitrogen fixation, are one of the most important crops in mixed crop-livestock systems. Cajanus cajan is an important legume crop with a high N-fixation ability (79% of its N derived from the atmosphere). Plant-growth promoting rhizobacteria (PGPR), together with strains of Rhizobium, can enhance growth and nitrogen fixation in pigeon pea by colonizing the plant's root nodules. These bioinoculants can be added as a single species but also as combined communities. Using a single bioinoculant shows benefits, but mixed communities of different bioinoculants have a greater positive impact on nodulation, plant dry mass, and shoot and root length, with the different species performing different functions for the pigeon pea. Pests and diseases Pigeon pea is affected by a variety of insect pests that can significantly impact crop yield and quality. They can infest the plant from the seedling stage until harvest; pests and diseases are therefore a primary cause of low yields. The major moth pests include the gram pod borer (Helicoverpa armigera), which causes defoliation and pod damage; the blue butterfly (Lampides boeticus), which infests buds, flowers, and young pods; and the spotted pod borer (Maruca vitrata), known for webbing together infested pods and flowers. The tur pod bug (Clavigralla gibbosa) is another significant pest of pigeon pea, causing substantial damage to pods and seeds. Current resistance efforts focus on breeding pigeon pea varieties with enhanced resistance to these pests. However, the presence of multiple pest species and the variability in pest pressure across regions pose challenges to achieving consistent resistance.
Effective management techniques include integrated pest management (IPM) strategies such as crop rotation, intercropping with non-host plants, timely sowing, and the use of biological control agents like parasitoids and predators. Chemical control measures, including the application of insecticides like neem-based products and synthetic pyrethroids, are also employed when necessary. Common Diseases of Pigeon Pea: Fusarium Wilt (Fusarium udum) Dry Root Rot (Macrophomina phaseolina) Phytophthora blight (Phytophthora drechsleri) Alternaria Leaf Spot (Alternaria alternata) Powdery Mildew (Leveillula taurica) Sterility Mosaic Disease (Pigeon pea sterility mosaic virus) Yellow Mosaic Virus (Mungbean yellow mosaic virus) Breeding Pigeonpea is unique among legumes in that its flowers support both cross-pollination and self-pollination. The bright, nectar-rich flowers attract pollinating insects, allowing natural outcrossing, which averages about 20% but varies with location due to pollinator populations. This level of outcrossing can lead to genetic contamination of parental lines and complicate the selection of lines by reducing the homozygosity of progeny. To mitigate these effects, breeders use techniques such as enclosing flowers in muslin bags or nets to prevent insect pollination. However, natural outcrossing also results in genetically diverse landraces and requires two to three generations of selfing before parental lines can be used in hybridisation programmes. Over 50 years of pigeonpea breeding has resulted in genetic improvements, disease-resistant varieties, a reduction in crop maturity from 300 to less than 90 days, and the introduction of the first legume hybrid technology, which has increased yields by 30-50%. Despite these advances, yield per unit area has remained stable, with improved stability and diversification for farmers. John Spence, a botanist and politician from Trinidad and Tobago, developed several varieties of dwarf pigeon peas which can be harvested by machine, instead of by hand. Genome sequence The pigeon pea is the first seed legume plant to have its complete genome sequenced. The sequencing was first accomplished by a group of 31 Indian scientists from the Indian Council of Agricultural Research. It was then followed by a global research partnership, the International Initiative for Pigeon pea Genomics (IIPG), led by ICRISAT with partners such as BGI–Shenzhen (China), US research laboratories like University of Georgia, University of California-Davis, Cold Spring Harbor Laboratory, and National Centre for Genome Resources, European research institutes like the National University of Ireland Galway. It also received support from the CGIAR Generation Challenge Program, US National Science Foundation and in-kind contribution from the collaborating research institutes. It is the first time that a CGIAR-supported research center such as ICRISAT led the genome sequencing of a food crop. There was a controversy over this as CGIAR did not partner with a national team of scientists and broke away from the Indo American Knowledge Initiative to start their own sequencing in parallel. The 616 mature microRNAs and 3919 long non-codingRNAs sequences were identified in the genome of pigeon pea. Dehulling Various methodologies exist in order to remove the pulse from its shell. In earlier days hand pounding was common. Several traditional methods are used that can be broadly classified under two categories: the wet method and the dry method. 
The wet method involves water soaking, sun drying, and dehulling. The dry method involves oil/water application, drying in the sun, and dehulling. Depending on the magnitude of the operation, large-scale commercial dehulling of large quantities of pigeon pea into its deskinned, split version, known as toor dal in Hindi, is done in mechanically operated mills. Uses Culinary use Pigeon peas are both a food crop (dried peas, flour, or green vegetable peas) and a forage/cover crop. In combination with cereals, pigeon peas make a well-balanced meal and hence are favored by nutritionists as an essential ingredient for balanced diets. The dried peas may be sprouted briefly, then cooked, for a flavor different from the green or dried peas. Sprouting also enhances the digestibility of dried pigeon peas via the reduction of indigestible sugars that would otherwise remain in the cooked dried peas. Africa In Cape Verde, a soup called feijão Congo, named after the pea itself, is made with dried pigeon peas in a similar manner to Brazilian feijoada. In Kenya and throughout the Swahili-speaking region of East Africa, pigeon peas are utilized in dishes that are usually served for breakfast. In the Enugu state of Nigeria, an Igbo dish called Ẹchịcha or Achịcha is made with palm oil, cocoyam, and seasoning. It is also similar to other dishes from the state, such as ayarya ji and fio-fio. In Ethiopia, the pods, the young shoots, and the leaves are cooked and eaten. Asia In India, it is one of the most popular pulses, being an important source of protein in a mostly vegetarian diet. It is the primary accompaniment to rice or roti and has the status of staple food throughout the length and breadth of India. In regions where it grows, fresh young pods are eaten as a vegetable in dishes such as sambar. In the Western Visayas region of the Philippines, pigeon peas are the main ingredient of a very popular dish called "KBL", an acronym for "Kadyos" (pigeon pea), "Baboy" (pork), and "Langka" (jackfruit). It is a savory soup with rich flavors coming from the pigeon peas, smoked pork (preferably the legs or tail), and a souring agent called batuan. Raw jackfruit meat is chopped and boiled to a soft consistency, and serves as an extender. The violet color of the soup comes from the pigment of the variety commonly grown in the region. The Americas On the Caribbean coast of Colombia, such as in the Atlántico department, sopa de guandú con carne salada (or simply "gandules") is made with pigeon peas, yam, plantain, yuca, and spices. During the week of Semana Santa, a sweet called dulce de guandules is made from mashed and sweetened pigeon peas; it has its origins in the maroon community of San Basilio de Palenque. In the Dominican Republic, a dish made of rice and green pigeon peas called moro de guandules is a traditional holiday food. It is also consumed as guandules guisados, which is a savoury stew with coconut and squash served with white rice. A variety of sancocho is also made based on green pigeon peas that includes poultry, pork, beef, yams, yucca, squash, plantain and others. Dominicans have a high regard for this legume and it is consumed widely. In Panama, pigeon peas are used in a dish called Arroz con guandú y coco, or "rice with pigeon peas and coconut", traditionally prepared and consumed during the end-of-year holidays.
In Puerto Rico, arroz con gandules is made with rice and pigeon peas and sofrito which is a traditional dish, especially during Christmas season. Pigeon peas can also be made in to a stew called asopao de gandules, with plantain balls. Jamaica also uses pigeon peas instead of kidney beans in their rice and peas dish, especially during the Christmas season. Trinidad and Tobago and Grenada have their own variant, called pelau, which includes either beef or chicken, and occasionally pumpkin and pieces of cured pig tail. Unlike in some other parts of the Greater Caribbean, in The Bahamas pigeon peas are used in dried form, light brown in color to make the heartier, heavier, signature Bahamian staple dish "Peas 'n Rice." Oceania In Hawaii they are used to make a dish called gandule rice, also called godule rice, gundule rice, and ganduddy rice originates on the island from the Puerto Rican community with historic ties to the island and is prepared in a similar manner to that of traditional Puerto Rican arroz con gandules. Other uses Agricultural It is an important ingredient of animal feed used in West Africa, especially in Nigeria, where it is also grown. Leaves, pods, seeds and the residues of seed processing are used to feed all kinds of livestock. In the Congo pigeon peas are utilized as one of the main food forest and soil improvement crops after using a slash-and-burn fire technique called maala. Pigeon peas are in some areas an important crop for green manure, providing up to 90 kg nitrogen per hectare. The woody stems of pigeon peas can also be used as firewood, fencing, thatch and as a source for rope fiber. Medicinal Pigeon pea has been valued for its medicinal properties since prehistoric times in various regions, including Africa, Egypt and Asia. Today, different countries use different parts of the plant to treat a range of diseases as an alternative medicine. In the Republic of Congo the Kongo, Lari, and Dondo people use the sap of the leaves as an eyedrop for epilepsy. In Nigeria the leaves are used to treat malaria, while in India they are used to treat diabetes, stomach tumours and wounds. In Oman, pigeon pea is used to treat chronic diseases, and in traditional Chinese medicine it is used to relieve pain and control intestinal worms. In Africa, the seeds are used to treat hepatitis and measles. The widespread traditional medicinal use of the plant is attributed to its rich content of phenolic compounds, which have antiviral, anti-inflammatory, antioxidant, hypocholesterolemic and hypoglycaemic effects. The leaves also contain flavonoids, terpenoids, essential oils and coumarin, which further enhance its therapeutic potential in the fight against disease. There are different studies looking at how the medicinal compounds of pigeon pea could be used in future. One study, using rats, found that a pigeonpea beverage could be used as an anti-diabetic functional drink. This drink would help to reduce plasma glucose and total cholesterol levels and increase plasma antioxidant status. Therefore, it could be used in future as an alternative strategy to maintain plasma glucose and cholesterol at normal levels and help prevent diabetes complications. Furthermore, pigeon pea could be used as a fermented food as this would increase its antioxidant levels and therefore, have an antiatherosclerotic effect. This would help to improve systolic blood pressure as well as diastolic blood pressure. 
This benefits cardiovascular health and could be developed as a new dietary supplement or functional food that helps prevent hypertension. In Madagascar, the branches have been used as teeth-cleaning twigs.
Biology and health sciences
Pulses
Plants
47863730
https://en.wikipedia.org/wiki/Cumin
Cumin
Cumin (, ; ; Cuminum cyminum) is a flowering plant in the family Apiaceae, native to the Irano-Turanian Region. Its seeds – each one contained within a fruit, which is dried – are used in the cuisines of many cultures in both whole and ground form. Although cumin is used in traditional medicine, there is no high-quality evidence that it is safe or effective as a therapeutic agent. Etymology and pronunciation The term comes via Middle English comyn, from Old English cymen (which is cognate with Old High German kumin) and Old French cummin, both from the Latin term . This in turn comes from the Ancient Greek (), a Semitic borrowing related to Hebrew () and Arabic (). All of these ultimately derive from Akkadian (). The English word is traditionally pronounced (), like "coming" with an ⟨n⟩ instead of ⟨ng⟩ (/ŋ/). American lexicographer Grant Barrett notes that this pronunciation now is rarely used, replaced in the late 20th century by hyperforeignized () and (). Description Cumin is the dried seed of the herb Cuminum cyminum, a member of the parsley family. The cumin plant grows to tall and is harvested by hand. It is an annual herbaceous plant, with a slender, glabrous, branched stem that is tall and has a diameter of 3–5 cm (–2 in). Each branch has two to three sub-branches. All the branches attain the same height, so the plant has a uniform canopy. The stem is colored grey or dark green. The leaves are long, pinnate or bipinnate, with thread-like leaflets. The flowers are small, white or pink, and borne in umbels. Each umbel has five to seven umbellets. The fruit is a lateral fusiform or ovoid achene 4–5 mm (– in) long, containing two mericarps with a single seed. Cumin seeds have eight ridges with oil canals. They resemble caraway seeds, being oblong in shape, longitudinally ridged, and yellow-brown in color, like other members of the family Apiaceae (Umbelliferae) such as caraway, parsley, and dill. Confusion with other spices Cumin is sometimes confused with caraway (Carum carvi), another spice in the parsley family (Apiaceae). Many European and Asian languages do not distinguish clearly between the two; for example, in Indonesia both are called . Many Slavic and Uralic languages refer to cumin as "Roman caraway" or "spice caraway". The distantly related Bunium persicum and Bunium bulbocastanum and the unrelated Nigella sativa are both sometimes called black cumin (q.v.). History Likely originating in Central Asia, Southwestern Asia, or the Eastern Mediterranean, cumin has been in use as a spice for thousands of years. Seeds of wild cumin were excavated in the now-submerged settlement of Atlit-Yam, dated to the early 6th millennium BC. Seeds excavated in Syria were dated to the second millennium BC. They have also been reported from several New Kingdom levels of ancient Egyptian archaeological sites. In the ancient Egyptian civilization, cumin was used as a spice and as a preservative in mummification. Cumin was a significant spice for the Minoans in ancient Crete. Ideograms for cumin appear in Linear A archive tablets documenting Minoan palace stores during the Late Minoan period. The ancient Greeks kept cumin at the dining table in its own container (much as pepper is frequently kept today), and this practice continues in Morocco. Cumin was also used heavily in ancient Roman cuisine. In India, it has been used for millennia as a traditional ingredient in innumerable recipes, and forms the basis of many other spice blends. 
Cumin was introduced to the Americas by Spanish and Portuguese colonists. Black and green cumin are used in Persian cuisine. Today, the plant is mostly grown in the Indian subcontinent, Northern Africa, Mexico, Chile, and China. Since cumin is often used as part of bird food and exported to many countries, the plant can occur as an introduced species in many territories. Cultivation and production Cultivation areas India is the world's largest producer of cumin, accounting for about 70%. The other major cumin-producing countries are Syria (13%), Turkey (5%), UAE (3%), and Iran. India produced 856,000 tons of cumin seed in the 2020–2021 fiscal year. Climatic requirements Cumin is a drought-tolerant tropical or subtropical crop. It is vulnerable to frost and has a growth season of 120 frost-free days. The optimum growth temperature ranges are between . The Mediterranean climate is most suitable for its growth. Cultivation of cumin requires a long, hot summer of three to four months. At low temperatures, the leaf color changes from green to purple. High temperatures might reduce growth period and induce early ripening. In India, cumin is sown from October until the beginning of December, and harvesting starts in February. In Syria and Iran, cumin is sown from mid-November until mid-December (extensions up to mid-January are possible) and harvested in June/July. Grading The three noteworthy sorts of cumin seeds in the market vary in seed shading, amount of oil, and flavor. Iranian Indian, South Asian Middle Eastern Cultivation parameters Cumin is grown from seeds. The seeds need for emergence, an optimum of is suggested. Cumin is vulnerable to frost damage, especially at flowering and early seed formation stages. Methods to reduce frost damage are spraying with sulfuric acid (0.1%), irrigating the crop prior to frost incidence, setting up windbreaks, or creating an early-morning smoke cover. The seedlings of cumin are rather small and their vigor is low. Soaking the seeds for 8 hours before sowing enhances germination. For an optimal plant population, a sowing density of is recommended. Fertile, sandy, loamy soils with good aeration, proper drainage, and high oxygen availability are preferred. The pH optimum of the soil ranges from 6.8 to 8.3. Cumin seedlings are sensitive to salinity and emergence from heavy soils is rather difficult. Therefore, a proper seedbed preparation (smooth bed) is crucial for the optimal establishment of cumin. Two sowing methods are used for cumin, broadcasting and line sowing. For broadcast sowing, the field is divided into beds and the seeds are uniformly broadcast in this bed. Afterwards, they are covered with soil using a rake. For line sowing, shallow furrows are prepared with hooks at a distance of . The seeds are then placed in these furrows and covered with soil. Line sowing offers advantages for intercultural operations such as weeding, hoeing, or spraying. The recommended sowing depth is 1–2 cm and the recommended sowing density is around 120 plants per m2. The water requirements of cumin are lower than those of many other species. Despite this, cumin is often irrigated after sowing to be sure that enough moisture is available for seedling development. The amount and frequency of irrigation depends on the climate conditions. Cultivation management The relative humidity in the center of origin of cumin is rather low. High relative humidity (i.e. wet years) favors fungal diseases. Cumin is especially sensitive to Alternaria blight and Fusarium wilt. 
Early-sown crops exhibit stronger disease effects than late-sown crops. The most important disease is Fusarium wilt, resulting in yield losses up to 80%. Fusarium is seed- or soil-borne and it requires distinct soil temperatures for the development of epidemics. Inadequate fertilization might favor Fusarium epidemics. Cumin blight (Alternaria) appears in the form of dark brown spots on leaves and stems. When the weather is cloudy after flowering, the incidence of the disease is increased. Another, but less important, disease is powdery mildew. Incidence of powdery mildew in early development can cause drastic yield losses because no seeds are formed. Later in development, powdery mildew causes discolored, small seeds. Pathogens can lead to high reductions in crop yield. Cumin can be attacked by aphids (Myzus persicae) at the flowering stage. They suck the sap of the plant from tender parts and flowers. The plant becomes yellow, the seed formation is reduced (yield reduction), and the quality of the harvested product decreases. Heavily infested plant parts should be removed. Other important pests are the mites (Petrobia latens) which frequently attack the crop. Since the mites mostly feed on young leaves, the infestation is more severe on young inflorescences. The open canopy of cumin is another problem. Only a low proportion of the incoming light is absorbed. The leaf area index of cumin is low (about 1.5). This might be a problem because weeds can compete with cumin for essential resources such as water and light and thereby lower yield. The slow growth and the short stature of cumin favors weed competition additionally. Two hoeing and weeding sessions (30 and 60 days after sowing) are needed for the control of weeds. During the first weeding session (30 days after sowing), thinning should be done, as well, to remove excess plants. The use of preplant or pre-emergence herbicides is very effective in India, but this kind of herbicide application requires soil moisture for a successful weed control. Breeding Cumin is a diploid species with 14 chromosomes (i.e. 2n = 14). The chromosomes of the different varieties have morphological similarities with no distinct variation in length and volume. Most of the varieties available today are selections. The variabilities of yield and yield components are high. Varieties are developed by sib mating in enclosed chambers or by biotechnology. Cumin is a cross-pollinator, i.e. the breeds are already hybrids. Therefore, methods used for breeding are in vitro regenerations, DNA technologies, and gene transfers. The in vitro cultivation of cumin allows the production of genetically identical plants. The main sources for the explants used in vitro regenerations are embryos, hypocotyl, shoot internodes, leaves, and cotyledons. One goal of cumin breeding is to improve its resistance to biotic (fungal diseases) and abiotic (cold, drought, salinity) stresses. The potential genetic variability for conventional breeding of cumin is limited and research about cumin genetics is scarce. Uses Cumin seed is used as a spice for its distinctive flavor and aroma. Cumin can be found in some cheeses, such as Leyden cheese, and in some traditional breads from France. Cumin can be an ingredient in chili powder (often Tex-Mex or Mexican-style) and is found in achiote blends, adobos, sofrito, garam masala, curry powder, and bahaarat, and is used to flavor numerous commercial food products. 
In Indian and other South Asian cuisine, it is often combined with coriander seeds in a powdered mixture called dhana jeera. Cumin can be used ground or as whole seeds. It imparts an earthy, warming and aromatic character to food, making it a staple in certain stews and soups, as well as spiced gravies such as curry and chili. It is also used as an ingredient in some pickles and pastries. Traditional In India, the seeds are powdered and used in different forms such as kashaya (decoction), arishta (fermented decoction), and vati (tablet/pills), and processed with ghee (a semifluid clarified butter). In traditional medicine practices of several countries, dried cumin seeds are believed to have medicinal purposes, although there is no scientific evidence for any use as a drug or medicine. Volatiles and essential oil Cuminaldehyde, cymene, and terpenoids are the major volatile components of cumin oil, which is used for a variety of flavors, perfumes, and essential oil. Cumin oil may be used as an ingredient in some cosmetics. Aroma Cumin's flavor and warm aroma are due to its essential oil content, primarily the aroma compound cuminaldehyde. Other aroma compounds of toasted cumin are the substituted pyrazines, 2-ethoxy-3-isopropylpyrazine, , and 2-methoxy-3-methylpyrazine. Other components include γ-terpinene, safranal, p-cymene, and β-pinene. Nutritional value In a reference amount of , cumin seeds provide high amounts of the Daily Value for fat (especially monounsaturated fat), protein, and dietary fiber (table). B vitamins, vitamin E, and several dietary minerals, especially iron, magnesium, and manganese, are present in substantial Daily Value amounts.
Biology and health sciences
Apiales
null
44970293
https://en.wikipedia.org/wiki/Kepler-442b
Kepler-442b
Kepler-442b (also known by its Kepler object of interest designation KOI-4742.01) is a confirmed near-Earth-sized exoplanet, likely rocky, orbiting within the habitable zone of the K-type main-sequence star Kepler-442, about from Earth in the constellation of Lyra. The planet orbits its host star at a distance of about with an orbital period of roughly 112.3 days. It has a mass of around 2.3 and a radius of about 1.34 times that of Earth. It is one of the more promising candidates for potential habitability, as its parent star is at least 40% less massive than the Sun – thus, it can have a lifespan of about 30 billion years. The planet was discovered by NASA's Kepler spacecraft using the transit method, in which it measures the dimming effect that a planet causes as it crosses in front of its star. NASA announced the confirmation of the exoplanet on 6 January 2015. Physical characteristics Mass, radius, and temperature Kepler-442b is a super-Earth, an exoplanet with a mass and radius bigger than Earth's but smaller than the ice giants Uranus and Neptune. It has an equilibrium temperature of . It has a radius of 1.34 and a mass estimated at 2.36 . According to Ethan Siegel, this puts the planet "right on the border" between likely being a rocky planet and a Mini-Neptune gas planet. The surface gravity on Kepler-442b would be about 30% stronger than on Earth, assuming a rocky composition similar to that of Earth. Host star The planet orbits a K-type star named Kepler-442. The star has a mass of 0.61 and a radius of 0.60 . It has a temperature of and is around 2.9 billion years old, with some uncertainty. In comparison, our Sun is 4.6 billion years old and has a temperature of . The star is somewhat metal-poor, with a metallicity (Fe/H) of −0.37, or 43% of the solar amount. Its luminosity () is 12% that of the Sun. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 14.76. Therefore, it is too dim to be seen with the naked eye. Orbit Kepler-442b orbits its host star with an orbital period of 112 days. It has an orbital radius of about (slightly larger than the distance of Mercury from the Sun, which is approximately ). It receives about 70% as much light from its star as Earth receives from the Sun. Habitability The planet is in the habitable zone of its star, a region where liquid water could exist on the planet's surface. It is one of the most Earth-like planets yet found in size and temperature. It is just outside the zone (around ) in which tidal forces from its host star would be enough to fully tidally lock it. As of July 2018, Kepler-442b was considered the most habitable non-tidally-locked exoplanet discovered. Stellar factors K-type main-sequence stars are smaller than the Sun and live longer, remaining on the main sequence for 18 to 34 billion years compared to the Sun's estimated lifespan of 10 billion years. Despite these favorable properties, small M-type and K-type stars can threaten life. Because of their high stellar activity at the beginning of their lives, they emit strong stellar winds. The duration of this period is inversely linked to the size of the star. Although the age of Kepler-442 is uncertain, the star has likely passed this stage, making Kepler-442b potentially more suitable for habitability. 
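The quoted mass, radius, and stellar-flux figures are enough to reproduce two of the rough numbers above. The short Python sketch below is illustrative only; the Earth equilibrium-temperature reference of about 255 K and the assumption of an Earth-like albedo are inputs of the example, not values from this article.

```python
# Back-of-the-envelope checks using the figures quoted above (illustrative only).
mass_ratio = 2.36     # planet mass in Earth masses (quoted estimate)
radius_ratio = 1.34   # planet radius in Earth radii (quoted estimate)
flux_ratio = 0.70     # stellar flux relative to what Earth receives (quoted)

# Surface gravity scales as mass / radius^2.
gravity_ratio = mass_ratio / radius_ratio ** 2
print(f"Surface gravity: {gravity_ratio:.2f} x Earth")  # about 1.3, i.e. roughly 30% stronger

# Equilibrium temperature scales as (incident flux)^(1/4) at fixed albedo.
T_EQ_EARTH = 255.0  # kelvin; Earth's approximate equilibrium temperature (assumed)
t_eq = T_EQ_EARTH * flux_ratio ** 0.25
print(f"Equilibrium temperature: about {t_eq:.0f} K, assuming an Earth-like albedo")
```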
Tidal effects and further reviews Because Kepler-442b is closer to its star than Earth is to the Sun, the planet will probably rotate much more slowly than Earth; its day could be weeks or months long (see Tidal effects on rotation rate, axial tilt, and orbit). This is reflected in its orbital distance, just outside of the point where the tidal interactions from its star would be strong enough to tidally lock it. Kepler-442b's axial tilt (obliquity) is likely tiny, in which case it would not have tilt-induced seasons as Earth and Mars do. Its orbit is probably close to circular (eccentricity 0.04), so it will also lack the eccentricity-induced seasonal changes seen on Mars. One review essay in 2015 concluded that Kepler-442b, Kepler-186f, and Kepler-62f were likely the best candidates for being potentially habitable planets. Also, according to an index developed in 2015, Kepler-442b is even more likely to be habitable than a hypothetical "Earth twin" with physical and orbital parameters matching those of Earth. Going by this index, Earth has a rating of 0.829, but Kepler-442b has a rating of 0.836. The actual habitability is uncertain because Kepler-442b's atmosphere and surface are unknown. The paper introducing the habitability index clarifies that a higher-than-Earth value "does not mean these planets are 'more habitable' than Earth". Discovery and follow-up studies In 2009, NASA's Kepler spacecraft was observing stars with its photometer, the instrument it uses to detect transit events, in which a planet crosses in front of and dims its host star for a brief and roughly regular period. In this last test, Kepler observed 50,000 stars in the Kepler Input Catalog, including Kepler-442; the telescope sent the preliminary light curves to the Kepler science team for analysis, which selected likely planetary candidates from the set for follow-up observations at observatories. Observations for the potential exoplanet candidates took place between 13 May 2009 and 17 March 2012. After observing the respective transits, which for Kepler-442b occurred roughly every 113 days (its orbital period), the scientists eventually concluded that a planetary body was responsible for the periodic 113-day transits. The discovery, along with the unique planetary systems of the stars Kepler-438 and Kepler-440, was announced on 6 January 2015. Kepler-442b, located approximately 370 parsecs (1,200 light-years) away, presents a challenge for current telescopes and even the upcoming generation of planned ones to ascertain its mass or the presence of an atmosphere, owing to its considerable distance from Earth. The Kepler spacecraft concentrated on a limited portion of the sky, limiting its ability to gather comprehensive data. However, upcoming planet-hunting space telescopes like TESS and CHEOPS are poised to survey nearby stars across the entire celestial sphere, potentially shedding light on the properties of distant exoplanets like Kepler-442b. The James Webb Space Telescope and future large ground-based telescopes can then study nearby stars with planets to analyze atmospheres, determine masses, and infer compositions. Additionally, the Square Kilometer Array would significantly improve radio observations over the Arecibo Observatory and Green Bank Telescope.
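As a rough illustration of why the transit method could detect this planet, the sketch below estimates the fractional dimming of Kepler-442 during a transit from the radii quoted above; the Earth and Sun radii used for unit conversion are standard values, not figures from this article.

```python
# Rough transit-depth estimate for Kepler-442b (illustrative only).
R_EARTH_KM = 6371.0      # mean Earth radius (standard value, assumed)
R_SUN_KM = 695_700.0     # nominal solar radius (standard value, assumed)

planet_radius_km = 1.34 * R_EARTH_KM   # quoted: 1.34 Earth radii
star_radius_km = 0.60 * R_SUN_KM       # quoted: 0.60 solar radii

# During a transit the star dims by roughly (R_planet / R_star)^2.
depth = (planet_radius_km / star_radius_km) ** 2
print(f"Transit depth: {depth:.1e} (about {depth * 1e6:.0f} parts per million)")
```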
Physical sciences
Notable exoplanets
Astronomy
31926330
https://en.wikipedia.org/wiki/Clinical%20neuroscience
Clinical neuroscience
Clinical neuroscience is a branch of neuroscience that focuses on the scientific study of fundamental mechanisms that underlie diseases and disorders of the brain and central nervous system. It seeks to develop new ways of conceptualizing and diagnosing such disorders and ultimately of developing novel treatments. A clinical neuroscientist is a scientist who has specialized knowledge in the field. Not all clinicians are clinical neuroscientists. Clinicians and scientists -including psychiatrists, neurologists, clinical psychologists, neuroscientists, and other specialists—use basic research findings from neuroscience in general and clinical neuroscience in particular to develop diagnostic methods and ways to prevent and treat neurobiological disorders. Such disorders include addiction, Alzheimer's disease, amyotrophic lateral sclerosis, anxiety disorders, attention deficit hyperactivity disorder, autism, bipolar disorder, brain tumors, depression, Down syndrome, dyslexia, epilepsy, Huntington's disease, multiple sclerosis, neurological AIDS, neurological trauma, pain, obsessive-compulsive disorder, Parkinson's disease, schizophrenia, sleep disorders, stroke and Tourette syndrome. While neurology, neurosurgery and psychiatry are the main medical specialties that use neuroscientific information, other specialties such as cognitive neuroscience, neuroradiology, neuropathology, ophthalmology, otorhinolaryngology, anesthesiology and rehabilitation medicine can contribute to the discipline. Integration of the neuroscience perspective alongside other traditions like psychotherapy, social psychiatry or social psychology will become increasingly important. One Mind for Research The "One Mind for Research" forum was a convention held in Boston, Massachusetts on May 23–25, 2011 that produced the blueprint document A Ten-Year Plan for Neuroscience: From Molecules to Brain Health. Leading neuroscience researchers and practitioners in the United States contributed to the creation of this document, in which 17 key areas of opportunities are listed under the Clinical Neuroscience section. These include the following: Rethinking curricula to break down intellectual silos Training translational neuroscientists and clinical investigators Investigating biomarkers Improving psychiatric diagnosis Developing a “Framingham Study of Brain Disorders” (i.e. longitudinal cohort for central nervous system disease) Identifying developmental risk factors and producing effective interventions Discovering new treatments for pain, including neuropathic pain Treating disorders of neural signaling and pathological synchrony Treating disorders of immunity or inflammation Treating metabolic and mitochondrial disorders Developing new treatments for depression Treating addictive disorders Improving treatment of schizophrenia Preventing and treating cerebrovascular disease Achieving personalized medicine Understanding shared mechanisms of neurodegeneration Advancing anesthesia In particular, it advocates for better integrated and scientifically driven curricula for practitioners, and it recommends that such curricula be shared among neurologists, psychiatrists, psychologists, neurosurgeons and neuroradiologists. Given the various ethical, legal and societal implications for healthcare practitioners arising from advances in neuroscience, the University of Pennsylvania inaugurated the Penn Conference on Clinical Neuroscience and Society in July 2011. 
Similarities with other fields of neuroscience Neuropsychology As subfields of neuroscience, clinical neuroscience and clinical neuropsychology do not share exactly the same objective as their parent discipline; each concentrates on a narrower set of questions. Clinical neuroscience focuses on the anatomy of the brain, how it is affected by specific types of disorders, and how those disorders can be prevented, whereas clinical neuropsychology focuses on how the brain functions and how it produces and interprets behavior. Both fields can be applied to treating and preventing mental disorders, as well as to diagnosing brain disorders and assessing cognitive and behavioral function. Researchers in neuropsychology often work with people in a clinical rather than purely biological manner, treating them as patients rather than as research subjects. Neuropsychology is research-intensive and requires extensive knowledge of psychology; most neuropsychologists hold doctoral degrees because of how research-intensive the field is, which makes it highly competitive in the job market. Training in neuropsychology is comparable to training as a therapist in the time it requires. Neuropsychiatry Neuropsychiatry is a field that connects the mind and the brain, looking at how each affects the other. It combines ideas from neurology (the study of the brain and nervous system) and psychiatry (the study of mental health), and focuses on treating problems of thinking, emotion, and behavior that arise from brain disorders. Rather than focusing on just one part of a problem (such as a specific brain issue or a mental health symptom), neuropsychiatry takes a broader approach. It recognizes that many brain disorders, such as Parkinson's disease or Alzheimer's disease, can affect mood or thinking, and that many mental health conditions, such as depression or schizophrenia, have a neurological aspect as well. Neuropsychology serves people across the entire lifespan and around the world, helping to identify developmental concerns in infancy and throughout childhood.
Biology and health sciences
Biology basics
Biology
36122619
https://en.wikipedia.org/wiki/Artificial%20life
Artificial life
Artificial life (ALife or A-Life) is a field of study wherein researchers examine systems related to natural life, its processes, and its evolution, through the use of simulations with computer models, robotics, and biochemistry. The discipline was named by Christopher Langton, an American computer scientist, in 1986. In 1987, Langton organized the first conference on the field, in Los Alamos, New Mexico. There are three main kinds of alife, named for their approaches: soft, from software; hard, from hardware; and wet, from biochemistry. Artificial life researchers study traditional biology by trying to recreate aspects of biological phenomena. Overview Artificial life studies the fundamental processes of living systems in artificial environments in order to gain a deeper understanding of the complex information processing that define such systems. These topics are broad, but often include evolutionary dynamics, emergent properties of collective systems, biomimicry, as well as related issues about the philosophy of the nature of life and the use of lifelike properties in artistic works. Philosophy The modeling philosophy of artificial life strongly differs from traditional modeling by studying not only "life as we know it" but also "life as it could be". A traditional model of a biological system will focus on capturing its most important parameters. In contrast, an alife modeling approach will generally seek to decipher the most simple and general principles underlying life and implement them in a simulation. The simulation then offers the possibility to analyse new and different lifelike systems. Vladimir Georgievich Red'ko proposed to generalize this distinction to the modeling of any process, leading to the more general distinction of "processes as we know them" and "processes as they could be". At present, the commonly accepted definition of life does not consider any current alife simulations or software to be alive, and they do not constitute part of the evolutionary process of any ecosystem. However, different opinions about artificial life's potential have arisen: The strong alife (cf. Strong AI) position states that "life is a process which can be abstracted away from any particular medium" (John von Neumann) . Notably, Tom Ray declared that his program Tierra is not simulating life in a computer but synthesizing it. The weak alife position denies the possibility of generating a "living process" outside of a chemical solution. Its researchers try instead to simulate life processes to understand the underlying mechanics of biological phenomena. Software-based ("soft") Techniques Cellular automata were used in the early days of artificial life, and are still often used for ease of scalability and parallelization. Alife and cellular automata share a closely tied history. Artificial neural networks are sometimes used to model the brain of an agent. Although traditionally more of an artificial intelligence technique, neural nets can be important for simulating population dynamics of organisms that can learn. The symbiosis between learning and evolution is central to theories about the development of instincts in organisms with higher neurological complexity, as in, for instance, the Baldwin effect. Neuroevolution Program-based Program-based simulations contain organisms with a "genome" language. This language is more often in the form of a Turing complete computer program than actual biological DNA. Assembly derivatives are the most common languages used. 
An organism "lives" when its code is executed, and there are usually various methods allowing self-replication. Mutations are generally implemented as random changes to the code. Use of cellular automata is common but not required. Another example could be an artificial intelligence and multi-agent system/program. Module-based Individual modules are added to a creature. These modules modify the creature's behaviors and characteristics either directly, by hard coding into the simulation (leg type A increases speed and metabolism), or indirectly, through the emergent interactions between a creature's modules (leg type A moves up and down with a frequency of X, which interacts with other legs to create motion). Generally, these are simulators that emphasize user creation and accessibility over mutation and evolution. Parameter-based Organisms are generally constructed with pre-defined and fixed behaviors that are controlled by various parameters that mutate. That is, each organism contains a collection of numbers or other finite parameters. Each parameter controls one or several aspects of an organism in a well-defined way. Neural net–based These simulations have creatures that learn and grow using neural nets or a close derivative. Emphasis is often, although not always, on learning rather than on natural selection. Complex systems modeling Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on the first principles) and grey-box (mixtures of phenomenological and mechanistic models). In black-box models, the individual-based (mechanistic) mechanisms of a complex dynamic system remain hidden. Black-box models are completely nonmechanistic. They are phenomenological and ignore a composition and internal structure of a complex system. Due to the non-transparent nature of the model, interactions of subsystems cannot be investigated. In contrast, a white-box model of a complex dynamic system has ‘transparent walls’ and directly shows underlying mechanisms. All events at the micro-, meso- and macro-levels of a dynamic system are directly visible at all stages of a white-box model's evolution. In most cases, mathematical modelers use the heavy black-box mathematical methods, which cannot produce mechanistic models of complex dynamic systems. Grey-box models are intermediate and combine black-box and white-box approaches. Creation of a white-box model of complex system is associated with the problem of the necessity of an a priori basic knowledge of the modeling subject. The deterministic logical cellular automata are necessary but not sufficient condition of a white-box model. The second necessary prerequisite of a white-box model is the presence of the physical ontology of the object under study. The white-box modeling represents an automatic hyper-logical inference from the first principles because it is completely based on the deterministic logic and axiomatic theory of the subject. The purpose of the white-box modeling is to derive from the basic axioms a more detailed, more concrete mechanistic knowledge about the dynamics of the object under study. The necessity to formulate an intrinsic axiomatic system of the subject before creating its white-box model distinguishes the cellular automata models of white-box type from cellular automata models based on arbitrary logical rules. 
If cellular automata rules have not been formulated from the first principles of the subject, then such a model may have a weak relevance to the real problem. Notable simulators This is a list of artificial life and digital organism simulators: Hardware-based ("hard") Hardware-based artificial life mainly consist of robots, that is, automatically guided machines able to do tasks on their own. Biochemical-based ("wet") Biochemical-based life is studied in the field of synthetic biology. It involves research such as the creation of synthetic DNA. The term "wet" is an extension of the term "wetware". Efforts toward "wet" artificial life focus on engineering live minimal cells from living bacteria Mycoplasma laboratorium and in building non-living biochemical cell-like systems from scratch. In May 2019, researchers reported a new milestone in the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacteria Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons instead, in order to encode 20 amino acids. Open problems How does life arise from the nonliving? Generate a molecular proto-organism in vitro. Achieve the transition to life in an artificial chemistry in silico. Determine whether fundamentally novel living organizations can exist. Simulate a unicellular organism over its entire life cycle. Explain how rules and symbols are generated from physical dynamics in living systems. What are the potentials and limits of living systems? Determine what is inevitable in the open-ended evolution of life. Determine minimal conditions for evolutionary transitions from specific to generic response systems. Create a formal framework for synthesizing dynamical hierarchies at all scales. Determine the predictability of evolutionary consequences of manipulating organisms and ecosystems. Develop a theory of information processing, information flow, and information generation for evolving systems. How is life related to mind, machines, and culture? Demonstrate the emergence of intelligence and mind in an artificial living system. Evaluate the influence of machines on the next major evolutionary transition of life. Provide a quantitative model of the interplay between cultural and biological evolution. Establish ethical principles for artificial life. Related subjects Agent-based modeling is used in artificial life and other fields to explore emergence in systems. Artificial intelligence has traditionally used a top down approach, while alife generally works from the bottom up. Artificial chemistry started as a method within the alife community to abstract the processes of chemical reactions. Evolutionary algorithms are a practical application of the weak alife principle applied to optimization problems. Many optimization algorithms have been crafted which borrow from or closely mirror alife techniques. The primary difference lies in explicitly defining the fitness of an agent by its ability to solve a problem, instead of its ability to find food, reproduce, or avoid death. The following is a list of evolutionary algorithms closely related to and used in alife: Ant colony optimization Bacterial colony optimization Genetic algorithm Genetic programming Swarm intelligence Multi-agent system – A multi-agent system is a computerized system composed of multiple interacting intelligent agents within an environment. Evolutionary art uses techniques and methods from artificial life to create new forms of art. 
Evolutionary music uses similar techniques, but applied to music instead of visual art. Abiogenesis and the origin of life sometimes employ alife methodologies as well. Quantum artificial life applies quantum algorithms to artificial life systems. History Criticism Artificial life has had a controversial history. John Maynard Smith criticized certain artificial life work in 1994 as "fact-free science".
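To make the parameter-based simulations and the closely related evolutionary algorithms described above concrete, the following toy sketch evolves "organisms" that are nothing more than small vectors of mutable parameters selected against an arbitrary target environment. Every name and number in it is invented for illustration and does not correspond to any particular alife system.

```python
import random

# Toy parameter-based alife loop: organisms are parameter vectors, mutation is
# random perturbation, and selection keeps those best matched to a target
# "environment". Purely illustrative.
TARGET = [0.2, 0.8, 0.5]                 # hypothetical ideal parameter values
POP_SIZE, GENERATIONS, MUT_STD = 30, 50, 0.1

def fitness(org):
    # Higher is better: negative squared distance from the target.
    return -sum((p - t) ** 2 for p, t in zip(org, TARGET))

def mutate(org):
    # Mutations are small random changes to each parameter.
    return [p + random.gauss(0.0, MUT_STD) for p in org]

population = [[random.random() for _ in TARGET] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]               # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=fitness)
print("Best organism:", [round(p, 2) for p in best])
```

Note that fitness here is defined explicitly by distance to a target, which is exactly the simplification the text attributes to evolutionary algorithms as opposed to open-ended alife simulations.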
Biology and health sciences
Biology basics
Biology
23398075
https://en.wikipedia.org/wiki/Guppy
Guppy
The guppy (), also known as millionfish or the rainbow fish, is one of the world's most widely distributed tropical fish and one of the most popular freshwater aquarium fish species. It is a member of the family Poeciliidae and, like almost all American members of the family, is live-bearing. Guppies originate from northeast South America, but have been introduced to many environments and are now found all over the world. They are highly adaptable and thrive in many different environmental and ecological conditions. Male guppies, which are smaller than females, have ornamental caudal and dorsal fins. Wild guppies generally feed on a variety of food sources, including benthic algae and aquatic insect larvae. Guppies are used as a model organism in the fields of ecology, evolution, and behavioural studies. Taxonomy Guppies were first described in Venezuela as Poecilia reticulata by Wilhelm Peters in 1859 and as Lebistes poecilioides in Barbados by De Filippi in 1861. It was named Girardinus guppii by Albert Günther in honor of Robert John Lechmere Guppy, who sent specimens of the species from Trinidad to the Natural History Museum in London. It was reclassified as Lebistes reticulatus by Regan in 1913. Then in 1963, Rosen and Bailey brought it back to its original name, Poecilia reticulata. While the taxonomy of the species was frequently changed and resulted in many synonyms, "guppy" remains the common name even as Girardinus guppii is now considered a junior synonym of Poecilia reticulata. Distribution and habitat Guppies are native to Antigua and Barbuda, Barbados, Brazil, Guyana, Trinidad and Tobago, and Venezuela. However, guppies have been introduced to many different countries on every continent except Antarctica. Sometimes this has occurred accidentally, but most often as a means of mosquito control. The guppies were expected to eat the mosquito larvae and help slow the spread of malaria, but in many cases, these guppies have had a negative impact on native fish populations. Field studies reveal that guppies have colonized almost every freshwater body accessible to them in their natural ranges, especially in the streams located near the coastal fringes of mainland South America. Although not typically found there, guppies also have tolerance to brackish water and have colonized some brackish environments. They tend to be more abundant in smaller streams and pools than in large, deep, or fast-flowing rivers. They also are capable of being acclimated to full saltwater like their molly cousins. Description Guppies exhibit sexual dimorphism. While wild-type females are grey in body colour, males have splashes, spots, or stripes that can be any of a wide variety of colors. The development and exhibiting of color patterns in male guppies is usually due to the amount of thyroid hormone that they contain. The thyroid hormones not only influence color pattern, but control endocrine function in response to their environment. The size of guppies vary, but males are typically long, while females are long. A variety of fancy guppy strains are produced by breeders through selective breeding, characterized by different colours, patterns, shapes, and sizes of fins, such as snakeskin and grass varieties. Many domestic strains have morphological traits that are very distinct from the wild-type antecedents. Males and females of many domestic strains usually have larger body size and are much more lavishly ornamented than their wild-type antecedents. 
Guppies have 23 pairs of chromosomes, including one pair of sex chromosomes, the same number as humans. The genes responsible for male guppies' ornamentations are Y-chromosome linked and are heritable. Lifecycle Two generations of guppies per year occur in the wild. Guppies are well developed and capable of independent existence without further parental care by the time they are born. Young guppies school together and perform anti-predator tactics. Brood size is extremely variable, yet some consistent differences exist among populations depending on the predation level and other factors. Females of matching body sizes tend to produce more numerous but smaller-sized offspring in high-predation conditions. Female guppies first produce offspring at 10–20 weeks of age, and they continue to reproduce until 20–34 months of age. Male guppies mature in 7 weeks or less. Total lifespan of guppies in the wild varies greatly, but it is typically around 2 years. Variations in such life historic characteristics of guppies are observed in different populations, indicating that different evolutionary pressures exist. Maturity Guppies' body sizes are positively correlated with age, and their size at maturation varies highly depending on the predation risk of the occupied environments. Male and female guppies from high-predation regions mature faster and start reproducing earlier, and they devote more resources to reproduction than those from low-predation regions. Females from high-predation regions reproduce more frequently and produce more offspring per litter, indicating that they are more fecund than low-predation females. Female guppies' reproductive success is also related to age. Older females produce offspring with reduced size and at increased interbrood intervals. Senescence One major factor that affects wild guppies' senescence patterns is the mortality rate caused by predation. Guppies from high-predation environments suffer high extrinsic mortality rate because they are more likely to be killed by predators. Female guppies from high-predation environments experience a significant increase in mortality at 6 months of age, while those from low-predation environments do not suffer increased mortality until 16 months. However, guppies from high-predation environments were found to have longer lifespans because their reproductive lifespans are longer. No significant difference is seen in postreproductive lifespans. Population regulations In addition to senescence pattern, resource availability and density also matter in regulation of guppy populations. Guppies reduce their fecundity and reproductive allocation in response to scarce food. When food is abundant, they increase brood size. Differential reproductive allocation can be the cause of seasonality of life-history characteristics in some guppy populations. For example, during the wet season from May to December, guppies in the Northern Range of Trinidad reduce their investment in reproduction regardless of predation level, possibly in response to decreased food resources. Population density also matters in simpler environments because higher intraspecific competition causes a decrease in reproductive rate and somatic growth rate, and a corresponding increase in juvenile mortality rate due to cannibalism. It was confirmed that in low-predation environments, guppy populations are in part regulated by density. Ecology and behavior Mating Guppies have the mating system called polyandry, where females mate with multiple males. 
Multiple mating is beneficial for males because the males' reproductive success is directly related to how many times they mate. The cost of multiple mating for males is very low because they do not provide material benefit to the females or parental care to the offspring. Conversely, multiple mating can be disadvantageous for females because it reduces foraging efficiency and increases the chances of predation and parasitic infection. However, females gain some potential benefits from multiple mating. For example, females that mate multiple times are found to be able to produce more offspring in shorter gestation time, and their offspring tend to have better qualities such as enhanced schooling and predator evasion abilities. Female guppies mate again more actively and delay the development of a brood when the anticipated second mate is more attractive than the first male. Experiments show that remating females prefer a novel male to the original male or a brother of the original male with similar phenotypes. Females' preference for novel males in remating can explain the excessive phenotypic polymorphism in male guppies. Inbreeding avoidance Inbreeding ordinarily has negative fitness consequences (inbreeding depression), and as a result species have evolved mechanisms to avoid inbreeding. Inbreeding depression is considered to be due largely to the expression of homozygous deleterious recessive mutations. Numerous inbreeding avoidance mechanisms operating prior to mating have been described. However, inbreeding avoidance mechanisms that operate subsequent to copulation are less well known. In guppies, a post-copulatory mechanism of inbreeding avoidance occurs based on competition between sperm of rival males for achieving fertilization. In competitions between sperm from an unrelated male and from a full sibling male, a significant bias in paternity towards the unrelated male was observed. Females' mating choice Female guppy choice plays an important role in multiple mating. Female guppies are attracted to brightly colored males, especially ones with orange spots on the flank. Orange spots can serve as an indicator of better physical fitness, as orange-spotted males are observed to swim longer in a strong current. There is also the concept of color association to possibly explain mate choice since one of the food sources wild guppies compete vigorously for is the fruit of cabrehash trees (Sloanea laurifolia), an orange carotenoid-containing fruit. The orange coloration that female guppies select for in males is composed of carotenoids, the saturation of which is affected by the male's carotenoid ingestion and parasite load. Guppies cannot synthesize these pigments by themselves and must obtain them through their diet. Because of this connection, females are possibly selecting for healthy males with superior foraging abilities by choosing mates with bright orange carotinoid pigments, thus increasing the survival chance of her offspring. Due to the advantage in mating, male guppies evolve to have more ornamentation across generations in low-predation environments where the cost of being conspicuous is lower. The rate and duration of courtship display of male guppies also play an important role in female guppies' mating choice. Courtship behavior is another indicator of fitness due to the physical strength involved in maintaining the courtship dance, called sigmoid display, in which the males flex their bodies into an S shape and vibrate rapidly. 
Female mating choice may also be influenced by another female's choice. In an experiment, female guppies watched two males, one solitary and the other actively courting another female, and were given a choice between the two. Most females spent a longer time next to the male that was courting. Female guppies' preference for fit males allows their descendants to inherit better physical fitness and better chance of survival. Predation Guppies have many predators, such as larger fish and birds, in their native range. Some of their common predators in the wild are Crenicichla alta, Anablepsoides hartii, and Aequidens pulcher. Guppies' small bodies and the bright coloration of males make them easy prey, and like many fish, they often school together to avoid predation. Schooling is more favored by evolution in populations of guppies under high predation pressure, exerted either by predator type or predator density. Male guppies rely on schooling, in particular the behavioral responses of females, to make antipredator decisions. Coloration of guppies also evolves differentially in response to predation. Male guppies that are brighter in color have an advantage in mating as they attract more females in general, but they have a higher risk of being noticed by predators than duller males. Male guppies evolve to be more dull in color and have fewer, smaller spots under intense predation both in wild and in laboratory settings. Female guppies in a high-predation environment also evolve to prefer brightly colored males less, often rejecting them. Predator inspection When guppies encounter a potential predator, some of them approach the predator to assess danger. This behavior, called predator inspection, benefits the inspector since it gains information, but puts the inspector at a risk of predation. To reduce the risk, inspectors avoid the predator's mouth area—called the 'attack cone'—and approach the predator from the side or back. They may also form a group for protection, the size of which is larger in high-predation populations. Although evidence indicates predators are less likely to attack an inspector than a non-inspector, the inspectors remain at higher risk due to proximity to the predator. Risk-taking behaviors such as predator inspection can be evolutionarily stable only when a mechanism prevents selfish individuals from taking advantage of "altruistic" individuals. Guppies may adopt a conditional-approach strategy that resembles tit for tat. According to this hypothesis, guppies would inspect the predator on the first move, but if their co-inspectors do not participate in the predator inspection visits or do not approach the predator close enough, they can retaliate by copying the defector's last move in the next predator inspection visit. The hypothesis was supported in laboratory experiments. Predator diversion When guppies detect a predator, their irises rapidly darken from silver to jet black, which draws predators to attack the guppies' head instead of their body's center of mass. Perhaps counterintuitively, this predator divertive behavior allows guppies to rapidly pivot out of the way as predators lunge where the guppies' head was; this "matador-like" anti-predator behavior was first described in guppies but may be found in other animal species with bright, attention-grabbing coloration located on vital organs, such as epaulette sharks. 
Parasites Guppies are also host to a range of parasites, and one of these, Gyrodactylus turnbulli, has been used as a model system for studying host-parasite interactions. Recent work has shown that the interaction between chronic exposure to anthropogenic noise and G. turnbulli can decrease guppy survival, while even a short burst of underwater noise increases parasite densities on the host, most likely resulting in negative fitness effects for guppies. Feeding Wild guppies feed on algal remains, diatoms, invertebrates, zooplankton, detritus, plant fragments, mineral particles, aquatic insect larvae, and other sources. Algal remains constitute the biggest proportion of wild guppy diet in most cases, but diets vary depending on the specific conditions of food availability in the habitat. For example, a study on wild Trinidad guppies showed that guppies collected from an oligotrophic upstream region (upper Aripo River) mainly consumed invertebrates, while guppies from a eutrophic downstream region (lower Tacarigua River) consumed mostly diatoms and mineral particles. Algae are less nutritious than invertebrates, and the guppies that feed mainly on algae have poor diets. Guppies have also been observed eating native fishes' eggs and, when kept in laboratory conditions, occasionally cannibalizing their own young. Guppies' diet preference is not simply correlated with the abundance of a particular food. Laboratory experiments confirmed that guppies show 'diet switching' behavior, in which they feed disproportionately on the more abundant food when they are offered two food choices. These results show that different groups of guppies have weak and variable food preferences. Diet preference in guppies could be related to factors such as the presence of competitors. For example, the lower Tacarigua River has a larger variety of species and competition for invertebrate prey is higher; therefore, the proportion of invertebrates is small in the diets of those guppies. Foraging Guppies often forage in groups because they can find food more easily. Shoaling guppies spend less time and energy on antipredatory behavior than solitary ones and spend more time on feeding. However, such behavior results in food that is found being shared with other members of the group. Studies also show that, when an evolutionary cost exists, guppies that tend to shoal are less aggressive and less competitive with regard to scarce resources. Therefore, shoaling is preferred in high-predation regions, but not in low-predation regions. When guppies with a high tendency to shoal were isolated from high-predation regions and were relocated to predator-free environments, they decreased their shoaling behavior over time, supporting the hypothesis that shoaling is less preferred in low-predation environments. Reproduction Guppies are highly prolific livebearers. The gestation period of guppies varies considerably, ranging from 20 to 60 days at 25 to 27 °C and depending on several environmental factors. Reproduction typically continues through the year, and the female becomes ready for conception again quickly after parturition. Male guppies, like other members of the family Poeciliidae, possess a modified tubular anal fin called the gonopodium, located directly behind the ventral fin. The gonopodium has a channel-like structure through which bundles of spermatozoa, called spermatozeugmata, are transferred to females. 
In courted mating, where the female shows receptive behavior following the male's courtship display, the male briefly inserts the gonopodium into the female's genital pore for internal fertilization. However, in the case of sneaky mating where copulation is forced, the male approaches the female and thrusts the gonopodium at the female's urogenital pore. Once inseminated, female guppies can store sperm in their ovaries and gonoducts, which can continue to fertilize ova up to eight months. Because of the sperm-storage mechanism, males are capable of posthumous reproduction, meaning the female mate can give birth to the male's offspring long after the male's death, which contributes significantly to the reproductive dynamics of the wild guppy populations. The guppy has been successfully hybridised with various species of molly (Poecilia latipinna or P. velifera), e.g., male guppy and female molly. However, the hybrids are always male and appear to be infertile. The guppy has also been hybridised with the Endler's livebearer (Poecilia wingei) to produce fertile offspring, with the suggestion that, despite physical and behavioural differences, Endler's may represent a subspecies of Poecilia reticulata rather than a distinct species. Inbreeding depression Due to the extensive selective breeding of guppies for desirable traits such as greater size and colour, some strains of the fish have become less hardy than their wild counterparts. Immense inbreeding of guppies has been found to affect body size, fertility and susceptibility to diseases. In the aquarium Guppies prefer a hard-water aquarium with a temperature between and salt levels equivalent to one tablespoon per . They can withstand levels of salinity up to 150% that of normal seawater, which has led to them being occasionally included in marine tropical community tanks, as well as in freshwater tropical tanks. Guppies are generally peaceful, though nipping behaviour is sometimes exhibited between male guppies or towards other top swimmers like members of the genus Xiphophorus (platies and swordtails), and occasionally other fish with prominent fins, such as angelfish. Guppies should not be kept as a single fish in an aquarium because both males and females show signs of shoaling, and are usually found in large groups in the wild. Its most famous characteristic is its propensity for breeding, and it can breed in both freshwater and marine aquaria. Guppies prefer water temperatures around for reproduction. Pregnant female guppies have enlarged and darkened gravid spots near their anal vents. Just before birth, the eyes of fry may be seen through the translucent skin in this area of the female's body. When birth occurs, individual offspring are dropped in sequence, typically over a period of one to six hours. The female guppy has drops of two to 200 fry at a time, though typically ranging between 30 and 60. Well-fed adults do not often eat their own young, although sometimes safe zones are required for the fry. Specially designed livebearer birthing tanks, which can be suspended inside the aquarium, are available from aquatic retailers. These also serve to shield the pregnant female from further attention from the males, which is important because the males sometimes attack the females while they are giving birth. It also provides a separate area for the newborn young as protection from being eaten by their mother. However, if a female is put in the breeder box too early, it may cause her to have a miscarriage. 
Well-planted tanks that offer barriers to adult guppies shelter the young quite well. Guppy grass, water sprite, water wisteria, duckweed, water lettuce and java moss are all good choices. A continuous supply of live food, such as Daphnia or brine shrimp, keep adult fish full and may spare the fry when they are born. Young fry take roughly three or four months to reach maturity. Feeding fry live foods, such as baby brine shrimp, microworms, infusoria and vinegar eels, is recommended. Alternatives include finely ground flake food, egg yolk, and liquid fish food, though the particulates in these may be too large for the youngest fry to eat. Common diseases Guppies are susceptible to various diseases, which may stem from bacterial, parasitic, or fungal infections. Maintaining a clean tank, a balanced diet, and regular monitoring can help in preventing these diseases. Ichthyophthirius multifiliis (Ich) Ichthyophthirius multifiliis, commonly known as ich, is a protozoan parasite that infects guppies and other freshwater fish. The infection is characterized by white cysts appearing on the skin, gills, and fins of the affected fish, giving a distinct white spot appearance which is often referred to as "white spot disease". The life cycle of Ichthyophthirius multifiliis involves three stages: the trophont stage, the tomont stage, and the theront stage. Fin rot Fin rot is primarily caused by bacterial infections, although fungal infections can also be a culprit. The condition manifests through the progressive decay or fraying of the fins, often accompanied by discoloration, usually turning the edges of the fins white, black, or red. The primary causative agents of fin rot are gram-negative bacteria such as Pseudomonas fluorescens and Aeromonas hydrophila. Poor water quality, overcrowding, and stress are significant contributors to the onset and progression of the disease, as they create an environment conducive for bacterial growth and can compromise the fish's immune system. Columnaris Columnaris, also known as cotton mouth disease or cotton wool disease, is a common bacterial infection in guppies and other freshwater fish, caused by the bacterium Flavobacterium columnare. This bacterium thrives in warm, freshwater environments. Treatment for columnaris should commence promptly to prevent severe mortality. Common treatment measures include: improving water quality, antibacterial medications such as kanamycin, erythromycin, or oxytetracycline, and in extreme cases, antibiotic injections. Velvet disease Velvet, also known as gold dust disease, is a prevalent ailment caused by the dinoflagellate parasites of the genus Oodinium. When these parasites attach to a fish's skin, gills, and eyes, they trigger a range of symptoms. Notable symptoms include a fine gold or rust-colored dust appearing on the fish's body, clamped fins, scratching against objects, rapid gill movement due to irritation, decreased feeding, lethargy, and, in advanced stages, respiratory distress. Swim bladder disease Swim bladder disease is a common condition which impairs their ability to maintain buoyancy. This condition is associated with the swim bladder, a gas-filled organ that aids fish in remaining buoyant at varying water depths. 
The symptoms of swim bladder disease are quite distinctive and include difficulty in maintaining buoyancy which causes the fish to either float to the top or sink to the bottom, abnormal swimming patterns such as swimming on the side or upside down, and a bloated appearance or a visibly enlarged belly. Several factors can contribute to the onset of swim bladder disease. Overfeeding is a common cause, leading to constipation which may press against the swim bladder. Bacterial or viral infections affecting the swim bladder can also trigger this condition. Physical injury or congenital deformities of the swim bladder are other potential causes.
Biology and health sciences
Acanthomorpha
null
24873453
https://en.wikipedia.org/wiki/Season
Season
A season is a division of the year based on changes in weather, ecology, and the number of daylight hours in a given region. On Earth, seasons are the result of Earth's tilted axis, which keeps a nearly constant orientation (axial parallelism) as the planet orbits the Sun. In temperate and polar regions, the seasons are marked by changes in the intensity of sunlight that reaches the Earth's surface, variations of which may cause animals to undergo hibernation or to migrate, and plants to be dormant. Various cultures define the number and nature of seasons based on regional variations, and as such there are a number of both modern and historical definitions of the seasons. The Northern Hemisphere experiences most direct sunlight during May, June, and July (thus the traditional celebration of Midsummer in June), as the hemisphere faces the Sun. For the Southern Hemisphere it is instead in November, December, and January. It is Earth's axial tilt that causes the Sun to be higher in the sky during the summer months, which increases the solar flux. Because of seasonal lag, June, July, and August are the warmest months in the Northern Hemisphere while December, January, and February are the warmest months in the Southern Hemisphere. In temperate and sub-polar regions, four seasons based on the Gregorian calendar are generally recognized: spring, summer, autumn (fall), and winter. Ecologists often use a six-season model for temperate climate regions that is not tied to any fixed calendar dates: prevernal, vernal, estival, serotinal, autumnal, and hibernal. Many tropical regions have two seasons: the rainy/wet/monsoon season and the dry season. Some have a third cool, mild, or harmattan season. "Seasons" can also be dictated by the timing of important ecological events such as hurricane season, tornado season, and wildfire season. Some examples of historical importance are the ancient Egyptian seasons—flood, growth, and low water—which were defined by the former annual flooding of the Nile in Egypt. Seasons often hold special significance for agrarian societies, whose lives revolve around planting and harvest times, and the change of seasons is often attended by ritual. The definition of seasons is also cultural. In India, from ancient times to the present day, six seasons or Ritu based on South Asian religious or cultural calendars are recognised and identified for purposes such as agriculture and trade. Causes and effects Axial parallelism The Earth's axis exhibits approximate axial parallelism, maintaining its direction toward Polaris (the "North Star") year-round. This is one of the primary reasons for the Earth's seasons. Minor variation in the direction of the axis, known as axial precession, takes place over the course of 26,000 years, and therefore is not noticeable to modern human civilization. Axial tilt The seasons result from the Earth's axis of rotation being tilted with respect to its orbital plane by an angle of approximately 23.4 degrees. (This tilt is also known as "obliquity of the ecliptic".) Regardless of the time of year, the northern and southern hemispheres always experience opposite seasons. This is because during summer or winter, one part of the planet is more directly exposed to the rays of the Sun than the other, and this exposure alternates as the Earth revolves in its orbit. For approximately half of the year (from around March 20 to around September 22), the Northern Hemisphere tips toward the Sun, with the maximum amount occurring on about June 21. 
For the other half of the year, the same happens, but in the Southern Hemisphere instead of the Northern, with the maximum around December 21. The two instants when the Sun is directly overhead at the Equator are the equinoxes. Also at that moment, both the North Pole and the South Pole of the Earth are just on the terminator, and hence day and night are equally divided between the two hemispheres. Around the March equinox, the Northern Hemisphere will be experiencing spring as the hours of daylight increase, and the Southern Hemisphere is experiencing autumn as daylight hours shorten. The effect of axial tilt is observable as the change in day length and the altitude of the Sun at solar noon (the Sun's culmination) during the year. The low angle of the Sun during the winter months means that incoming rays of solar radiation are spread over a larger area of the Earth's surface, so the light received is more indirect and of lower intensity. Between this effect and the shorter daylight hours, the axial tilt of the Earth accounts for most of the seasonal variation in climate in both hemispheres. Elliptical Earth orbit Compared to axial parallelism and axial tilt, other factors contribute little to seasonal temperature changes. The seasons are not the result of the variation in Earth's distance to the Sun because of its elliptical orbit. In fact, Earth reaches perihelion (the point in its orbit closest to the Sun) in January, and it reaches aphelion (the point farthest from the Sun) in July, so the slight contribution of orbital eccentricity opposes the temperature trends of the seasons in the Northern Hemisphere. In general, the effect of orbital eccentricity on Earth's seasons is a 7% variation in sunlight received. Orbital eccentricity can influence temperatures, but on Earth, this effect is small and is more than counteracted by other factors; research shows that the Earth as a whole is actually slightly warmer when farther from the Sun. This is because the Northern Hemisphere has more land than the Southern, and land warms more readily than sea. Any noticeable intensification of southern winters and summers due to Earth's elliptical orbit is mitigated by the abundance of water in the Southern Hemisphere. Maritime and hemispheric Seasonal weather fluctuations (changes) also depend on factors such as proximity to oceans or other large bodies of water, currents in those oceans, El Niño/ENSO and other oceanic cycles, and prevailing winds. In the temperate and polar regions, seasons are marked by changes in the amount of sunlight, which in turn often causes cycles of dormancy in plants and hibernation in animals. These effects vary with latitude and with proximity to bodies of water. For example, the South Pole is in the middle of the continent of Antarctica and therefore a considerable distance from the moderating influence of the southern oceans. The North Pole is in the Arctic Ocean, and thus its temperature extremes are buffered by the water. The result is that the South Pole is consistently colder during the southern winter than the North Pole during the northern winter. The seasonal cycle in the polar and temperate zones of one hemisphere is opposite to that of the other. When it is summer in the Northern Hemisphere, it is winter in the Southern, and vice versa. 
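The roughly 7% figure quoted above follows directly from the inverse-square law applied to Earth's slightly elliptical orbit. In the sketch below, the orbital eccentricity of about 0.0167 is a standard value rather than one stated in this article.

```python
# Perihelion-to-aphelion variation in incident sunlight (illustrative only).
e = 0.0167  # Earth's orbital eccentricity (standard value, assumed)

# Flux follows an inverse-square law, so the perihelion/aphelion ratio is
# ((1 + e) / (1 - e)) ** 2.
flux_ratio = ((1 + e) / (1 - e)) ** 2
print(f"Sunlight at perihelion exceeds aphelion by about {100 * (flux_ratio - 1):.1f}%")
```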
Tropics The tropical and (to a lesser degree) subtropical regions see little annual fluctuation of sunlight and temperature due to Earth's moderate 23.4-degree tilt being insufficient to appreciably affect the strength of the sun's rays annually. The slight differences between the solstices and the equinoxes cause seasonal shifts along a rainy low-pressure belt called the Intertropical Convergence Zone (ICZ). As a result, the amount of precipitation tends to vary more dramatically than the average temperature. When the ICZ is north of the Equator, the northern tropics experience their wet season while the southern tropics have their dry season. This pattern reverses when the ICZ migrates to a position south of the Equator. Mid-latitude thermal lag In meteorological terms, the solstices (the maximum and minimum insolation) do not fall in the middles of summer and winter. The heights of these seasons occur up to 7 weeks later because of seasonal lag. Seasons, though, are not always defined in meteorological terms. In astronomical reckoning by hours of daylight alone, the solstices and equinoxes are in the middle of the respective seasons. Because of seasonal lag due to thermal absorption and release by the oceans, regions with a continental climate, which predominate in the Northern Hemisphere, often consider these four dates to be the start of the seasons as in the diagram, with the cross-quarter days considered seasonal midpoints. The length of these seasons is not uniform because of Earth's elliptical orbit and its different speeds along that orbit. Four-season reckoning Most calendar-based partitions use a four-season model to demarcate the warmest and coldest seasons, which are further separated by two intermediate seasons. Calendar-based reckoning defines the seasons in relative rather than absolute terms, so the coldest quarter-year is considered winter even if floral activity is regularly observed during it, despite the traditional association of flowers with spring and summer. The major exception is in the tropics where, as already noted, the winter season is not observed. The four seasons have been in use since at least Roman times, as in the Rerum rusticarum of Varro. Varro says that spring, summer, autumn, and winter start on the 23rd day of the sun's passage through Aquarius, Taurus, Leo, and Scorpio, respectively. Nine years before he wrote, Julius Caesar had reformed the calendar, so Varro was able to assign the dates of February 7, May 9, August 11, and November 10 to the start of spring, summer, autumn, and winter. Official As noted, a variety of dates and even exact times are used in different countries or regions to mark changes of the calendar seasons. These observances are often declared "official" within their respective areas by the local or national media, even when the weather or climate is contradictory. These are mainly a matter of custom and not generally proclaimed by governments north or south of the equator for civil purposes. Meteorological Meteorological seasons are reckoned by temperature, with summer being the hottest quarter of the year and winter the coldest quarter of the year. In 1780 the Societas Meteorologica Palatina (which became defunct in 1795), an early international organization for meteorology, defined seasons as groupings of three whole months as identified by the Gregorian calendar.
According to this definition, for temperate areas in the northern hemisphere, spring begins on 1 March, summer on 1 June, autumn on 1 September, and winter on 1 December. For the southern hemisphere temperate zone, spring begins on 1 September, summer on 1 December, autumn on 1 March, and winter on 1 June. In Australasia the meteorological terms for seasons apply to the temperate zone that occupies all of New Zealand, New South Wales, Victoria, Tasmania, the south-eastern corner of South Australia and the south-west of Western Australia, and the south-east Queensland areas south of Brisbane. In Sweden and Finland, meteorologists and news outlets use the concept of thermal seasons, which are defined based on mean daily temperatures. The beginning of spring is defined as when the mean daily temperature permanently rises above 0 °C. The beginning of summer is defined as when the temperature permanently rises above +10 °C, autumn as when the temperature permanently falls below +10 °C, and winter as when the temperature permanently falls below 0 °C. In Finland, "permanently" is defined as when the mean daily temperature remains above or below the defined limit for seven consecutive days. (In Sweden the number of days ranges from 5 to 7 depending on the season.) This implies two things: the seasons do not begin on fixed dates and must be determined by observation, so they are known only after the fact; and the seasons begin on different dates in different parts of the country. The India Meteorological Department (IMD) designates four climatological seasons: Winter, occurring from December to February. The year's coldest months are December and January, when temperatures average around in the northwest; temperatures rise as one proceeds toward the equator, peaking around in mainland India's southeast. Summer or pre-monsoon season, lasting from March to May. In western and southern regions, the hottest month is April; for northern regions of India, May is the hottest month. Temperatures average around in most of the interior. Monsoon or rainy season, lasting from June to September. The season is dominated by the humid southwest summer monsoon, which slowly sweeps across the country beginning in late May or early June. Monsoon rains begin to recede from North India at the beginning of October. South India typically receives more rainfall. Post-monsoon or autumn season, lasting from October to November. In the northwest of India, October and November are usually cloudless. Tamil Nadu receives most of its annual precipitation in the northeast monsoon season. In China, a common temperature-based reckoning holds that it is winter for the period when temperatures are below 10 °C on average and summer for the period when temperatures are above 22 °C on average. This means that areas with relatively extreme climates (such as the Paracel Islands and parts of the Tibetan plateau) may be said to have summer all year round or winter all year round. Astronomical Astronomical timing as the basis for designating the temperate seasons dates back at least to the Julian Calendar used by the ancient Romans. As mentioned above, Varro wrote that spring, summer, autumn, and winter start on the 23rd day of the sun's passage through Aquarius, Taurus, Leo, and Scorpio, respectively, and that (in the Julian Calendar) these days were February 7, May 9, August 11, and November 10.
He points out that the lengths are not equal, being 91 (in non-leap years), 94, 91, and 89 days for spring, summer, autumn, and winter, respectively. The midpoints of these seasons were March 24 or 25, June 25, September 25 or 26, and December 24 or 25, which are near to the equinoxes and solstices of his day. Pliny the Elder, in his Natural History, mentions the two equinoxes and the two solstices and gives the lengths of the intervals (values which were fairly correct in his day but are no longer very correct because the perihelion has moved from December into January). He then defines the seasons of autumn, winter, spring, and summer as starting half-way through these intervals. He gives "the eighth day to the Kalends of January" (December 25) as the date of the winter solstice, though actually it occurred on the 22nd or 23rd at that time. At the present time, the astronomical timing has winter starting at the winter solstice, spring at the spring equinox, and so on. This is used worldwide, although some countries like Australia, New Zealand, Pakistan and Russia prefer to use meteorological reckoning. The precise timing of the seasons is determined by the exact times of the sun reaching the tropics of Cancer and Capricorn for the solstices and the times of the sun's transit over the equator for the equinoxes, or a traditional date close to these times. The following diagram shows the relation between the line of solstice and the line of apsides of Earth's elliptical orbit. The orbital ellipse (with eccentricity exaggerated for effect) goes through each of the six Earth images, which are sequentially the perihelion (periapsis—nearest point to the sun) on anywhere from 2 January to 5 January, the point of March equinox on 19, 20 or 21 March, the point of June solstice on 20 or 21 June, the aphelion (apoapsis—farthest point from the sun) on anywhere from 3 July to 6 July, the September equinox on 22 or 23 September, and the December solstice on 21 or 22 December. These "astronomical" seasons are not of equal length, because of the elliptical nature of the orbit of the Earth, as discovered by Johannes Kepler. From the March equinox it currently takes 92.75 days until the June solstice, then 93.65 days until the September equinox, 89.85 days until the December solstice and finally 88.99 days until the March equinox. Thus the time from the March equinox to the September equinox is 7.56 days longer than from the September equinox to the March equinox. Variation due to calendar misalignment The times of the equinoxes and solstices are not fixed with respect to the modern Gregorian calendar, but fall about six hours later every year, amounting to one full day in four years. They are reset by the occurrence of a leap year. The Gregorian calendar is designed to keep the March equinox no later than 21 March as accurately as is practical. The calendar equinox (used in the calculation of Easter) is 21 March, the same date as in the Easter tables current at the time of the Council of Nicaea in AD 325. The calendar is therefore framed to prevent the astronomical equinox wandering onto 22 March. From Nicaea to the date of the reform, the years 500, 600, 700, 900, 1000, 1100, 1300, 1400, and 1500, which would not have been leap years in the Gregorian calendar, amount to nine extra days, but astronomers directed that ten days be removed. Because of this, the (proleptic) Gregorian calendar agrees with the Julian calendar in the third century of the Christian era, rather than in the fourth. 
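Both the interval lengths quoted above and the slow drift of the equinox dates lend themselves to a quick arithmetic check. In the sketch below, the interval lengths come from the text, while the approximate lengths of the mean tropical year (365.2422 days) and of the average Julian (365.25) and Gregorian (365.2425) calendar years are standard reference values rather than figures given in the article.
intervals = {
    "Mar equinox -> Jun solstice": 92.75,
    "Jun solstice -> Sep equinox": 93.65,
    "Sep equinox -> Dec solstice": 89.85,
    "Dec solstice -> Mar equinox": 88.99,
}
print(round(sum(intervals.values()), 2))          # ~365.24 days, one tropical year
print(round((92.75 + 93.65) - (89.85 + 88.99), 2))  # 7.56 days longer March -> September
tropical = 365.2422                               # mean tropical year, days
for name, calendar_year in [("Julian", 365.25), ("Gregorian", 365.2425)]:
    drift_per_year = calendar_year - tropical     # how much the calendar outruns the seasons
    print(name, round(1 / drift_per_year), "years for the dates to drift by one day")
The Julian leap rule overcorrects by roughly one day every 128 years, which is the long-term shift toward earlier dates mentioned in the surrounding text; the Gregorian rule cuts that drift to about one day in three millennia.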
Currently, the most common equinox and solstice dates are March 20, June 21, September 22 or 23, and December 21; the four-year average slowly shifts to earlier times as a century progresses. This shift is a full day in about 128 years (compensated mainly by the century "leap year" rules of the Gregorian calendar); as 2000 was a leap year, the current shift has been progressing since the beginning of the last century, when equinoxes and solstices were relatively late. This also means that in many years of the twentieth century, the dates March 21, June 22, September 23, and December 22 were much more common, so older books teach (and older people may still remember) these dates. All the times are given in UTC (roughly speaking, the time at Greenwich, ignoring British Summer Time). People living farther to the east (Asia and Australia), whose local times are in advance, see the astronomical seasons apparently start later; for example, in Tonga (UTC+13), an equinox occurred on September 24, 1999, a date on which the equinox will not fall again until 2103. On the other hand, people living far to the west (America), whose clocks run behind UTC, may experience an equinox as early as March 19. Change over time Over thousands of years, the Earth's axial tilt and orbital eccentricity vary (see Milankovitch cycles). The equinoxes and solstices move westward relative to the stars while the perihelion and aphelion move eastward. Thus, ten thousand years from now Earth's northern winter will occur at aphelion and northern summer at perihelion. The severity of seasonal change — the average temperature difference between summer and winter in a given location — will also change over time because the Earth's axial tilt fluctuates between 22.1 and 24.5 degrees. Smaller irregularities in the times are caused by perturbations of the Moon and the other planets. Solar Solar timing is based on insolation in which the solstices and equinoxes are seen as the midpoints of the seasons. This was the case with the seasons described by the Roman scholar Varro (see above). It was the method for reckoning seasons in medieval Europe, especially by the Celts, and is still ceremonially observed in Ireland and some East Asian countries. Summer is defined as the quarter of the year with the greatest insolation and winter as the quarter with the least. The solar seasons change at the cross-quarter days, which are about 3–4 weeks earlier than the meteorological seasons and 6–7 weeks earlier than seasons starting at equinoxes and solstices. Thus, the day of greatest insolation is designated "midsummer" as noted in William Shakespeare's play A Midsummer Night's Dream, which is set on the summer solstice. On the Celtic calendar, the start of the seasons corresponds to four Pagan agricultural festivals - the traditional first day of winter is 1 November (Samhain, the Celtic origin of Halloween); spring starts 1 February (Celtic Imbolc); summer begins 1 May (Beltane, the Celtic origin of May Day); the first day of autumn is 1 August (Celtic Lughnasadh). Solar terms The traditional calendar in China has four seasons based on 24 periods, twelve of which are called zhōngqi and twelve of which are known as jiéqi. These periods are collectively known in English as "solar terms" or "solar breaths". The four seasons chūn, xià, qiū, and dōng—translated as "spring", "summer", "autumn", and "winter"—each center on the respective solstice or equinox.
Astronomically, the seasons are said to begin on Lichun ("the start of spring") on about 4 February, Lixia on about 6 May, Liqiu on about 8 August, and Lidong on about 8 November. This system forms the basis of other such systems in East Asian lunisolar calendars. Six-season reckoning Some calendars in south Asia use a six-season partition where the number of seasons between summer and winter can number from one to three. The dates are fixed at even intervals of months. In the Hindu calendar of tropical and subtropical India, there are six seasons or Ritu that are calendar-based in the sense of having fixed dates: Vasanta (spring), Grishma (summer), Varsha (monsoon), Sharada (autumn), Hemanta (early winter), and Shishira (prevernal or late winter). The six seasons are ascribed to two months each of the twelve months in the Hindu calendar. The Bengali Calendar is similar but differs in start and end times, with its own set of six seasons or ritu. The Odia Calendar is similar but differs in start and end times. The Tamil calendar follows a similar pattern of six seasons. Non-calendar-based reckoning Ecologically speaking, a season is a period of the year in which only certain types of floral and animal events happen (e.g.: flowers bloom—spring; hedgehogs hibernate—winter). So, if a change in daily floral and animal events can be observed, the season is changing. In this sense, ecological seasons are defined in absolute terms, unlike calendar-based methods in which the seasons are relative. If specific conditions associated with a particular ecological season do not normally occur in a particular region, then that area cannot be said to experience that season on a regular basis. Modern mid-latitude ecological Six ecological seasons can be distinguished without the fixed calendar-based dates used for the meteorological and astronomical seasons. Oceanic regions tend to experience the beginning of the hibernal season up to a month later than continental climates. Conversely, prevernal and vernal seasons begin up to a month earlier near oceanic and coastal areas. For example, prevernal crocus blooms typically appear as early as February in coastal areas of British Columbia and the British Isles, but generally do not appear until March or April in locations like the Midwestern United States and parts of eastern Europe. The actual dates for each season vary by climate region and can shift from one year to the next. Average dates listed here are for mild and cool temperate climate zones in the Northern Hemisphere: Prevernal (early or pre-spring): Begins February (mild temperate) to March (cool temperate). Deciduous tree buds begin to swell. Some types of migrating birds fly from winter to summer habitats. Vernal (spring): Begins mid-March (mild temperate) to late April (cool temperate). Tree buds burst into leaves. Birds establish territories and begin mating and nesting. Estival (high summer): Begins June in most temperate climates. Trees in full leaf. Birds hatch and raise offspring. Serotinal (late summer): Generally begins mid to late August. Deciduous leaves begin to change color in higher latitude locations (above 45° north). Young birds reach maturity and join other adult birds preparing for autumn migration. The traditional "harvest season" begins by early September. Autumnal (autumn): Generally begins mid to late September. Tree leaves in full color then turn brown and fall to the ground. Birds migrate back to wintering areas.
Hibernal (winter): Begins December (mild temperate), November (cool temperate). Deciduous trees are bare and fallen leaves begin to decay. Migrating birds settle in winter habitats. Indigenous ecological Indigenous people in polar, temperate and tropical climates of northern Eurasia, the Americas, Africa, Oceania, and Australia have traditionally defined the seasons ecologically by observing the activity of the plants, animals and weather around them. Each separate tribal group traditionally observes different seasons determined according to local criteria that can vary from the hibernation of polar bears on the arctic tundras to the growing seasons of plants in the tropical rainforests. In Australia, some tribes have up to eight seasons in a year, as do the Sami people in Scandinavia. Many indigenous people who no longer live directly off the land in traditional, often nomadic, styles now observe modern methods of seasonal reckoning according to what is customary in their particular country or region. The North American Cree and possibly other Algonquian speaking peoples used or still use a six-season system, with the extra two seasons denoting the freezing and breaking up of the ice on rivers and lakes. The Noongar people of South-West Western Australia recognise maar-keyen bonar, or six seasons. Each season's arrival is heralded not by a calendar date, but by environmental factors such as changing winds, flowering plants, temperature and migration patterns, and lasts approximately two standard calendar months. The seasons also correlate to aspects of the human condition, intrinsically linking the lives of the people to the world that surrounds them and also dictating their movements, as with each season, various parts of the country would be visited which were particularly abundant or safe from the elements. Tropical Two seasons In the tropics, where seasonal dates also vary, it is more common to speak of the rainy (or wet, or monsoon) season versus the dry season. For example, in Nicaragua the dry season (November to April) is called "summer" and the rainy season (May to October) is called "winter", even though it is located in the northern hemisphere. There is no noticeable change in the amount of sunlight at different times of the year. Instead, many regions (such as the northern Indian Ocean) have varying monsoon rain and wind cycles. Floral and animal activity variation near the equator depends more on wet/dry cycles than seasonal temperature variations, with different species flowering (or emerging from cocoons) at specific times before, during, or after the monsoon season. Thus, the tropics are characterized by numerous "mini-seasons" within the larger seasonal blocks of time. In the tropical parts of Australia, in the northern parts of Queensland, Western Australia, and the Northern Territory, wet and dry seasons are observed in addition to or in place of temperate season names. Three seasons The most historically important of these are the three seasons—flood, growth, and low water—which were defined by the annual flooding of the Nile in Egypt. In some tropical areas a three-way division into hot, rainy, and cool season is used. In Thailand, three seasons are recognised. Polar Any point north of the Arctic Circle or south of the Antarctic Circle will have one period in the summer called "polar day" when the sun does not set, and one period in the winter called "polar night" when the sun does not rise.
At progressively higher latitudes, the maximum periods of "midnight sun" and "polar night" are progressively longer. For example, at the military and weather station Alert located at 82°30′05″N and 62°20′20″W, on the northern tip of Ellesmere Island, Canada (about 450 nautical miles or 830 km from the North Pole), the sun begins to peek above the horizon for minutes per day at the end of February and each day it climbs higher and stays up longer; by 21 March, the sun is up for over 12 hours. On 6 April the sun rises at 0522 UTC and remains above the horizon until it sets below the horizon again on 6 September at 0335 UTC. By October 13 the sun is above the horizon for only 1 hour 30 minutes, and on October 14 it does not rise above the horizon at all and remains below the horizon until it rises again on 27 February. First light comes in late January, because the sky shows twilight, a glow on the horizon, for increasing hours each day for more than a month before the sun first appears with its disc above the horizon. From mid-November to mid-January, there is no twilight. In the weeks surrounding 21 June, in the northern polar region, the sun is at its highest elevation, appearing to circle the sky there without going below the horizon. Eventually, it does go below the horizon, for progressively longer periods each day until around the middle of October, when it disappears for the last time until the following February. For a few more weeks, "day" is marked by decreasing periods of twilight. Eventually, from mid-November to mid-January, there is no twilight and it is continuously dark. In mid-January the first faint wash of twilight briefly touches the horizon (for just minutes per day), and then twilight increases in duration with increasing brightness each day until sunrise at the end of February; from 6 April the sun remains continuously above the horizon until early September, after which it rises and sets each day until mid-October. Military campaigning seasons Seasonal weather and climate conditions can become important in the context of military operations. Seasonal reckoning in the military of any country or region tends to be very fluid and based mainly on short to medium term weather conditions that are independent of the calendar. For navies, the presence of accessible ports and bases can allow naval operations during certain (variable) seasons of the year. The availability of ice-free or warm-water ports can make navies much more effective. Thus Russia, historically navally constrained when confined to using Arkhangelsk (before the 18th century) and even Kronstadt, has a particular interest in maintaining access to Baltiysk, Vladivostok, and Sevastopol. Storm seasons or polar winter-weather conditions can inhibit surface warships at sea. Pre-modern armies, especially in Europe, tended to campaign in the summer months: peasant conscripts tended to melt away at harvest time, and it made little economic sense in an agricultural society to neglect the sowing season. Any modern war of manoeuvre profits from firm ground: summer can provide dry conditions suitable for marching and transport, and frozen snow in winter can also offer a reliable surface for a period, but spring thaws or autumn rains can inhibit campaigning. Rainy-season floods may make rivers temporarily impassable, and winter snow tends to block mountain passes. Taliban offensives are usually confined to the Afghanistan fighting season.
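The "polar day" and "polar night" described in the polar section above follow directly from the standard day-length formula, sketched below. The latitude used (82.5° N, roughly that of Alert) is taken from the text; neglecting atmospheric refraction and the Sun's finite disc is a simplifying assumption, so the real dates of first sunrise and last sunset differ slightly from what this gives.
import math
def day_length_hours(lat_deg, decl_deg):
    # Standard formula: cos(H) = -tan(latitude) * tan(declination),
    # where H is the hour angle of sunset. |value| > 1 means the Sun
    # never sets (midnight sun) or never rises (polar night).
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl_deg))
    if x <= -1.0:
        return 24.0                                   # continuous daylight
    if x >= 1.0:
        return 0.0                                    # continuous darkness
    return 2.0 * math.degrees(math.acos(x)) / 15.0    # 15 degrees of hour angle = 1 hour
for decl in (23.44, 0.0, -23.44):                     # June solstice, equinox, December solstice
    print(decl, round(day_length_hours(82.5, decl), 1))
# -> 24.0 h around the June solstice, ~12 h at the equinoxes, 0.0 h around the December solstice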
Physical sciences
Earth science
null
35072403
https://en.wikipedia.org/wiki/Red%20Deer%20Cave%20people
Red Deer Cave people
The Red Deer Cave people were a prehistoric population of modern humans known from bones dated to between about 17,830 and c. 11,500 years ago, found in Red Deer Cave (Maludong) and Longlin Cave in Yunnan and Guangxi Provinces, in Southwest China. The fossils exhibit a mix of archaic and modern features and were tentatively thought to represent a late survival of an archaic human species, or of a hybrid population of Denisovan hominin and modern human descent, or alternatively just "an unfortunate overinterpretation and misinterpretation of robust early modern humans, probably with affinities to modern Melanesians". A partial genome sequence in 2022 established that, despite their morphologically unusual features, they were modern humans related to contemporary populations in East and Southeast Asia, as well as the Americas. Evidence shows large deer were cooked in the Red Deer Cave, giving the people their name. Discovery and dating In 1979, petroleum geologist Li Changqing discovered a block of fine-grained sediments containing human remains, animal fossils, charcoal, and burnt clay from a cave near the town of De'e, Longlin County, Guangxi Province, China. These are categorised as belonging to a single specimen, LL-1. He promptly shipped them to Kunming in the neighbouring Yunnan Province for further study, whereupon a mandible (lower jawbone) and some body bones were extracted. In 1989, the Red Deer Cave near Mengzi City, Yunnan Province, was also excavated for human remains. The significance of these finds would not be realised until 2008, when Darren Curnoe, Ji Xueping, and colleagues began dating and describing existing collections of East Asian human fossils to better evaluate the poorly documented Asian archaeological record. They found the Red Deer Cave and Longlin people feature a suite of modern and archaic traits, yet lived surprisingly recently. Charcoal remnants inside the braincase were dated using uranium–thorium dating to only 17,830–13,290 years ago for various Red Deer Cave human specimens, and 11,510 years ago for LL-1. They restarted excavation of Longlin Cave in 2008, which yielded a few more human fossils, but most of the known material from the cave was recovered in the initial dig. In 2010, they were able to remove the rest of the skull and body fossils from the Red Deer Cave block. The dating of the bones has led to confusion and division among researchers. The anatomy of the bones, prior to successful DNA testing, suggested they were archaic humans, like early Homo erectus or Homo habilis, who lived around 1.5 million years ago in Africa. In 2013, Curnoe, Ji, and colleagues hypothesised that the cave people possibly represented a new species. In 2015, Curnoe, Ji, and colleagues suggested the Red Deer Cave people represent a hybrid population between early modern humans and one or several unidentifiable native archaic species, since they bear a peculiar combination of archaic and modern features not exhibited in any other specimen. Modern humans may have entered China as early as 130,000 years ago, as evidenced by the Zhirendong remains; though, owing to an unusual mosaic anatomy, the classification of such early specimens is debated. Later that year, they concluded the femur is far outside the range of variation for a modern human (that the Red Deer Cave people must be archaic).
They suggested they either represent the enigmatic Denisovans—a poorly known group of late-surviving Homo which was apparently dispersed across Asia, currently only identifiable by their genetic signature—or a long-removed lineage from an incredibly early dispersal of Homo out of Africa which had not evolved a characteristically human body plan, such as that represented by the Dmanisi hominins. The latter scenario has also been proposed for H. floresiensis, which survived rather recently as well, probably due to being isolated on the island of Flores. They speculated the Red Deer Cave people persisted for a similar reason, isolated in the mountains. Anatomy In spite of their relatively recent age, the fossils exhibit archaic human features. The Red Deer Cave dwellers had distinctive features that differ from modern humans, including: flat face, broad nose, jutting jaw with no chin, large molars, prominent brows, thick skull bones, and moderate-size brain. As with some other pre-modern humans, their body size was small, with an estimated mass of . Curnoe's previous works showed the bones and teeth were remarkably similar to those of archaic humans. The height of the mandibular symphysis at is within the range of modern humans, and the thickness at the range of Neanderthals and Middle Palaeolithic modern humans. The mental foramen (a hole in the mandible) is placed rather low at from the base, whereas modern humans and Neanderthals are normally above . The height of the first two molars and the thickness at that level is nearly identical to Upper Palaeolithic Asians, but the molars themselves are proportionally quite broad like those of Neanderthals or Middle Palaeolithic humans. The Red Deer Cave femur is quite archaic, retaining some traits which have been lost in all anatomically modern humans. The subtrochanteric region (just below the lesser trochanter) is circular in cross-section and has a low total and cortical bone area, reducing resistance to axial (straight down) loads. The midshaft diameter is rather narrow, which could indicate the individual was short-statured. The femur also has a moderate pilaster value index (measuring the robustness of the linea aspera), notably lower than in anatomically modern humans. In sum, the femur recalls far earlier Lower Pleistocene Homo. The reconstruction of the Maludong femur confirmed it was very small, with the outer shell, or walls, being very thin. The areas of the wall that were under high strain, and the femur neck, are relatively long; the place of muscle attachment for the primary flexor muscle of the hip (the lesser trochanter) was robust and faced strongly backward. Classification There was much initial speculation that the Red Deer Cave people represent an archaic human lineage, though researchers proved reluctant to classify them as an otherwise unknown, or little-known, species. The remains from Red Deer Cave bear morphological similarities to archaic hominid lineages such as Homo erectus and Homo habilis. In particular, the RDC specimen was seen as anatomically most similar in most of the characteristics to an individual known as KNM-ER 1481, a member of H. erectus, who lived 1.89 million years ago in Africa. The remains have been described as exhibiting some similarities to Australopithecus (i.e., more than to the genus Homo). It was also suggested that they might have resulted from mating between Denisovans and anatomically modern humans (AMH), or, alternatively, that they were an AMH population with unusual physiology.
One theory suggested that the Red Deer Cave people were early humans that settled in the region more than 100,000 years ago and became isolated. The high mountains and deep valleys are ideal for isolating species geographically, so it is possible for a species to migrate to the area and become genetically isolated over time. The environment and climate of Southwest China are also unique owing to the tectonic uplift of the Qinghai-Tibetan Plateau. The successful sequencing of ancient genomic DNA from the Red Deer Cave skull, reported in July 2022, showed that the skull belonged to an anatomically modern human population that was genetically affiliated with modern East Asians and, to a lesser extent, with Native American populations. Additionally, the woman belonged to maternal haplogroup M9, a genetic lineage that arose approximately 47,000–50,000 years ago, probably in South Asia.
Biology and health sciences
Homo
Biology
33496160
https://en.wikipedia.org/wiki/Mobile%20app
Mobile app
A mobile application or app is a computer program or software application designed to run on a mobile device such as a phone, tablet, or watch. Mobile applications often stand in contrast to desktop applications which are designed to run on desktop computers, and web applications which run in mobile web browsers rather than directly on the mobile device. Apps were originally intended for productivity assistance such as email, calendar, and contact databases, but the public demand for apps caused rapid expansion into other areas such as mobile games, factory automation, GPS and location-based services, order-tracking, and ticket purchases, so that there are now millions of apps available. Many apps require Internet access. Apps are generally downloaded from app stores, which are a type of digital distribution platform. The term "app", short for "application", has since become very popular; in 2010, it was listed as "Word of the Year" by the American Dialect Society. Apps are broadly classified into three types: native apps, hybrid apps, and web apps. Native applications are designed specifically for a mobile operating system, typically iOS or Android. Web apps are written in standard web technologies such as HTML5, CSS, and JavaScript and typically run in a browser. Hybrid apps are built using web technologies such as JavaScript, CSS, and HTML5 and function like web apps disguised in a native container. Overview Most mobile devices are sold with several apps bundled as pre-installed software, such as a web browser, email client, calendar, mapping program, and an app for buying music, other media, or more apps. Some pre-installed apps can be removed by an ordinary uninstall process, thus leaving more storage space for desired ones. Where the software does not allow this, some devices can be rooted to eliminate the undesired apps. Apps that are not preinstalled are usually available through distribution platforms called app stores. These may be operated by the owner of the device's mobile operating system, such as the App Store or Google Play Store; by the device manufacturers, such as the Galaxy Store and Huawei AppGallery; or by third parties, such as the Amazon Appstore and F-Droid. Usually, they are downloaded from the platform to a target device, but sometimes they can be downloaded to laptops or desktop computers. Apps can also be installed manually, for example by running an Android application package on Android devices. Some apps are freeware, while others have a price, which can be upfront or a subscription. Some apps also include microtransactions and/or advertising. In any case, the revenue is usually split between the application's creator and the app store. The same app may therefore cost a different amount depending on the mobile platform. Mobile apps were originally offered for general productivity and information retrieval, including email, calendar, contacts, the stock market and weather information. However, public demand and the availability of developer tools drove rapid expansion into other categories, such as those handled by desktop application software packages. As with other software, the explosion in number and variety of apps made discovery a challenge, which in turn led to the creation of a wide range of review, recommendation, and curation sources, including blogs, magazines, and dedicated online app-discovery services. In 2014 government regulatory agencies began trying to regulate and curate apps, particularly medical apps.
Some companies offer apps as an alternative method to deliver content, with certain advantages over an official website. With a growing number of mobile applications available at app stores and the improved capabilities of smartphones, people are downloading more applications to their devices. Usage of mobile apps has become increasingly prevalent across mobile phone users. A May 2012 comScore study reported that during the previous quarter, more mobile subscribers used apps than browsed the web on their devices: 51.1% vs. 49.8% respectively. Researchers found that usage of mobile apps strongly correlates with user context and depends on the user's location and the time of day. Mobile apps are playing an ever-increasing role within healthcare and, when designed and integrated correctly, can yield many benefits. Market research firm Gartner predicted that 102 billion apps would be downloaded in 2013 (91% of them free), which would generate $26 billion in the US, up 44.4% on 2012's US$18 billion. By Q2 2015, the Google Play and Apple stores alone generated $5 billion. An analyst report estimates that the app economy creates revenues of more than €10 billion per year within the European Union, while over 529,000 jobs have been created in 28 EU states due to the growth of the app market. Types Mobile applications may be classified by numerous methods. A common scheme is to distinguish native, web-based, and hybrid apps. Native app All apps targeted toward a particular mobile platform are known as native apps. Therefore, an app intended for an Apple device does not run on Android devices. As a result, most businesses develop apps for multiple platforms. While developing native apps, professionals incorporate best-in-class user interface modules. This accounts for better performance, consistency and good user experience. Users also benefit from wider access to the device's application programming interfaces and can switch from one app to another with little friction. The main purpose of creating such apps is to ensure the best performance for a specific mobile operating system. Web-based app A web-based app is implemented with the standard web technologies of HTML, CSS, and JavaScript. Internet access is typically required for proper behavior or for access to all features, compared with offline usage. Most, if not all, user data is stored in the cloud. The performance of these apps is similar to a web application running in a browser, which can be noticeably slower than the equivalent native app. It also may not have the same level of features as the native app. Hybrid app The concept of the hybrid app is a mix of native and web-based apps. Apps developed using Apache Cordova, Flutter, Xamarin, React Native, Sencha Touch, and other frameworks fall into this category. These are made to support web and native technologies across multiple platforms. Moreover, these apps are easier and faster to develop, using a single codebase that works across multiple mobile operating systems. Despite such advantages, hybrid apps generally exhibit lower performance and often fail to present the same look and feel across different mobile operating systems. Development Developing apps for mobile devices requires considering the constraints and features of these devices. Mobile devices run on battery and have less powerful processors than personal computers, but they also have more features such as location detection and cameras.
Developers also have to consider a wide array of screen sizes, hardware specifications and configurations because of intense competition in mobile software and changes within each of the platforms (although these issues can be overcome with mobile device detection). Mobile application development requires the use of specialized integrated development environments. Mobile apps are first tested within the development environment using emulators and later subjected to field testing. Emulators provide an inexpensive way to test applications on mobile phones to which developers may not have physical access. Mobile user interface (UI) design is also essential. Mobile UI considers constraints and contexts, screen, input and mobility as outlines for design. The user is often the focus of interaction with their device, and the interface entails components of both hardware and software. User input allows the users to manipulate a system, and the device's output allows the system to indicate the effects of the users' manipulation. Mobile UI design constraints include limited attention and form factors, such as a mobile device's screen size relative to a user's hand. Mobile UI contexts signal cues from user activity, such as location and scheduling, that can be shown through user interactions within a mobile application. Overall, the primary goal of mobile UI design is an understandable, user-friendly interface. Mobile UIs, or front-ends, rely on mobile back-ends to support access to enterprise systems. The mobile back-end facilitates data routing, security, authentication, authorization, working off-line, and service orchestration. This functionality is supported by a mix of middleware components including mobile app servers, Mobile Backend as a Service (MBaaS), and SOA infrastructure. Conversational interfaces display the computer interface and present interactions through text instead of graphic elements. They emulate conversations with real humans. There are two main types of conversational interfaces: voice assistants (like the Amazon Echo) and chatbots. Conversational interfaces are becoming particularly practical as users start to feel overwhelmed with mobile apps (a phenomenon known as "app fatigue"). David Limp, Amazon's senior vice president of devices, says in an interview with Bloomberg, "We believe the next big platform is voice." Distribution The three biggest app stores are Google Play for Android, App Store for iOS, and Microsoft Store for Windows 10, Windows 10 Mobile, and Xbox One. Google Play Google Play (formerly known as the Android Market) is an international online software store developed by Google for Android devices. It opened in October 2008. In July 2013, the number of apps downloaded via the Google Play Store surpassed 50 billion, out of the more than 1 million apps available. As of September 2016, according to Statista the number of apps available exceeded 2.4 million. Over 80% of apps in the Google Play Store are free to download. The store generated a revenue of 6 billion U.S. dollars in 2015. App Store Apple's App Store for iOS and iPadOS was not the first app distribution service, but it ignited the mobile revolution and was opened on July 10, 2008; as of September 2016, it reported over 140 billion downloads. The original AppStore was first demonstrated to Steve Jobs in 1993 by Jesse Tayler at NeXTWorld Expo. As of June 6, 2011, there were 425,000 apps available, which had been downloaded by 200 million iOS users.
During Apple's 2012 Worldwide Developers Conference, CEO Tim Cook announced that the App Store had 650,000 apps available to download, and that 30 billion apps had been downloaded from the store up to that date. From an alternative perspective, figures seen in July 2013 by the BBC from tracking service Adeven indicate over two-thirds of apps in the store are "zombies", barely ever installed by consumers. Microsoft Store Microsoft Store (formerly known as the Windows Store) was introduced by Microsoft in 2012 for its Windows 8 and Windows RT platforms. While it can also carry listings for traditional desktop programs certified for compatibility with Windows 8, it is primarily used to distribute "Windows Store apps"—which are primarily built for use on tablets and other touch-based devices (but can still be used with a keyboard and mouse, and on desktop computers and laptops). Others Amazon Appstore is an alternative application store for the Android operating system. It was opened in March 2011 and as of June 2015, the app store had nearly 334,000 apps. The Amazon Appstore's Android Apps can also be installed and run on BlackBerry 10 devices. BlackBerry World is the application store for BlackBerry 10 and BlackBerry OS devices. It opened in April 2009 as BlackBerry App World. Ovi (Nokia) for Nokia phones was launched internationally in May 2009. In May 2011, Nokia announced plans to rebrand its Ovi product line under the Nokia brand, and Ovi Store was renamed Nokia Store in October 2011. From January 2014, the Nokia Store no longer allowed developers to publish new apps or app updates for its legacy Symbian and MeeGo operating systems. Windows Phone Store was introduced by Microsoft for its Windows Phone platform, which was launched in October 2010. It has over 120,000 apps available. Samsung Apps was introduced in September 2009. As of October 2011, Samsung Apps reached 10 million downloads. The store is available in 125 countries and it offers apps for Windows Mobile, Android and Bada platforms. The Electronic AppWrapper was the first electronic distribution service to collectively provide encryption and purchasing electronically. F-Droid is a free and open-source Android app repository. Opera Mobile Store is a platform-independent app store for iOS, Java, BlackBerry OS, Symbian, Windows Mobile, and Android based mobile phones. It was launched internationally in March 2011. There are numerous other independent app stores for Android devices. Enterprise management Mobile application management (MAM) describes software and services responsible for provisioning and controlling access to internally developed and commercially available mobile apps used in business settings. The strategy is meant to offset the security risk of a Bring Your Own Device (BYOD) work strategy. When an employee brings a personal device into an enterprise setting, mobile application management enables the corporate IT staff to transfer required applications, control access to business data, and remove locally cached business data from the device if it is lost, or when its owner no longer works with the company. Containerization is an alternate approach to security. Rather than controlling an employee's entire device, containerization apps create isolated pockets separate from personal data. Company control of the device only extends to that separate container. App wrapping vs.
native app management Especially when employees bring their own devices (BYOD), mobile apps can be a significant security risk for businesses, because they transfer unprotected sensitive data to the Internet without the knowledge and consent of the users. Reports of stolen corporate data show how quickly corporate and personal data can fall into the wrong hands. Data theft is not just the loss of confidential information; it also makes companies vulnerable to attack and blackmail. Professional mobile application management helps companies protect their data. One option for securing corporate data is app wrapping. But there are also some disadvantages, such as copyright infringement or the loss of warranty rights. Functionality, productivity and user experience are particularly limited under app wrapping. The policies of a wrapped app cannot be changed. If required, it must be recreated from scratch, adding cost. An app wrapper is a mobile app made wholly from an existing website or platform, with few or no changes made to the underlying application. The "wrapper" is essentially a new management layer that allows developers to set up usage policies appropriate for app use. Examples of these policies include whether or not authentication is required, allowing data to be stored on the device, and enabling/disabling file sharing between users. Because most app wrappers are websites first, they often do not align with iOS or Android Developer guidelines. Alternatively, it is possible to offer native apps securely through enterprise mobility management. This enables more flexible IT management, as apps can be easily implemented and policies adjusted at any time.
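The kinds of usage policies mentioned above (whether authentication is required, whether data may be stored on the device, whether file sharing is enabled) can be illustrated with a very small sketch. The class, field, and function names below are invented for this example; they do not correspond to any real MAM product, wrapper SDK, or API.
from dataclasses import dataclass

@dataclass
class WrapperPolicy:
    # Hypothetical policy fields mirroring the examples given in the text.
    require_authentication: bool = True
    allow_local_storage: bool = False
    allow_file_sharing: bool = False

def storage_permitted(policy: WrapperPolicy, user_authenticated: bool) -> bool:
    # Local caching of business data is allowed only if the policy permits it
    # and, when authentication is required, the user has actually signed in.
    if policy.require_authentication and not user_authenticated:
        return False
    return policy.allow_local_storage

corporate_policy = WrapperPolicy(require_authentication=True, allow_local_storage=True)
print(storage_permitted(corporate_policy, user_authenticated=False))   # False
print(storage_permitted(corporate_policy, user_authenticated=True))    # True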
Technology
Computer software
null
46383767
https://en.wikipedia.org/wiki/Agrometeorology
Agrometeorology
Agrometeorology is the study of weather and the use of weather and climate information to enhance or expand agricultural crops or to increase crop production. Agrometeorology mainly involves the interaction between meteorological and hydrological factors on the one hand and agriculture, which encompasses horticulture, animal husbandry, and forestry, on the other. Description It is an interdisciplinary, holistic science forming a bridge between physical and biological sciences and beyond. It deals with a complex system involving soil, plant, atmosphere, agricultural management options, and others, which are interacting dynamically on various spatial and temporal scales. Specifically, the fully coupled soil-plant-atmosphere system has to be well understood in order to develop reasonable operational applications or recommendations for stakeholders. For these reasons, a comprehensive analysis of cause-effect relationships and principles that describe the influence of the state of the atmosphere, plants, and soil on different aspects of agricultural production, as well as the nature and importance of feedback between these elements of the system, is necessary. Agrometeorological methods therefore use information and data from different key sciences such as soil physics and chemistry, hydrology, meteorology, crop and animal physiology and phenology, agronomy, and others. Observed information is often combined in models of varying complexity, focused on various components of the system such as mass balances (i.e. soil carbon, nutrients, and water), biomass production, crop growth and yield, and crop or pest phenology, in order to detect sensitivities or potential responses of the soil-biosphere-atmosphere system. Limitations and research Model applications still involve many uncertainties, which calls for further improvement in the description of system processes. A better quality of operational applications at various scales (monitoring, forecasting, warning, recommendations, etc.) is crucial for stakeholders. For example, new methods for spatial applications involve GIS and remote sensing for spatial data presentation and generation. Furthermore, tailor-made products and information transfer are critical to allow effective management decisions in the short and long term. These should cover sustainability and enhancement strategies (including risk management, mitigation and adaptation) considering climate variability and change.
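The soil water mass balance mentioned above can be illustrated with a deliberately simple one-layer "bucket" model. Everything in the sketch below (the field-capacity value, the daily loop, the made-up rainfall and evapotranspiration figures, and the function name) is an illustrative assumption, not an operational agrometeorological model.
def soil_water_series(rain_mm, et_mm, capacity_mm=100.0, start_mm=50.0):
    # Track plant-available soil water day by day.
    # rain_mm and et_mm are daily precipitation and evapotranspiration (mm), same length.
    store = start_mm
    series = []
    for rain, et in zip(rain_mm, et_mm):
        store = store + rain - et
        store = max(0.0, min(capacity_mm, store))   # excess above capacity is treated as runoff/drainage
        series.append(round(store, 1))
    return series

print(soil_water_series([0, 12, 0, 0, 25, 0], [4, 4, 5, 5, 3, 4]))
Even a toy balance like this shows the basic behaviour real models refine: the store is drawn down on dry days, recharged by rain, and capped by the soil's storage capacity.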
Technology
Academic disciplines
null
44984325
https://en.wikipedia.org/wiki/Replication%20crisis
Replication crisis
The replication crisis is an ongoing methodological crisis in which the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge. The replication crisis is frequently discussed in relation to psychology and medicine, where considerable efforts have been undertaken to reinvestigate classic results, to determine whether they are reliable, and if they turn out not to be, the reasons for the failure. Data strongly indicates that other natural and social sciences are affected as well. The phrase replication crisis was coined in the early 2010s as part of a growing awareness of the problem. Considerations of causes and remedies have given rise to a new scientific discipline, metascience, which uses methods of empirical research to examine empirical research practice. Considerations about reproducibility can be placed into two categories. Reproducibility in the narrow sense refers to re-examining and validating the analysis of a given set of data. Replication refers to repeating the experiment or study to obtain new, independent data with the goal of reaching the same or similar conclusions. Background Replication Replication has been called "the cornerstone of science". Environmental health scientist Stefan Schmidt began a 2009 review with a description of replication, but there is limited consensus on how to define replication and potentially related concepts. A number of types of replication have been identified: Direct or exact replication, where an experimental procedure is repeated as closely as possible. Systematic replication, where an experimental procedure is largely repeated, with some intentional changes. Conceptual replication, where a finding or hypothesis is tested using a different procedure. Conceptual replication allows testing for generalizability and veracity of a result or hypothesis. Reproducibility can also be distinguished from replication, as referring to reproducing the same results using the same data set. Reproducibility of this type is why many researchers make their data available to others for testing. The replication crisis does not necessarily mean these fields are unscientific. Rather, failed replication is part of the scientific process in which old ideas or those that cannot withstand careful scrutiny are pruned, although this pruning process is not always effective. A hypothesis is generally considered to be supported when the results match the predicted pattern and that pattern of results is found to be statistically significant. Results are considered significant whenever the relative frequency of the observed pattern falls below an arbitrarily chosen value (i.e. the significance level) when assuming the null hypothesis is true. This generally answers the question of how unlikely results would be if no difference existed at the level of the statistical population. If the probability associated with the test statistic falls below the chosen significance level, the results are considered statistically significant. This probability is reported as, for example, p < 0.05, where p (typically referred to as the "p-value") is the probability level.
This should result in 5% of hypotheses that are supported being false positives (an incorrect hypothesis being erroneously found correct), assuming the studies meet all of the statistical assumptions. Some fields use smaller p-values, such as p < 0.01 (1% chance of a false positive) or p < 0.001 (0.1% chance of a false positive). But a smaller chance of a false positive often requires greater sample sizes or a greater chance of a false negative (a correct hypothesis being erroneously found incorrect). Although p-value testing is the most commonly used method, it is not the only method. Statistics Certain terms commonly used in discussions of the replication crisis have technically precise meanings, which are presented here. In the most common case, null hypothesis testing, there are two hypotheses, a null hypothesis H0 and an alternative hypothesis H1. The null hypothesis is typically of the form "X and Y are statistically independent". For example, the null hypothesis might be "taking drug X does not change 1-year recovery rate from disease Y", and the alternative hypothesis is that it does change. As testing for full statistical independence is difficult, the full null hypothesis is often reduced to a simplified null hypothesis "the effect size is 0", where "effect size" is a real number that is 0 if the full null hypothesis is true, and the larger the effect size is, the more the null hypothesis is false. For example, if X is binary, then the effect size might be defined as the change in the expectation of Y upon a change of X: effect size = E[Y | X = 1] - E[Y | X = 0]. Note that the effect size as defined above might be zero even if X and Y are not independent, such as when X changes the spread of Y but not its expectation. Since different definitions of "effect size" capture different ways for X and Y to be dependent, there are many different definitions of effect size. In practice, effect sizes cannot be directly observed, but must be measured by statistical estimators. For example, the above definition of effect size is often measured by Cohen's d estimator. The same effect size might have multiple estimators, as they have tradeoffs between efficiency, bias, variance, etc. This further increases the number of possible statistical quantities that can be computed on a single dataset. When an estimator for an effect size is used for statistical testing, it is called a test statistic. A null hypothesis test is a decision procedure which takes in some data and outputs either H0 or H1. If it outputs H1, it is usually stated as "there is a statistically significant effect" or "the null hypothesis is rejected". Often, the statistical test is a (one-sided) threshold test, which is structured as follows: Gather data x. Compute a test statistic t(x) for the data. Compare the test statistic against a critical value/threshold c. If t(x) > c, then output H1; else, output H0. A two-sided threshold test is similar, but with two thresholds, such that it outputs H1 if either t(x) > c_high or t(x) < c_low. There are four possible outcomes of a null hypothesis test: false negative, true negative, false positive, true positive. A false negative means that H1 is true, but the test outcome is H0; a true negative means that H0 is true, and the test outcome is H0, etc. Significance level, false positive rate, or the alpha level, is the probability of finding the alternative to be true when the null hypothesis is true: α = Pr(test outputs H1 | H0 is true). For example, when the test is a one-sided threshold test, then α = Pr(t(x) > c | x ~ H0), where x ~ H0 means "the data is sampled from the null hypothesis".
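A small simulation makes the significance level just defined concrete: when the data really do come from the null hypothesis, a one-sided threshold test with the usual critical value of 1.645 rejects in roughly 5% of trials. The sketch below is illustrative only; the sample size, the normal data model, and the critical value are standard textbook choices and are not taken from the article.
import random
import statistics

def t_statistic(sample, sigma=1.0):
    # Standardized sample mean; plays the role of the test statistic t(x).
    return statistics.mean(sample) / (sigma / len(sample) ** 0.5)

def threshold_test(sample, c=1.645):
    # One-sided threshold test: output H1 (True) when t(x) > c, else H0 (False).
    return t_statistic(sample) > c

rng = random.Random(0)
trials = 20000
false_positives = sum(
    threshold_test([rng.gauss(0.0, 1.0) for _ in range(25)])   # H0: true mean is 0
    for _ in range(trials)
)
print("false positive rate:", false_positives / trials)         # close to alpha = 0.05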
Statistical power, or the true positive rate, is the probability of finding the alternative to be true when the alternative hypothesis is true:
power = 1 − β = Pr(test outputs H1 | H1 is true),
where β is also called the false negative rate. For example, when the test is a one-sided threshold test, then 1 − β = Pr(t(D) > c | D ~ H1). Given a statistical test and a data set D, the corresponding p-value is the probability that the test statistic would be at least as extreme as the one observed, conditional on H0. For example, for a one-sided threshold test, p = Pr(t(D') ≥ t(D) | D' ~ H0). If the null hypothesis is true, then the p-value is distributed uniformly on [0, 1]. Otherwise, it is typically peaked at 0 and roughly exponentially decaying, though the precise shape of the p-value distribution depends on what the alternative hypothesis is. Since the p-value is distributed uniformly on [0, 1] conditional on the null hypothesis, one may construct a statistical test with any significance level α by simply computing the p-value, then outputting H1 if p < α. This is usually stated as "the null hypothesis is rejected at significance level α", or "p < α", as in "smoking is correlated with cancer (p < 0.001)".
History
The beginning of the replication crisis can be traced to a number of events in the early 2010s. Philosopher of science and social epistemologist Felipe Romero identified four events that can be considered precursors to the ongoing crisis:
Controversies around social priming research: In the early 2010s, the well-known "elderly-walking" study by social psychologist John Bargh and colleagues failed to replicate in two direct replications. This experiment was part of a series of three studies that had been widely cited throughout the years, was regularly taught in university courses, and had inspired a large number of conceptual replications. Failures to replicate the study led to much controversy and a heated debate involving the original authors. Notably, many of the conceptual replications of the original studies also failed to replicate in subsequent direct replications.
Controversies around experiments on extrasensory perception: Social psychologist Daryl Bem conducted a series of experiments supposedly providing evidence for the controversial phenomenon of extrasensory perception. Bem was highly criticized for his study's methodology, and upon reanalysis of the data, no evidence was found for the existence of extrasensory perception. The experiment also failed to replicate in subsequent direct replications. According to Romero, what the community found particularly upsetting was that many of the flawed procedures and statistical tools used in Bem's studies were part of common research practice in psychology.
Amgen and Bayer reports on lack of replicability in biomedical research: Scientists from the biotech companies Amgen and Bayer Healthcare reported alarmingly low replication rates (11–20%) of landmark findings in preclinical oncological research.
Publication of studies on p-hacking and questionable research practices: Since the late 2000s, a number of studies in metascience showed how commonly adopted practices in many scientific fields, such as exploiting the flexibility of the process of data collection and reporting, could greatly increase the probability of false positive results. These studies suggested that a significant proportion of the published literature in several scientific fields could be nonreplicable research.
This series of events generated a great deal of skepticism about the validity of existing research in light of widespread methodological flaws and failures to replicate findings.
This led prominent scholars to declare a "crisis of confidence" in psychology and other fields, and the ensuing situation came to be known as the "replication crisis". Although the beginning of the replication crisis can be traced to the early 2010s, some authors point out that concerns about replicability and research practices in the social sciences had been expressed much earlier. Romero notes that authors voiced concerns about the lack of direct replications in psychological research in the late 1960s and early 1970s. He also writes that certain studies in the 1990s were already reporting that journal editors and reviewers are generally biased against publishing replication studies. In the social sciences, the blog Data Colada (whose three authors coined the term "p-hacking" in a 2014 paper) has been credited with contributing to the start of the replication crisis. University of Virginia professor and cognitive psychologist Barbara A. Spellman has written that many criticisms of research practices and concerns about replicability of research are not new. She reports that between the late 1950s and the 1990s, scholars were already expressing concerns about a possible crisis of replication, a suspiciously high rate of positive findings, questionable research practices (QRPs), the effects of publication bias, issues with statistical power, and bad standards of reporting. Spellman also identifies reasons that the reiteration of these criticisms and concerns in recent years led to a full-blown crisis and challenges to the status quo. First, technological improvements facilitated conducting and disseminating replication studies, and analyzing large swaths of literature for systemic problems. Second, the research community's increasing size and diversity made the work of established members more easily scrutinized by other community members unfamiliar with them. According to Spellman, these factors, coupled with increasingly limited resources and misaligned incentives for doing scientific work, led to a crisis in psychology and other fields. According to Andrew Gelman, the works of Paul Meehl, Jacob Cohen, and Tversky and Kahneman in the 1960s-70s were early warnings of replication crisis. In discussing the origins of the problem, Kahneman himself noted historical precedents in subliminal perception and dissonance reduction replication failures. It had been repeatedly pointed out since 1962 that most psychological studies have low power (true positive rate), but low power persisted for 50 years, indicating a structural and persistent problem in psychological research. Prevalence In psychology Several factors have combined to put psychology at the center of the conversation. Some areas of psychology once considered solid, such as social priming and ego depletion, have come under increased scrutiny due to failed replications. Much of the focus has been on social psychology, although other areas of psychology such as clinical psychology, developmental psychology, and educational research have also been implicated. In August 2015, the first open empirical study of reproducibility in psychology was published, called The Reproducibility Project: Psychology. Coordinated by psychologist Brian Nosek, researchers redid 100 studies in psychological science from three high-ranking psychology journals (Journal of Personality and Social Psychology, Journal of Experimental Psychology: Learning, Memory, and Cognition, and Psychological Science). 
97 of the original studies had significant effects, but of those 97, only 36% of the replications yielded significant findings (p value below 0.05). The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies. The same paper examined the reproducibility rates and effect sizes by journal and discipline. Study replication rates were 23% for the Journal of Personality and Social Psychology, 48% for Journal of Experimental Psychology: Learning, Memory, and Cognition, and 38% for Psychological Science. Studies in the field of cognitive psychology had a higher replication rate (50%) than studies in the field of social psychology (25%). Of the 64% of non-replications, only 25% disproved the original result (at statistical significance). The other 49% were inconclusive, neither supporting nor contradicting the original result. This is because many replications were underpowered, with a sample 2.5 times smaller than the original. A study published in 2018 in Nature Human Behaviour replicated 21 social and behavioral science papers from Nature and Science, finding that only about 62% could successfully reproduce original results. Similarly, in a study conducted under the auspices of the Center for Open Science, a team of 186 researchers from 60 different laboratories (representing 36 different nationalities from six different continents) conducted replications of 28 classic and contemporary findings in psychology. The study's focus was not only whether the original papers' findings replicated but also the extent to which findings varied as a function of variations in samples and contexts. Overall, 50% of the 28 findings failed to replicate despite massive sample sizes. But if a finding replicated, then it replicated in most samples. If a finding was not replicated, then it failed to replicate with little variation across samples and contexts. This evidence is inconsistent with a proposed explanation that failures to replicate in psychology are likely due to changes in the sample between the original and replication study. Results of a 2022 study suggest that many earlier brain–phenotype studies ("brain-wide association studies" (BWAS)) produced invalid conclusions as the replication of such studies requires samples from thousands of individuals due to small effect sizes. In medicine Of 49 medical studies from 1990 to 2003 with more than 1000 citations, 92% found that the studied therapies were effective. Of these studies, 16% were contradicted by subsequent studies, 16% had found stronger effects than did subsequent studies, 44% were replicated, and 24% remained largely unchallenged. A 2011 analysis by researchers with pharmaceutical company Bayer found that, at most, a quarter of Bayer's in-house findings replicated the original results. But the analysis of Bayer's results found that the results that did replicate could often be successfully used for clinical applications. In a 2012 paper, C. Glenn Begley, a biotech consultant working at Amgen, and Lee Ellis, a medical researcher at the University of Texas, found that only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies. In late 2021, The Reproducibility Project: Cancer Biology examined 53 top papers about cancer published between 2010 and 2012 and showed that among studies that provided sufficient information to be redone, the effect sizes were 85% smaller on average than the original findings. 
A survey of cancer researchers found that half of them had been unable to reproduce a published result. Another report estimated that almost half of randomized controlled trials contained flawed data (based on the analysis of anonymized individual participant data (IPD) from more than 150 trials). In other disciplines In nutrition science In nutrition science, for most food ingredients, there were studies that found that the ingredient has an effect on cancer risk. Specifically, out of a random sample of 50 ingredients from a cookbook, 80% had articles reporting on their cancer risk. Statistical significance decreased for meta-analyses. In economics Economics has lagged behind other social sciences and psychology in its attempts to assess replication rates and increase the number of studies that attempt replication. A 2016 study in the journal Science replicated 18 experimental studies published in two leading economics journals, The American Economic Review and the Quarterly Journal of Economics, between 2011 and 2014. It found that about 39% failed to reproduce the original results. About 20% of studies published in The American Economic Review are contradicted by other studies despite relying on the same or similar data sets. A study of empirical findings in the Strategic Management Journal found that about 30% of 27 retested articles showed statistically insignificant results for previously significant findings, whereas about 4% showed statistically significant results for previously insignificant findings. In water resource management A 2019 study in Scientific Data estimated with 95% confidence that of 1,989 articles on water resources and management published in 2017, study results might be reproduced for only 0.6% to 6.8%, largely because the articles did not provide sufficient information to allow for replication. Across fields A 2016 survey by Nature on 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers have tried and failed to reproduce another scientist's experiment results (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others), and more than half have failed to reproduce their own experiments. But fewer than 20% had been contacted by another researcher unable to reproduce their work. The survey found that fewer than 31% of researchers believe that failure to reproduce results means that the original result is probably wrong, although 52% agree that a significant replication crisis exists. Most researchers said they still trust the published literature. In 2010, Fanelli (2010) found that 91.5% of psychiatry/psychology studies confirmed the effects they were looking for, and concluded that the odds of this happening (a positive result) was around five times higher than in fields such as astronomy or geosciences. Fanelli argued that this is because researchers in "softer" sciences have fewer constraints to their conscious and unconscious biases. Early analysis of result-blind peer review, which is less affected by publication bias, has estimated that 61% of result-blind studies in biomedicine and psychology have led to null results, in contrast to an estimated 5% to 20% in earlier research. In 2021, a study conducted by University of California, San Diego found that papers that cannot be replicated are more likely to be cited. 
Nonreplicable publications are often cited more even after a replication study is published. Causes There are many proposed causes for the replication crisis. Historical and sociological causes The replication crisis may be triggered by the "generation of new data and scientific publications at an unprecedented rate" that leads to "desperation to publish or perish" and failure to adhere to good scientific practice. Predictions of an impending crisis in the quality-control mechanism of science can be traced back several decades. Derek de Solla Price—considered the father of scientometrics, the quantitative study of science—predicted in 1963 that science could reach "senility" as a result of its own exponential growth. Some present-day literature seems to vindicate this "overflow" prophecy, lamenting the decay in both attention and quality. Historian Philip Mirowski argues that the decline of scientific quality can be connected to its commodification, especially spurred by major corporations' profit-driven decision to outsource their research to universities and contract research organizations. Social systems theory, as expounded in the work of German sociologist Niklas Luhmann, inspires a similar diagnosis. This theory holds that each system, such as economy, science, religion, and media, communicates using its own code: true and false for science, profit and loss for the economy, news and no-news for the media, and so on. According to some sociologists, science's mediatization, commodification, and politicization, as a result of the structural coupling among systems, have led to a confusion of the original system codes. Problems with the publication system in science Publication bias A major cause of low reproducibility is the publication bias stemming from the fact that statistically non-significant results and seemingly unoriginal replications are rarely published. Only a very small proportion of academic journals in psychology and neurosciences explicitly welcomed submissions of replication studies in their aim and scope or instructions to authors. This does not encourage reporting on, or even attempts to perform, replication studies. Among 1,576 researchers Nature surveyed in 2016, only a minority had ever attempted to publish a replication, and several respondents who had published failed replications noted that editors and reviewers demanded that they play down comparisons with the original studies. An analysis of 4,270 empirical studies in 18 business journals from 1970 to 1991 reported that less than 10% of accounting, economics, and finance articles and 5% of management and marketing articles were replication studies. Publication bias is augmented by the pressure to publish and the author's own confirmation bias, and is an inherent hazard in the field, requiring a certain degree of skepticism on the part of readers. Publication bias leads to what psychologist Robert Rosenthal calls the "file drawer effect". The file drawer effect is the idea that as a consequence of the publication bias, a significant number of negative results are not published. According to philosopher of science Felipe Romero, this tends to produce "misleading literature and biased meta-analytic studies", and when publication bias is considered along with the fact that a majority of tested hypotheses might be false a priori, it is plausible that a considerable proportion of research findings might be false positives, as shown by metascientist John Ioannidis. 
In turn, a high proportion of false positives in the published literature can explain why many findings are nonreproducible. Another publication bias is that studies that do not reject the null hypothesis are scrutinized asymmetrically. For example, they are likely to be rejected as being difficult to interpret or having a Type II error. Studies that do reject the null hypothesis are not likely to be rejected for those reasons. In popular media, there is another element of publication bias: the desire to make research accessible to the public led to oversimplification and exaggeration of findings, creating unrealistic expectations and amplifying the impact of non-replications. In contrast, null results and failures to replicate tend to go unreported. This explanation may apply to power posing's replication crisis. Mathematical errors Even high-impact journals have a significant fraction of mathematical errors in their use of statistics. For example, 11% of statistical results published in Nature and BMJ in 2001 are "incongruent", meaning that the reported p-value is mathematically different from what it should be if it were correctly calculated from the reported test statistic. These errors were likely from typesetting, rounding, and transcription errors. Among 157 neuroscience papers published in five top-ranking journals that attempt to show that two experimental effects are different, 78 erroneously tested instead for whether one effect is significant while the other is not, and 79 correctly tested for whether their difference is significantly different from 0. "Publish or perish" culture The consequences for replicability of the publication bias are exacerbated by academia's "publish or perish" culture. As explained by metascientist Daniele Fanelli, "publish or perish" culture is a sociological aspect of academia whereby scientists work in an environment with very high pressure to have their work published in recognized journals. This is the consequence of the academic work environment being hypercompetitive and of bibliometric parameters (e.g., number of publications) being increasingly used to evaluate scientific careers. According to Fanelli, this pushes scientists to employ a number of strategies aimed at making results "publishable". In the context of publication bias, this can mean adopting behaviors aimed at making results positive or statistically significant, often at the expense of their validity (see QRPs, section 4.3). According to Center for Open Science founder Brian Nosek and his colleagues, "publish or perish" culture created a situation whereby the goals and values of single scientists (e.g., publishability) are not aligned with the general goals of science (e.g., pursuing scientific truth). This is detrimental to the validity of published findings. Philosopher Brian D. Earp and psychologist Jim A. C. Everett argue that, although replication is in the best interests of academics and researchers as a group, features of academic psychological culture discourage replication by individual researchers. They argue that performing replications can be time-consuming, and take away resources from projects that reflect the researcher's original thinking. They are harder to publish, largely because they are unoriginal, and even when they can be published they are unlikely to be viewed as major contributions to the field. Replications "bring less recognition and reward, including grant money, to their authors". 
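The "incongruent" results described under Mathematical errors above can be detected automatically by recomputing the p-value implied by a reported test statistic and its degrees of freedom, which is what tools such as statcheck do at scale. The sketch below is a minimal, hypothetical version of such a consistency check; the reported numbers are invented.

```python
from scipy import stats

def p_from_t(t_value, df, two_sided=True):
    """Recompute the p-value implied by a reported t statistic and degrees of freedom."""
    p = stats.t.sf(abs(t_value), df)
    return 2 * p if two_sided else p

def is_congruent(reported_p, t_value, df, tol=0.01):
    """Flag a result as incongruent if the recomputed p-value differs materially."""
    return abs(p_from_t(t_value, df) - reported_p) <= tol

# Hypothetical reported results: (t, degrees of freedom, reported p).
reports = [(2.31, 28, 0.028), (2.31, 28, 0.003)]  # the second p does not match its t
for t, df, p in reports:
    verdict = "congruent" if is_congruent(p, t, df) else "incongruent"
    print(f"t({df}) = {t}, reported p = {p}: {verdict} (recomputed p = {p_from_t(t, df):.3f})")
```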
In his 1971 book Scientific Knowledge and Its Social Problems, philosopher and historian of science Jerome R. Ravetz predicted that science—in its progression from "little" science composed of isolated communities of researchers to "big" science or "techno-science"—would suffer major problems in its internal system of quality control. He recognized that the incentive structure for modern scientists could become dysfunctional, creating perverse incentives to publish any findings, however dubious. According to Ravetz, quality in science is maintained only when there is a community of scholars, linked by a set of shared norms and standards, who are willing and able to hold each other accountable. Standards of reporting Certain publishing practices also make it difficult to conduct replications and to monitor the severity of the reproducibility crisis, for articles often come with insufficient descriptions for other scholars to reproduce the study. The Reproducibility Project: Cancer Biology showed that of 193 experiments from 53 top papers about cancer published between 2010 and 2012, only 50 experiments from 23 papers have authors who provided enough information for researchers to redo the studies, sometimes with modifications. None of the 193 papers examined had its experimental protocols fully described and replicating 70% of experiments required asking for key reagents. The aforementioned study of empirical findings in the Strategic Management Journal found that 70% of 88 articles could not be replicated due to a lack of sufficient information for data or procedures. In water resources and management, most of 1,987 articles published in 2017 were not replicable because of a lack of available information shared online. In studies of event-related potentials, only two-thirds the information needed to replicate a study were reported in a sample of 150 studies, highlighting that there are substantial gaps in reporting. Procedural bias By the Duhem-Quine thesis, scientific results are interpreted by both a substantive theory and a theory of instruments. For example, astronomical observations depend both on the theory of astronomical objects and the theory of telescopes. A large amount of non-replicable research might accumulate if there is a bias of the following kind: faced with a null result, a scientist prefers to treat the data as saying the instrument is insufficient; faced with a non-null result, a scientist prefers to accept the instrument as good, and treat the data as saying something about the substantive theory. Cultural evolution Smaldino and McElreath proposed a simple model for the cultural evolution of scientific practice. Each lab randomly decides to produce novel research or replication research, at different fixed levels of false positive rate, true positive rate, replication rate, and productivity (its "traits"). A lab might use more "effort", making the ROC curve more convex but decreasing productivity. A lab accumulates a score over its lifetime that increases with publications and decreases when another lab fails to replicate its results. At regular intervals, a random lab "dies" and another "reproduces" a child lab with a similar trait as its parent. Labs with higher scores are more likely to reproduce. Under certain parameter settings, the population of labs converge to maximum productivity even at the price of very high false positive rates. 
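The selection dynamic that Smaldino and McElreath describe can be sketched with a toy agent-based simulation. The code below is a hypothetical, much-simplified rendering of that idea rather than their actual model: all parameter values, the payoff rule, and the mutation scheme are invented for illustration. Under these settings, labs with higher false positive rates gain more publications than they lose to failed replications, so selection gradually raises the population's average false positive rate.

```python
import random

random.seed(42)

N_LABS, GENERATIONS, STUDIES_PER_GEN = 50, 500, 20
BASE_RATE = 0.1           # share of tested hypotheses that are actually true
POWER = 0.8               # true positive rate, held fixed for simplicity
REPLICATION_PROB = 0.02   # chance that a published positive is independently replicated
REPLICATION_PENALTY = 10  # payoff lost when a published finding fails to replicate

# Each lab is characterised by a single trait: its false positive rate (alpha).
alphas = [random.uniform(0.01, 0.30) for _ in range(N_LABS)]
initial_mean = sum(alphas) / N_LABS

def generation_payoff(alpha):
    """One generation of studies for a lab: +1 per positive (publishable) result,
    minus a penalty whenever a published result fails an outside replication."""
    payoff = 0.0
    for _ in range(STUDIES_PER_GEN):
        hypothesis_true = random.random() < BASE_RATE
        positive = random.random() < (POWER if hypothesis_true else alpha)
        if positive:
            payoff += 1.0
            if random.random() < REPLICATION_PROB:
                # For simplicity the replicating lab is assumed to use the same alpha.
                replicated = random.random() < (POWER if hypothesis_true else alpha)
                if not replicated:
                    payoff -= REPLICATION_PENALTY
    return payoff

for _ in range(GENERATIONS):
    payoffs = [generation_payoff(a) for a in alphas]
    # Selection: the least successful lab "dies" and is replaced by a slightly
    # mutated copy of the most successful lab's trait.
    worst, best = payoffs.index(min(payoffs)), payoffs.index(max(payoffs))
    alphas[worst] = min(max(alphas[best] + random.gauss(0.0, 0.01), 0.001), 0.5)

final_mean = sum(alphas) / N_LABS
print(f"mean false positive rate: {initial_mean:.2f} initially -> {final_mean:.2f} after selection")
```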
Questionable research practices and fraud Questionable research practices (QRPs) are intentional behaviors that capitalize on the gray area of acceptable scientific behavior or exploit the researcher degrees of freedom (researcher DF), which can contribute to the irreproducibility of results by increasing the probability of false positive results. Researcher DF are seen in hypothesis formulation, design of experiments, data collection and analysis, and reporting of research. But in many analyst studies involving several researchers or research teams analyzing the same data, analysts obtain different and sometimes conflicting results, even without incentives to report statistically significant findings. This is because research design and data analysis entail numerous decisions that are not sufficiently constrained by a field’s best practices and statistical methodologies. As a result, researcher DF can lead to situations where some failed replication attempts use a different, yet plausible, research design or statistical analysis; such studies do not necessarily undermine previous findings. Multiverse analysis, a method that makes inferences based on all plausible data-processing pipelines, provides a solution to the problem of analytical flexibility. Instead, estimating many statistical models (known as data dredging), selective reporting only statistically significant findings, and HARKing (hypothesizing after results are known) are examples of questionable research practices. In medicine, irreproducible studies have six features in common: investigators not being blinded to the experimental versus the control arms; failure to repeat experiments; lack of positive and negative controls; failing to report all the data; inappropriate use of statistical tests; and use of reagents that were not appropriately validated. QRPs do not include more explicit violations of scientific integrity, such as data falsification. Fraudulent research does occur, as in the case of scientific fraud by social psychologist Diederik Stapel, cognitive psychologist Marc Hauser and social psychologist Lawrence Sanna, but it appears to be uncommon. Prevalence According to IU professor Ernest O’Boyle and psychologist Martin Götz, around 50% of researchers surveyed across various studies admitted engaging in HARKing. In a survey of 2,000 psychologists by behavioral scientist Leslie K. John and colleagues, around 94% of psychologists admitted having employed at least one QRP. More specifically, 63% admitted failing to report all of a study's dependent measures, 28% to report all of a study's conditions, and 46% to selectively reporting studies that produced the desired pattern of results. In addition, 56% admitted having collected more data after having inspected already collected data, and 16% to having stopped data collection because the desired result was already visible. According to biotechnology researcher J. Leslie Glick's estimate in 1992, 10% to 20% of research and development studies involved either QRPs or outright fraud. The methodology used to estimate QRPs has been contested, and more recent studies suggested lower prevalence rates on average. A 2009 meta-analysis found that 2% of scientists across fields admitted falsifying studies at least once and 14% admitted knowing someone who did. Such misconduct was, according to one study, reported more frequently by medical researchers than by others. 
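The effect of data dredging and selective reporting on the false positive rate, described above, can be demonstrated directly. The following hypothetical sketch simulates a researcher who, for data generated entirely under the null hypothesis, tries several arbitrary analysis choices (here, simply several independent outcome measures) and reports only the smallest p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N_OUTCOMES, N_OBS, N_SIMS, ALPHA = 10, 30, 5_000, 0.05

def best_p_value():
    """Test N_OUTCOMES unrelated outcome measures (all pure noise) and keep the best p."""
    p_values = [
        stats.ttest_1samp(rng.normal(0.0, 1.0, N_OBS), 0.0).pvalue
        for _ in range(N_OUTCOMES)
    ]
    return min(p_values)

selective_rate = np.mean([best_p_value() < ALPHA for _ in range(N_SIMS)])
print(f"nominal false positive rate: {ALPHA}")
print(f"rate when only the best of {N_OUTCOMES} analyses is reported: {selective_rate:.2f}")
# With 10 independent looks, roughly 1 - 0.95**10, about 0.40, of "studies" report a false positive.
```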
Statistical issues
Low statistical power
According to Deakin University professor Tom Stanley and colleagues, one plausible reason studies fail to replicate is low statistical power. This happens for three reasons. First, a replication study with low power is unlikely to succeed since, by definition, it has a low probability of detecting a true effect. Second, if the original study has low power, it will yield biased effect size estimates; when conducting an a priori power analysis for the replication study, this will result in underestimation of the required sample size. Third, if the original study has low power, the post-study odds of a statistically significant finding reflecting a true effect are quite low. It is therefore likely that a replication attempt of the original study would fail. Mathematically, the probability of replicating a previous publication that rejected a null hypothesis in favor of an alternative is a weighted average of the power (if the alternative is in fact true) and the significance level (if the null is in fact true), and is therefore at most the replication's power, assuming significance is less than power. Thus, low power implies a low probability of replication, regardless of how the previous publication was designed, and regardless of which hypothesis is really true. Stanley and colleagues estimated the average statistical power of the psychological literature by analyzing data from 200 meta-analyses. They found that, on average, psychology studies have between 33.1% and 36.4% statistical power. These values are quite low compared to the 80% considered adequate statistical power for an experiment. Across the 200 meta-analyses, the median proportion of studies with adequate statistical power was between 7.7% and 9.1%, implying that a positive result would replicate with probability less than 10%, regardless of whether the positive result was a true positive or a false positive. The statistical power of neuroscience studies is quite low. The estimated statistical power of fMRI research is between .08 and .31, and that of studies of event-related potentials was estimated as .72‒.98 for large effect sizes, .35‒.73 for medium effects, and .10‒.18 for small effects. In a study published in Nature, psychologist Katherine Button and colleagues conducted a similar study with 49 meta-analyses in neuroscience, estimating a median statistical power of 21%. Meta-scientist John Ioannidis and colleagues computed an estimate of average power for empirical economic research, finding a median power of 18% based on literature drawing upon 6,700 studies. In light of these results, it is plausible that a major reason for widespread failures to replicate in several scientific fields might be very low statistical power on average. The same statistical test with the same significance level will have lower statistical power if the effect size is small under the alternative hypothesis. Complex heritable traits are typically correlated with a large number of genes, each of small effect size, so high power requires a large sample size. In particular, many results from the candidate gene literature suffered from small effect sizes and small sample sizes and would not replicate. More data from genome-wide association studies (GWAS) come close to solving this problem. As a numeric example, most genes associated with schizophrenia risk have low effect size (genotypic relative risk, GRR). A statistical study with 1,000 cases and 1,000 controls has only 0.03% power for a gene with GRR = 1.15, which is already large for schizophrenia. In contrast, the largest GWAS to date has ~100% power for it.
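Power figures like those quoted in this section are easy to recompute with standard tools. The sketch below is illustrative only (the effect size and sample size are hypothetical choices, evaluated with the statsmodels power solver): it shows how low the power of a conventional two-group design is for a smallish effect, and how many participants an adequately powered replication would need.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sample t-test for a smallish effect (Cohen's d = 0.3)
# with 30 participants per group and alpha = 0.05 (illustrative values).
power = analysis.power(effect_size=0.3, nobs1=30, alpha=0.05, ratio=1.0)
print(f"power with n = 30 per group: {power:.2f}")          # roughly 0.2

# Sample size per group needed to reach the conventional 80% power.
n_needed = analysis.solve_power(effect_size=0.3, power=0.80, alpha=0.05, ratio=1.0)
print(f"n per group needed for 80% power: {n_needed:.0f}")  # roughly 175
```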
Positive effect size bias
Even when a study replicates, the replication typically has a smaller effect size. Underpowered studies have a large effect size bias. In studies that statistically estimate a regression coefficient, such as the slope in a linear regression, noise tends to cause the coefficient to be underestimated when the dataset is large, but to be overestimated when the dataset is small.
Problems of meta-analysis
Meta-analyses have their own methodological problems and disputes, which lead to rejection of the meta-analytic method by researchers whose theory is challenged by meta-analysis. Rosenthal proposed the "fail-safe number" (FSN) to address the publication bias against null results. It is defined as follows: suppose the null hypothesis is true; how many unpublished null results would be required to make the current result indistinguishable from the null hypothesis? Rosenthal's point is that certain effect sizes are large enough that, even under a total publication bias against null results (the "file drawer problem"), the number of unpublished null results needed to swamp out the effect would be impossibly large. Thus, the effect size must be statistically significant even after accounting for unpublished null results. One objection to the FSN is that it is calculated as if unpublished results were unbiased samples from the null hypothesis. But if the file drawer problem is real, then unpublished results would have effect sizes concentrated around 0. Thus fewer unpublished null results would be needed to swamp out the effect size, and so the FSN is an overestimate. Another problem with meta-analysis is that bad studies are "infectious" in the sense that one bad study might cause the entire meta-analysis to overestimate statistical significance.
P-hacking
Various statistical methods can be applied to make the p-value appear smaller than it really is. This need not be malicious, as moderately flexible data analysis, routine in research, can increase the false-positive rate to above 60%. For example, if one collects some data, applies several different significance tests to it, and publishes only the one that happens to have a p-value less than 0.05, then the total p-value for "at least one significance test reaches p < 0.05" can be much larger than 0.05, because even if the null hypothesis were true, the probability that one out of many significance tests is extreme is not itself extreme. Typically, a statistical study has multiple steps, with several choices at each step, such as during data collection, outlier rejection, choice of test statistic, choice of one-tailed or two-tailed test, etc. These choices in the "garden of forking paths" multiply, creating many "researcher degrees of freedom". The effect is similar to the file-drawer problem, as the paths not taken are not published. Consider a simple illustration. Suppose the null hypothesis is true, and we have 20 possible significance tests to apply to the dataset. Also suppose the outcomes of the significance tests are independent. By definition of "significance", each test has probability 0.05 of passing at significance level 0.05. The probability that at least 1 out of 20 is significant is, by the assumption of independence, 1 − (1 − 0.05)^20 ≈ 0.64. Another possibility is the multiple comparisons problem. In 2009, it was twice noted that fMRI studies had a suspicious number of positive results with large effect sizes, more than would be expected since the studies have low power (one example had only 13 subjects).
It pointed out that over half of the studies would test for correlation between a phenomenon and individual fMRI voxels, and only report on voxels exceeding chosen thresholds. Optional stopping is a practice where one collects data until some stopping criterion is reached. Though a valid procedure, it is easily misused. The problem is that p-value of an optionally stopped statistical test is larger than it seems. Intuitively, this is because the p-value is supposed to be the sum of all events at least as rare as what is observed. With optional stopping, there are even rarer events that are difficult to account for, i.e. not triggering the optional stopping rule, and collecting even more data before stopping. Neglecting these events leads to a p-value that is too low. In fact, if the null hypothesis is true, any significance level can be reached if one is allowed to keep collecting data and stop when the desired p-value (calculated as if one has always been planning to collect exactly this much data) is obtained. For a concrete example of testing for a fair coin, see p-value#optional stopping. More succinctly, the proper calculation of p-value requires accounting for counterfactuals, that is, what the experimenter could have done in reaction to data that might have been. Accounting for what might have been is hard even for honest researchers. One benefit of preregistration is to account for all counterfactuals, allowing the p-value to be calculated correctly. The problem of early stopping is not just limited to researcher misconduct. There is often pressure to stop early if the cost of collecting data is high. Some animal ethics boards even mandate early stopping if the study obtains a significant result midway. Such practices are widespread in psychology. In a 2012 survey, 56% of psychologists admitted to early stopping, 46% to only reporting analyses that "worked", and 38% to post hoc exclusion, that is, removing some data after analysis was already performed on the data before reanalyzing the remaining data (often on the premise of "outlier removal"). Statistical heterogeneity As also reported by Stanley and colleagues, a further reason studies might fail to replicate is high heterogeneity of the to-be-replicated effects. In meta-analysis, "heterogeneity" refers to the variance in research findings that results from there being no single true effect size. Instead, findings in such cases are better seen as a distribution of true effects. Statistical heterogeneity is calculated using the I-squared statistic, defined as "the proportion (or percentage) of observed variation among reported effect sizes that cannot be explained by the calculated standard errors associated with these reported effect sizes". This variation can be due to differences in experimental methods, populations, cohorts, and statistical methods between replication studies. Heterogeneity poses a challenge to studies attempting to replicate previously found effect sizes. When heterogeneity is high, subsequent replications have a high probability of finding an effect size radically different than that of the original study. Importantly, significant levels of heterogeneity are also found in direct/exact replications of a study. Stanley and colleagues discuss this while reporting a study by quantitative behavioral scientist Richard Klein and colleagues, where the authors attempted to replicate 15 psychological effects across 36 different sites in Europe and the U.S. 
In the study, Klein and colleagues found significant amounts of heterogeneity in 8 out of 16 effects (I-squared = 23% to 91%). Importantly, while the replication sites intentionally differed on a variety of characteristics, such differences could account for very little heterogeneity . According to Stanley and colleagues, this suggested that heterogeneity could have been a genuine characteristic of the phenomena being investigated. For instance, phenomena might be influenced by so-called "hidden moderators" – relevant factors that were previously not understood to be important in the production of a certain effect. In their analysis of 200 meta-analyses of psychological effects, Stanley and colleagues found a median percent of heterogeneity of I-squared = 74%. According to the authors, this level of heterogeneity can be considered "huge". It is three times larger than the random sampling variance of effect sizes measured in their study. If considered along sampling error, heterogeneity yields a standard deviation from one study to the next even larger than the median effect size of the 200 meta-analyses they investigated. The authors conclude that if replication is defined by a subsequent study finding a sufficiently similar effect size to the original, replication success is not likely even if replications have very large sample sizes. Importantly, this occurs even if replications are direct or exact since heterogeneity nonetheless remains relatively high in these cases. Others Within economics, the replication crisis may be also exacerbated because econometric results are fragile: using different but plausible estimation procedures or data preprocessing techniques can lead to conflicting results. Context sensitivity New York University professor Jay Van Bavel and colleagues argue that a further reason findings are difficult to replicate is the sensitivity to context of certain psychological effects. On this view, failures to replicate might be explained by contextual differences between the original experiment and the replication, often called "hidden moderators". Van Bavel and colleagues tested the influence of context sensitivity by reanalyzing the data of the widely cited Reproducibility Project carried out by the Open Science Collaboration. They re-coded effects according to their sensitivity to contextual factors and then tested the relationship between context sensitivity and replication success in various regression models. Context sensitivity was found to negatively correlate with replication success, such that higher ratings of context sensitivity were associated with lower probabilities of replicating an effect. Importantly, context sensitivity significantly correlated with replication success even when adjusting for other factors considered important for reproducing results (e.g., effect size and sample size of original, statistical power of the replication, methodological similarity between original and replication). In light of the results, the authors concluded that attempting a replication in a different time, place or with a different sample can significantly alter an experiment's results. Context sensitivity thus may be a reason certain effects fail to replicate in psychology. 
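The I-squared statistic quoted above can be computed directly from a set of study-level effect sizes and their standard errors. The sketch below uses invented numbers purely for illustration and implements the usual definition based on Cochran's Q.

```python
import numpy as np

def i_squared(effects, std_errors):
    """Proportion of observed variation in effect sizes not explained by sampling error."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)       # fixed-effect pooled estimate
    q = np.sum(weights * (effects - pooled) ** 2)              # Cochran's Q statistic
    df = len(effects) - 1
    if q == 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Hypothetical replication results: standardized effect sizes and their standard errors.
effects = [0.45, 0.10, 0.32, 0.05, 0.60, 0.22]
std_errors = [0.10, 0.09, 0.11, 0.10, 0.12, 0.09]
print(f"I² = {i_squared(effects, std_errors):.0f}%")
```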
Bayesian explanation
In the framework of Bayesian probability, by Bayes' theorem, rejecting the null hypothesis at a significance level of 5% does not mean that the posterior probability for the alternative hypothesis is 95%, and the posterior probability is also different from the probability of replication. Consider a simplified case where there are only two hypotheses. Let the prior probability of the null hypothesis be Pr(H0), and that of the alternative be Pr(H1) = 1 − Pr(H0). For a given statistical study, let its false positive rate (significance level) be α, and its true positive rate (power) be 1 − β. For illustrative purposes, let the significance level be 0.05 and the power be 0.45 (underpowered). Now, by Bayes' theorem, conditional on the study finding H1 to be true, the posterior probability of H1 actually being true is not 1 − α, but
Pr(H1 | positive result) = Pr(H1)(1 − β) / [Pr(H1)(1 − β) + Pr(H0)α],
and the probability of replicating the statistical study (that is, of an identical follow-up study again finding a positive result) is
Pr(replication) = Pr(H1 | positive result)(1 − β) + Pr(H0 | positive result)α,
which is also different from the posterior probability. In particular, for a fixed level of significance, the probability of replication increases with the power and with the prior probability for H1. If the prior probability for H1 is small, then one would require a high power for replication. For example, if the prior probability of the null hypothesis is 0.9, the significance level is 0.05, and the power is 0.45, then after the study finds a positive result the posterior probability for H1 is 0.5, and the replication probability is 0.25.
Problem with null hypothesis testing
Some argue that null hypothesis testing is itself inappropriate, especially in "soft sciences" like social psychology. As repeatedly observed by statisticians, in complex systems, such as social psychology, "the null hypothesis is always false", or "everything is correlated". If so, then when the null hypothesis is not rejected, that does not show that the null hypothesis is true, but merely that the test produced a false negative, typically due to low power. Low power is especially prevalent in subject areas where effect sizes are small and data are expensive to acquire, such as social psychology. Furthermore, when the null hypothesis is rejected, that might not be evidence for the substantive alternative hypothesis. In soft sciences, many hypotheses can predict a correlation between two variables. Thus, evidence against the null hypothesis "there is no correlation" is no evidence for any one of the many alternative hypotheses that equally well predict "there is a correlation". Fisher developed null hypothesis significance testing (NHST) for agronomy, where rejecting the null hypothesis is usually good proof of the alternative hypothesis, since there are not many of them. Rejecting the hypothesis "fertilizer does not help" is evidence for "fertilizer helps". But in psychology, there are many alternative hypotheses for every null hypothesis. In particular, when statistical studies on extrasensory perception reject the null hypothesis at an extremely low p-value (as in the case of Daryl Bem), this does not imply the alternative hypothesis "ESP exists". Far more likely is that there was a small (non-ESP) signal in the experimental setup that was measured precisely. Paul Meehl noted that statistical hypothesis testing is used differently in "soft" psychology (personality, social, etc.) than in physics. In physics, a theory makes a quantitative prediction and is tested by checking whether the prediction falls within the statistically measured interval. In soft psychology, a theory makes a directional prediction and is tested by checking whether the null hypothesis is rejected in the right direction.
Consequently, improved experimental technique makes theories more likely to be falsified in physics but less likely to be falsified in soft psychology, since the null hypothesis there is always false: any two variables are correlated by a "crud factor" of about 0.30. The net effect is an accumulation of theories that remain unfalsified, but with no empirical evidence for preferring one over the others.
Base rate fallacy
According to philosopher Alexander Bird, a possible reason for the low rates of replicability in certain scientific fields is that a majority of tested hypotheses are false a priori. On this view, low rates of replicability could be consistent with quality science. Relatedly, the expectation that most findings should replicate would be misguided and, according to Bird, a form of base rate fallacy. Bird's argument works as follows. Assume an ideal test of significance, in which the probability of incorrectly rejecting the null hypothesis is 5% (the Type I error rate) and the probability of correctly rejecting the null hypothesis is 80% (the power). In a context where a high proportion of tested hypotheses are false, the number of false positives can then be high compared to the number of true positives. For example, in a situation where only 10% of tested hypotheses are actually true, one can calculate that as many as 36% of significant results will be false positives. The claim that the falsity of most tested hypotheses can explain low rates of replicability is even more relevant when considering that the average power of statistical tests in certain fields might be much lower than 80%. For example, the proportion of false positives increases to a value between 55.2% and 57.6% when calculated with estimates of an average power between 33.1% and 36.4% for psychology studies, as provided by Stanley and colleagues in their analysis of 200 meta-analyses in the field (a short numerical check of these figures appears after the Consequences section below). A high proportion of false positives would then result in many research findings being non-replicable. Bird notes that the claim that a majority of tested hypotheses are false a priori in certain scientific fields might be plausible given factors such as the complexity of the phenomena under investigation, the fact that theories are seldom undisputed, the "inferential distance" between theories and hypotheses, and the ease with which hypotheses can be generated. In this respect, the fields Bird takes as examples are clinical medicine, genetic and molecular epidemiology, and social psychology. This situation is radically different in fields where theories have an outstanding empirical basis and hypotheses can be easily derived from theories (e.g., experimental physics).
Consequences
When effects are wrongly stated as relevant in the literature, failure to detect this by replication will lead to the canonization of such false facts. A 2021 study found that papers in leading general-interest, psychology, and economics journals with findings that could not be replicated tend to be cited more over time than reproducible research papers, likely because these results are surprising or interesting. The trend is not affected by publication of failed reproductions, after which only 12% of papers that cite the original research will mention the failed replication. Further, experts are able to predict which studies will be replicable, leading the authors of the 2021 study, Marta Serra-Garcia and Uri Gneezy, to conclude that experts apply lower standards to interesting results when deciding whether to publish them.
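Both the Bayesian-explanation example and Bird's base rate figures follow from a few lines of arithmetic, as in the hypothetical check below.

```python
def posterior_h1(prior_h0, alpha, power):
    """Probability that the alternative is true given one significant (positive) result."""
    prior_h1 = 1.0 - prior_h0
    return prior_h1 * power / (prior_h1 * power + prior_h0 * alpha)

def replication_probability(prior_h0, alpha, power):
    """Probability that an identical follow-up study is also significant."""
    p_h1 = posterior_h1(prior_h0, alpha, power)
    return p_h1 * power + (1.0 - p_h1) * alpha

def false_positive_share(base_rate, alpha, power):
    """Share of significant results that are false positives (Bird's base rate argument)."""
    return (1.0 - base_rate) * alpha / (base_rate * power + (1.0 - base_rate) * alpha)

# Bayesian explanation example: P(H0) = 0.9, alpha = 0.05, power = 0.45.
print(posterior_h1(0.9, 0.05, 0.45))             # 0.5
print(replication_probability(0.9, 0.05, 0.45))  # 0.25
# Base rate fallacy: 10% of hypotheses true, alpha = 0.05, power 0.80 versus roughly 0.33-0.36.
print(round(false_positive_share(0.10, 0.05, 0.80), 2))   # 0.36
print(round(false_positive_share(0.10, 0.05, 0.331), 3))  # about 0.576
print(round(false_positive_share(0.10, 0.05, 0.364), 3))  # about 0.553
```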
Public awareness and perceptions Concerns have been expressed within the scientific community that the general public may consider science less credible due to failed replications. Research supporting this concern is sparse, but a nationally representative survey in Germany showed that more than 75% of Germans have not heard of replication failures in science. The study also found that most Germans have positive perceptions of replication efforts: only 18% think that non-replicability shows that science cannot be trusted, while 65% think that replication research shows that science applies quality control, and 80% agree that errors and corrections are part of science. Response in academia With the replication crisis of psychology earning attention, Princeton University psychologist Susan Fiske drew controversy for speaking against critics of psychology for what she called bullying and undermining the science. She called these unidentified "adversaries" names such as "methodological terrorist" and "self-appointed data police", saying that criticism of psychology should be expressed only in private or by contacting the journals. Columbia University statistician and political scientist Andrew Gelman responded to Fiske, saying that she had found herself willing to tolerate the "dead paradigm" of faulty statistics and had refused to retract publications even when errors were pointed out. He added that her tenure as editor had been abysmal and that a number of published papers she edited were found to be based on extremely weak statistics; one of Fiske's own published papers had a major statistical error and "impossible" conclusions. Credibility revolution Some researchers in psychology indicate that the replication crisis is a foundation for a "credibility revolution", where changes in standards by which psychological science are evaluated may include emphasizing transparency and openness, preregistering research projects, and replicating research with higher standards for evidence to improve the strength of scientific claims. Such changes may diminish the productivity of individual researchers, but this effect could be avoided by data sharing and greater collaboration. A credibility revolution could be good for the research environment. Remedies Focus on the replication crisis has led to renewed efforts in psychology to retest important findings. A 2013 special edition of the journal Social Psychology focused on replication studies. Standardization as well as (requiring) transparency of the used statistical and experimental methods have been proposed. Careful documentation of the experimental set-up is considered crucial for replicability of experiments and various variables may not be documented and standardized such as animals' diets in animal studies. A 2016 article by John Ioannidis elaborated on "Why Most Clinical Research Is Not Useful". Ioannidis describes what he views as some of the problems and calls for reform, characterizing certain points for medical research to be useful again; one example he makes is the need for medicine to be patient-centered (e.g. in the form of the Patient-Centered Outcomes Research Institute) instead of the current practice to mainly take care of "the needs of physicians, investigators, or sponsors". Reform in scientific publishing Metascience Metascience is the use of scientific methodology to study science itself. It seeks to increase the quality of scientific research while reducing waste. 
It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and where improvements can be made. Metascience is concerned with all fields of research and has been called "a bird's eye view of science." In Ioannidis's words, "Science is the best thing that has happened to human beings ... but we can do it better." Meta-research continues to be conducted to identify the roots of the crisis and to address them. Methods of addressing the crisis include pre-registration of scientific studies and clinical trials as well as the founding of organizations such as CONSORT and the EQUATOR Network that issue guidelines for methodology and reporting. Efforts continue to reform the system of academic incentives, improve the peer review process, reduce the misuse of statistics, combat bias in scientific literature, and increase the overall quality and efficiency of the scientific process. Presentation of methodology Some authors have argued that the insufficient communication of experimental methods is a major contributor to the reproducibility crisis and that better reporting of experimental design and statistical analyses would improve the situation. These authors tend to plead for both a broad cultural change in the scientific community of how statistics are considered and a more coercive push from scientific journals and funding bodies. But concerns have been raised about the potential for standards for transparency and replication to be misapplied to qualitative as well as quantitative studies. Business and management journals that have introduced editorial policies on data accessibility, replication, and transparency include the Strategic Management Journal, the Journal of International Business Studies, and the Management and Organization Review. Result-blind peer review In response to concerns in psychology about publication bias and data dredging, more than 140 psychology journals have adopted result-blind peer review. In this approach, studies are accepted not on the basis of their findings and after the studies are completed, but before they are conducted and on the basis of the methodological rigor of their experimental designs, and the theoretical justifications for their statistical analysis techniques before data collection or analysis is done. Early analysis of this procedure has estimated that 61% of result-blind studies have led to null results, in contrast to an estimated 5% to 20% in earlier research. In addition, large-scale collaborations between researchers working in multiple labs in different countries that regularly make their data openly available for different researchers to assess have become much more common in psychology. Pre-registration of studies Scientific publishing has begun using pre-registration reports to address the replication crisis. The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the method and analysis plan is vetted through peer-review, publication of the findings is provisionally guaranteed, based on whether the authors follow the proposed protocol. One goal of registered reports is to circumvent the publication bias toward significant findings that can lead to implementation of questionable research practices. Another is to encourage publication of studies with rigorous methods. 
The journal Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. The editor in chief also noted that the editorial staff will be asking for replication of studies with surprising findings from examinations using small sample sizes before allowing the manuscripts to be published. Metadata and digital tools for tracking replications It has been suggested that "a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed" is needed. Categorizations and ratings of reproducibility at the study or results level, as well as addition of links to and rating of third-party confirmations, could be conducted by the peer-reviewers, the scientific journal, or by readers in combination with novel digital platforms or tools. Statistical reform Requiring smaller p-values Many publications require a p-value of p < 0.05 to claim statistical significance. The paper "Redefine statistical significance", signed by a large number of scientists and mathematicians, proposes that in "fields where the threshold for defining statistical significance for new discoveries is p < 0.05, we propose a change to p < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields." Their rationale is that "a leading cause of non-reproducibility (is that the) statistical standards of evidence for claiming new discoveries in many fields of science are simply too low. Associating 'statistically significant' findings with p < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems." This call was subsequently criticised by another large group, who argued that "redefining" the threshold would not fix current problems, would lead to some new ones, and that in the end, all thresholds needed to be justified case-by-case instead of following general conventions. A 2022 followup study examined these competing recommendations' practical impact. Despite high citation rates of both proposals, researchers found limited implementation of either the p < 0.005 threshold or the case-by-case justification approach in practice. This revealed what the authors called a "vicious cycle", in which scientists reject recommendations because they are not standard practice, while the recommendations fail to become standard practice because few scientists adopt them. Addressing misinterpretation of p-values Although statisticians are unanimous that the use of "p < 0.05" as a standard for significance provides weaker evidence than is generally appreciated, there is a lack of unanimity about what should be done about it. Some have advocated that Bayesian methods should replace p-values. This has not happened on a wide scale, partly because it is complicated and partly because many users distrust the specification of prior distributions in the absence of hard data. A simplified version of the Bayesian argument, based on testing a point null hypothesis was suggested by pharmacologist David Colquhoun. The logical problems of inductive inference were discussed in "The Problem with p-values" (2016). The hazards of reliance on p-values arises partly because even an observation of p = 0.001 is not necessarily strong evidence against the null hypothesis. 
Despite the fact that the likelihood ratio in favor of the alternative hypothesis over the null is close to 100, if the hypothesis were implausible, with a prior probability of a real effect being 0.1, even the observation of p = 0.001 would have a false positive risk of 8 percent. It would still fail to reach the 5 percent level. It was recommended that the terms "significant" and "non-significant" should not be used. p-values and confidence intervals should still be specified, but they should be accompanied by an indication of the false-positive risk. It was suggested that the best way to do this is to calculate the prior probability that one would need to believe in order to achieve a false positive risk of a certain level, such as 5%. The calculations can be done with various computer software. This reverse Bayesian approach, which physicist Robert Matthews suggested in 2001, is one way to avoid the problem that the prior probability is rarely known. Encouraging larger sample sizes To improve the quality of replications, larger sample sizes than those used in the original study are often needed. Larger sample sizes are needed because estimates of effect sizes in published work are often exaggerated due to publication bias and large sampling variability associated with small sample sizes in an original study. Further, using significance thresholds usually leads to inflated effects, because, particularly with small sample sizes, only the largest effects will become significant. Cross-validation One common statistical problem is overfitting, that is, when researchers fit a regression model over a large number of variables but a small number of data points. For example, a typical fMRI study of emotion, personality, and social cognition has fewer than 100 subjects, but each subject has 10,000 voxels. The study would fit a sparse linear regression model that uses the voxels to predict a variable of interest, such as self-reported stress. But the study would then report on the p-value of the model on the same data it was fitted to. The standard approach in statistics, where data is split into a training and a validation set, is resisted because test subjects are expensive to acquire. One possible solution is cross-validation, which allows model validation while also allowing the whole dataset to be used for model-fitting. Replication efforts Funding In July 2016, the Netherlands Organisation for Scientific Research made €3 million available for replication studies. The funding is for replication based on reanalysis of existing data and replication by collecting and analysing new data. Funding is available in the areas of social sciences, health research and healthcare innovation. In 2013, the Laura and John Arnold Foundation funded the launch of The Center for Open Science with a $5.25 million grant. By 2017, it provided an additional $10 million in funding. It also funded the launch of the Meta-Research Innovation Center at Stanford, based at Stanford University and run by Ioannidis and medical scientist Steven Goodman, to study ways to improve scientific research. It also provided funding for the AllTrials initiative led in part by medical scientist Ben Goldacre. Emphasis in post-secondary education Based on coursework in experimental methods at MIT, Stanford, and the University of Washington, it has been suggested that methods courses in psychology and other fields should emphasize replication attempts rather than original studies.
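A minimal sketch of the false positive risk argument above, reusing the figures quoted in the text (a likelihood ratio of about 100 for an observation of p = 0.001 and a prior probability of a real effect of 0.1); the reverse Bayesian helper that solves for the prior needed to reach a 5% risk is a hypothetical illustration of the idea, not code from the cited authors.

```python
# Sketch of the false-positive-risk calculation described above (illustrative only).
def false_positive_risk(likelihood_ratio, prior):
    """Probability that a 'significant' result is a false positive, given the
    likelihood ratio in favour of a real effect and the prior probability of one."""
    posterior_odds = likelihood_ratio * prior / (1 - prior)
    return 1 / (1 + posterior_odds)

def prior_needed(likelihood_ratio, target_risk=0.05):
    """Reverse Bayesian step: the prior one would need to believe in order to
    bring the false positive risk down to the target level."""
    return (1 - target_risk) / (1 - target_risk + likelihood_ratio * target_risk)

print(false_positive_risk(100, 0.1))  # ~0.08, matching the 8 percent quoted above
print(prior_needed(100, 0.05))        # ~0.16, the prior required for a 5% risk
```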
Such an approach would help students learn scientific methodology and provide numerous independent replications of meaningful scientific findings. Some have recommended that graduate students should be required to publish a high-quality replication attempt on a topic related to their doctoral research prior to graduation. Replication database As replication attempts have grown in number, a concern has arisen that uncoordinated efforts may lead to research waste, which in turn has created a need to systematically track replication attempts. As a result, several databases have been created. One such effort, the Replication Database, covers psychology and speech-language therapy, among other disciplines, and aims to promote theory-driven research, optimize the use of academic and institutional resources, and promote trust in science. Final year thesis Some institutions require undergraduate students to submit a final year thesis that consists of an original piece of research. Daniel Quintana, a psychologist at the University of Oslo in Norway, has recommended that students should be encouraged to perform replication studies in thesis projects, as well as being taught about open science. Semi-automated Researchers demonstrated a way of semi-automated testing for reproducibility: statements about experimental results were extracted from (as of 2022, non-semantic) gene expression cancer research papers and subsequently reproduced via the robot scientist "Eve". Problems of this approach include that it may not be feasible for many areas of research and that sufficient experimental data may not get extracted from some or many papers even if available. Involving original authors Psychologist Daniel Kahneman argued that, in psychology, the original authors should be involved in the replication effort because the published methods are often too vague. Others, such as psychologist Andrew Wilson, disagree, arguing that the original authors should instead write down their methods in detail. An investigation of replication rates in psychology in 2012 indicated higher success rates of replication in replication studies when there was author overlap with the original authors of a study (91.7% successful replication rates in studies with author overlap compared to 64.6% without author overlap). Big team science The replication crisis has led to the formation and development of various large-scale and collaborative communities that pool their resources to address a single question across cultures, countries and disciplines. The focus is on replication, to ensure that the effect generalizes beyond a specific culture and to investigate whether the effect is replicable and genuine. This allows interdisciplinary internal reviews, multiple perspectives, uniform protocols across labs, and recruiting larger and more diverse samples. Researchers can collaborate by coordinating data collection or by funding data collection by researchers who may not otherwise have access to the funds, allowing larger sample sizes and increasing the robustness of the conclusions. Broader changes to scientific approach Emphasize triangulation, not just replication Psychologist Marcus R. Munafò and epidemiologist George Davey Smith argue, in a piece published by Nature, that research should emphasize triangulation, not just replication, to protect against flawed ideas.
They claim that, Complex systems paradigm The dominant scientific and statistical model of causation is the linear model. The linear model assumes that mental variables are stable properties which are independent of each other. In other words, these variables are not expected to influence each other. Instead, the model assumes that the variables will have an independent, linear effect on observable outcomes. Social scientists Sebastian Wallot and Damian Kelty-Stephen argue that the linear model is not always appropriate. An alternative is the complex system model, which assumes that mental variables are interdependent. These variables are not assumed to be stable; rather, they interact and adapt to each specific context. They argue that the complex system model is often more appropriate in psychology, and that the use of the linear model when the complex system model is more appropriate will result in failed replications. Replication should seek to revise theories Replication is fundamental for scientific progress in confirming original findings. However, replication alone is not sufficient to resolve the replication crisis. Replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. This approach therefore involves pruning existing theories, comparing all the alternative theories, and making replication efforts more generative and engaged in theory-building. Beyond replication, assessing the extent to which results generalise across geographical, historical and social contexts is important for several scientific fields, especially for practitioners and policy makers, who rely on such analyses to guide important strategic decisions. Reproducible and replicable findings have been the best predictor of generalisability beyond historical and geographical contexts, indicating that, for the social sciences, results from a certain time period and place can meaningfully speak to what is universally present in individuals. Open science Open data, open source software and open source hardware are all critical to enabling reproducibility in the sense of validation of the original data analysis. The use of proprietary software, the lack of publication of analysis software and the lack of open data prevent the replication of studies. Unless software used in research is open source, reproducing results with different software and hardware configurations is impossible. CERN has both Open Data and CERN Analysis Preservation projects for storing data, all relevant information, and all software and tools needed to preserve an analysis at the large experiments of the LHC. Aside from all software and data, preserved analysis assets include metadata that enable understanding of the analysis workflow, related software, systematic uncertainties, statistics procedures and meaningful ways to search for the analysis, as well as references to publications and to backup material. CERN software is open source and available for use outside of particle physics, and there is some guidance provided to other fields on the broad approaches and strategies used for open science in contemporary particle physics. Online repositories where data, protocols, and findings can be stored and evaluated by the public seek to improve the integrity and reproducibility of research.
Examples of such repositories include the Open Science Framework, Registry of Research Data Repositories, and Psychfiledrawer.org. Sites like Open Science Framework offer badges for using open science practices in an effort to incentivize scientists. However, there have been concerns that those who are most likely to provide their data and code for analyses are the researchers that are likely the most sophisticated. Ioannidis suggested that "the paradox may arise that the most meticulous and sophisticated and method-savvy and careful researchers may become more susceptible to criticism and reputation attacks by reanalyzers who hunt for errors, no matter how negligible these errors are".
Physical sciences
Science basics
Basics and measurement
36132944
https://en.wikipedia.org/wiki/Motion%20%28geometry%29
Motion (geometry)
In geometry, a motion is an isometry of a metric space. For instance, a plane equipped with the Euclidean distance metric is a metric space in which a mapping associating congruent figures is a motion. More generally, the term motion is a synonym for surjective isometry in metric geometry, including elliptic geometry and hyperbolic geometry. In the latter case, hyperbolic motions provide an approach to the subject for beginners. Motions can be divided into direct and indirect motions. Direct, proper or rigid motions are motions like translations and rotations that preserve the orientation of a chiral shape. Indirect, or improper motions are motions like reflections, glide reflections and improper rotations that invert the orientation of a chiral shape. Some geometers define motion in such a way that only direct motions are motions. In differential geometry In differential geometry, a diffeomorphism is called a motion if it induces an isometry between the tangent space at a manifold point and the tangent space at the image of that point. Group of motions Given a geometry, the set of motions forms a group under composition of mappings. This group of motions is noted for its properties. For example, the Euclidean group is noted for the normal subgroup of translations. In the plane, a direct Euclidean motion is either a translation or a rotation, while in space every direct Euclidean motion may be expressed as a screw displacement according to Chasles' theorem. When the underlying space is a Riemannian manifold, the group of motions is a Lie group. Furthermore, the manifold has constant curvature if and only if, for every pair of points and every isometry, there is a motion taking one point to the other for which the motion induces the isometry. The idea of a group of motions for special relativity has been advanced as Lorentzian motions. For example, fundamental ideas were laid out for a plane characterized by the quadratic form x² − y² in American Mathematical Monthly. The motions of Minkowski space were described by Sergei Novikov in 2006: The physical principle of constant velocity of light is expressed by the requirement that the change from one inertial frame to another is determined by a motion of Minkowski space, i.e. by a transformation preserving space-time intervals. This means that for each pair of points x and y in R1,3, the space-time interval between them equals the interval between their images. History An early appreciation of the role of motion in geometry was given by Alhazen (965 to 1039). His work "Space and its Nature" uses comparisons of the dimensions of a mobile body to quantify the vacuum of imaginary space. He was criticised by Omar Khayyam, who pointed out that Aristotle had condemned the use of motion in geometry. In the 19th century Felix Klein became a proponent of group theory as a means to classify geometries according to their "groups of motions". He proposed using symmetry groups in his Erlangen program, a suggestion that was widely adopted. He noted that every Euclidean congruence is an affine mapping, and each of these is a projective transformation; therefore the group of projectivities contains the group of affine maps, which in turn contains the group of Euclidean congruences. The term motion, shorter than transformation, puts more emphasis on the adjectives: projective, affine, Euclidean. The context was thus expanded, so much that "In topology, the allowed movements are continuous invertible deformations that might be called elastic motions."
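As an illustrative sketch (not taken from any cited source), the direct motions of the Euclidean plane can be modelled as maps z ↦ ωz + b on the complex numbers with |ω| = 1; the snippet below checks numerically that such maps preserve distances and that the composition of two of them is again a map of the same kind, consistent with the group structure described above.

```python
# Illustrative sketch: direct plane motions as z -> w*z + b with |w| = 1,
# checked numerically for distance preservation and closure under composition.
import cmath
import math
import random

def make_motion(theta, b):
    """Rotation by angle theta about the origin followed by translation by b."""
    w = cmath.exp(1j * theta)  # unit-modulus complex number
    return lambda z: w * z + b

def compose(f, g):
    """Composition of mappings: (f o g)(z) = f(g(z))."""
    return lambda z: f(g(z))

f = make_motion(math.pi / 3, 2 + 1j)
g = make_motion(-math.pi / 5, -1 + 4j)
h = compose(f, g)  # should again behave like a direct motion

for _ in range(5):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    for m in (f, g, h):
        # the distance between the images equals the original distance
        assert math.isclose(abs(m(z1) - m(z2)), abs(z1 - z2), rel_tol=1e-9)
print("distances preserved by f, g and their composition")
```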
The science of kinematics is dedicated to rendering physical motion into expression as mathematical transformation. Frequently the transformation can be written using vector algebra and linear mapping. A simple example is a turn written as a complex number multiplication: z ↦ ωz, where ω = cos θ + i sin θ is a complex number of unit modulus. Rotation in space is achieved by use of quaternions, and Lorentz transformations of spacetime by use of biquaternions. Early in the 20th century, hypercomplex number systems were examined. Later their automorphism groups led to exceptional groups such as G2. In the 1890s logicians were reducing the primitive notions of synthetic geometry to an absolute minimum. Giuseppe Peano and Mario Pieri used the expression motion for the congruence of point pairs. Alessandro Padoa celebrated the reduction of primitive notions to merely point and motion in his report to the 1900 International Congress of Philosophy. It was at this congress that Bertrand Russell was exposed to continental logic through Peano. In his book Principles of Mathematics (1903), Russell considered a motion to be a Euclidean isometry that preserves orientation. In 1914 D. M. Y. Sommerville used the idea of a geometric motion to establish the idea of distance in hyperbolic geometry when he wrote Elements of Non-Euclidean Geometry. He explains: By a motion or displacement in the general sense is not meant a change of position of a single point or any bounded figure, but a displacement of the whole space, or, if we are dealing with only two dimensions, of the whole plane. A motion is a transformation which changes each point P into another point P′ in such a way that distances and angles are unchanged. Axioms of motion László Rédei gives as axioms of motion: Any motion is a one-to-one mapping of space R onto itself such that every three points on a line will be transformed into (three) points on a line. The identical mapping of space R is a motion. The product of two motions is a motion. The inverse mapping of a motion is a motion. If we have two planes A, A', two lines g, g' and two points P, P' such that P is on g, g is on A, P' is on g' and g' is on A', then there exists a motion mapping A to A', g to g' and P to P'. If there is a plane A, a line g, and a point P such that P is on g and g is on A, then there exist four motions mapping A, g and P onto themselves, respectively, and not more than two of these motions may have every point of g as a fixed point, while there is one of them (i.e. the identity) for which every point of A is fixed. There exist three points A, B, P on line g such that P is between A and B, and for every point C (unequal to P) between A and B there is a point D between C and P for which no motion with P as fixed point can be found that will map C onto a point lying between D and P. Axioms 2 to 4 imply that motions form a group. Axiom 5 means that the group of motions provides group actions on R that are transitive, so that there is a motion that maps any given line to any other line.
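Since the passage mentions that rotation in space is achieved by quaternions, the following is a small illustrative sketch (an assumption-level example, not taken from the cited texts) that rotates a vector by quaternion conjugation v ↦ qvq* and checks that lengths, and hence distances from the origin, are preserved, as expected of a motion fixing the origin.

```python
# Illustrative sketch: rotating a 3D vector with a unit quaternion, v -> q v q*.
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    """Rotate vector v about a unit-length axis by the given angle."""
    half = angle / 2
    q = (math.cos(half), *(math.sin(half) * a for a in axis))
    q_conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), q_conj)
    return (x, y, z)

v = (1.0, 2.0, 2.0)
r = rotate(v, (0.0, 0.0, 1.0), math.pi / 2)  # quarter turn about the z-axis
print(r)  # approximately (-2.0, 1.0, 2.0)
print(math.isclose(sum(c * c for c in r), sum(c * c for c in v)))  # length preserved
```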
Mathematics
Geometry: General
null
24886645
https://en.wikipedia.org/wiki/Climate%20change%20in%20Canada
Climate change in Canada
Climate change is greatly impacting Canada's environment and landscapes. Extreme weather has become more frequent and severe because of the continued release of greenhouse gases into the atmosphere. The number of climate change–related events, such as the 2021 British Columbia Floods and an increasing number of forest fires, has become an increasing concern over time. Canada's annual average temperature over land warmed by 1.7 °C between 1948 and 2016. The rate of warming is highest in Canada's north, the Prairies, and northern British Columbia. The country's precipitation has increased in recent years and wildfires have expanded from seasonal events to year-round threats. Canada was the world's 11th highest emitter of carbon dioxide (CO2) and as of 2021 the 7th highest emitter of greenhouse gases. Canada has a long history of producing industrial emissions going back to the late 19th century. In 2022, transport, oil and gas extraction, and fugitive emissions together accounted for 82% of the country's total emissions. From 1990 to 2022, GHG emissions from conventional oil production increased by 24%, those from multi-stage fracturing techniques increased by 56%, and emissions from oil sands production increased by 467%. Canada committed to reducing its greenhouse gas (GHG) emissions by 30% below 2005 levels by 2030 under the Paris Agreement. In July 2021, Canada enhanced the Paris Agreement plans with a new goal of reducing emissions by 40–45% below 2005 levels by 2030, enacting the Canadian Net-Zero Emissions Accountability Act. In 2019, the House of Commons voted to declare a national climate emergency in Canada. Several climate change mitigation policies have been implemented in the country, such as carbon pricing, emissions trading and climate change funding programs. Greenhouse gas emissions Climate change is the result of greenhouse gas emissions, which are produced by human activity. Canada was the world's 7th largest greenhouse gas emitter in terms of GHG Inventory data, as of 2021. In 2020, Canada emitted a total of 678 million tonnes of carbon dioxide equivalent (Mt CO2 eq) into the atmosphere. This represents a decrease from 1.8% of global emissions (730 Mt CO2 eq) in 2005 to 1.5% in 2020, but still an increase from 602 Mt CO2 eq in 1990. In 2022, Canada's GHG emissions were 708 Mt CO2 eq, still below pre-pandemic (2019) emissions, but an increase of 9.3 Mt (1.3%) compared to 2021. The WRI's Climate Analysis Indicators Tool estimates that, between 1950 and 2000, Canada had the highest greenhouse gas emissions per capita of any first world country. In 2020, of all the G20 countries, Canada was second only to Saudi Arabia for greenhouse gas emissions per capita. Canada has one of the heaviest climate debts in the world, with a very long history of producing industrial greenhouse gas emissions. Canada is the 10th heaviest cumulative emitter as assessed by model-based land-use mitigation measures, with 2.6% of cumulative emissions. Canada's 65.5 billion tonnes of carbon come roughly equally from the use of fossil fuels and from deforestation and land use. Energy consumption Electricity consumption in Canada in 2017 accounted for 74 Mt CO2 eq, or 10% of the country's emissions. This sector's climate footprint has been significantly reduced in recent decades due to the closure of many coal-fired power stations. As of 2017, 81% of Canada's electricity is produced by non-emitting energy sources, such as hydro, nuclear, solar or wind power.
Fossil fuels provide 19% of Canadian electric power, about half as coal (9% of the total) and the remainder a mix of natural gas and oil. Only five provinces use coal for electricity generation. Alberta, Saskatchewan, and Nova Scotia rely on coal for nearly half their generation, while other provinces and territories use little or none. Alberta and Saskatchewan also use a substantial amount of natural gas. Remote communities, including all of Nunavut and much of the Northwest Territories, produce most of their electricity from diesel generators, at high economic and environmental cost. The federal government has set up initiatives to reduce dependence on diesel-fired electricity. Transportation Canada is a large country with a low population density, so transportation – often in cold weather when fuel efficiency drops – is a big part of the economy. In 2017, 24% of Canada's greenhouse gas (GHG) emissions came from trucks, trains, airplanes and cars. The vast majority of Canadian emissions from transportation come from road transportation, accounting for 144 Mt CO2 eq, or 20% of total emissions. These originate from individual cars, but also from long-haul trucks, which are used to transport most goods across the country. In 2018, the Canadian truck industry delivered 63.7 million shipments. In 2019, Canadian factories produced 1.4 million new trucks, more than triple the Canadian car production. The Canadian domestic aviation industry, represented largely by the country's two main airlines (Air Canada and WestJet), produced 7.1 Mt CO2 eq in 2017, accounting for 1% of Canada's total greenhouse gas emissions. Fossil fuel production The most polluting industry in terms of GHG emissions in Canada is the oil and gas sector. This industry produces 195 Mt CO2 eq every year, which is 27% of the national total. Driven by the high emissions required for the exploitation of the tar sands in Alberta, greenhouse gas emissions from this sector increased by 84% from 1990 to 2017. Industrial emissions In 2017, Canadian heavy industry emitted 73 Mt CO2 eq, or 10% of Canada's total greenhouse gas emissions. This represents a 25% drop in emissions in this category since 1990. This trend is consistent with the rapid decline of manufacturing in Canada. Deforestation Canada's deforestation rate is one of the lowest in the world, at 0.02 percent per year, and it has been declining every year since 1985. According to Environment and Climate Change Canada (ECCC), "Harvested wood products" in Canada account for 130 Mt CO2 eq of greenhouse gas emissions. This would represent 18% of the country's emissions in 2017, but ECCC excludes this number from its national total. Also excluded from the total is ECCC's calculation that Canada's forests reduce greenhouse gas emissions by 150 Mt CO2 eq. Before 2015, ECCC used to calculate a 160 Mt CO2 eq reduction from its forests, a sign of their slow but continued deterioration. Impacts on the natural environment In recent decades, Canada has experienced increased average temperatures, increased precipitation, and more extreme weather events. These trends are expected to continue over the next century. ECCC determined it was extremely likely that these changes were the result of increased greenhouse gas emissions driven by human activity. Temperature and weather changes Annual average temperatures in Canada increased by 1.7 °C between 1948 and 2016. These weather changes have not been uniform across regions.
British Columbia, the Prairie provinces and Northern Canada experienced warming the most, with an annual increase of 2.3 °C for northern Canada. Meanwhile, some Maritime areas of southeast Canada experienced average warming of less than 1 °C during the same period. In addition, these trends were not uniform across the seasons. Average winter temperatures rose by 3.3 °C between 1948 and 2016 while average summer temperature only rose by 1.5 °C. According to Environment and Climate Change Canada "warming over the 20th century is indisputable and largely due to human activities" adding "Canada's rate of warming is about twice the global rate: a 2° C increase globally means a 3 to 4 °C increase for Canada". ECCC lists impacts of climate change consistent with global changes. Temperature-related changes include longer growing season, more heatwaves and fewer cold spells, thawing permafrost, earlier river ice break-up, earlier spring runoff, and earlier budding of trees. Meteorological changes include an increase in precipitation and more snowfall in northwest Arctic. Precipitation ECCC summarized annual precipitation changes to support biodiversity assessments by the Canadian Council of Resource Ministers. Evaluating records up to 2007 they observed: "Precipitation has generally increased over Canada since 1950 with the majority of stations with significant trends showing increases. The increasing trend is most coherent over northern Canada where many stations show significant increases. There is not much evidence of clear regional patterns in stations showing significant changes in seasonal precipitation except for significant decreases which tend to be concentrated in the winter season over southwestern and southeastern Canada. While the previous sentence might be technically correct in part, all seasons show increased precipitation in Canada, especially in the Winter, Spring, and Fall months. Also, increasing precipitation over the Arctic appears to be occurring in all seasons except summer." ECCC climate specialists have assessed trends in short-duration rainfall patterns using Engineering Climate Datasets: "Short-duration (5 minutes to 24 hours) rainfall extremes are important for a number of purposes, including engineering infrastructure design, because they represent the different meteorological scales of extreme rainfall events." A "general lack of a detectable trend signal", meaning no overall change in extreme, short-duration rainfall patterns was observed in the single station analysis. In relation to design criteria used for traditional water management and urban drainage design practice (e.g., Intensity-Duration-Frequency (IDF) statistics), the evaluation "shows that fewer than 5.6% and 3.4% of the stations have significant increasing and decreasing trends, respectively, in extreme annual maximum single location observation amounts." On a regional basis, southwest and the east (Newfoundland) coastal regions generally showed significant increasing regional trends for 1- and 2-hour extreme rainfall durations. Decreasing regional trends for 5 to 15 minute rainfall amounts were observed in the St. Lawrence region of southern Quebec and in the Atlantic provinces. Extreme weather events The extreme weather events of greatest concern in Canada include heavy rain and snow falls, heat waves, and drought. 
They are linked to flooding and landslides, water shortages, forest fires, reduced air quality, as well as costs related to damage to property and infrastructure, business disruptions, and increased illness and mortality. Heat waves, including those in the summers of 2009, 2012, and 2021, are associated with increases in heat stroke and respiratory illness. Sea level rise Coastal flooding is expected to increase in many areas of Canada due to global sea-level rise and local land subsidence or uplift. The country's sea level is rising by between 1 and 4.5 mm per year. The areas expected to be hardest hit are southwestern and southeastern Canada. Ecosystems Boreal forest According to Environment Canada's 2011 annual report, there is evidence that temperatures in some regional areas within the western Canadian boreal forest have increased by 2 °C since 1948. The changing climate is leading to drier conditions in the boreal forest, which brings a whole host of subsequent issues. As a result of the rapidly changing climate, trees are migrating to higher latitudes and altitudes (northward), but some species may not be migrating fast enough to follow their climatic habitat. Moreover, trees within the southern limit of their range may begin to show declines in growth. Drier conditions are also leading to a shift from conifers to aspen in more fire and drought-prone areas. Climate change creates more fire-prone conditions in the boreal forest of Canada. In 2016, northern Alberta witnessed the effects of climate change in a dramatic manner when a "perfect storm" of El Niño and global warming contributed to the Fort McMurray wildfire, which led to the evacuation of the oil-producing town at the heart of the tar sands industry. The area has witnessed an increased frequency of wildfires, as Canada's wildfire season now starts a month earlier than it used to and the annual area burned is twice what it was in 1970. In 2023, fires in Canada were estimated to have released 480 megatonnes of carbon, 23% of the world's wildfire-related carbon emissions for the year. By 2024, wildfires in the northwest had shifted from a seasonal occurrence to a year-round phenomenon. As of 2019, climate change had already increased the frequency and intensity of wildfires in Canada, especially in Alberta. "We are seeing climate change in action," says University of Alberta wildland fire Prof. Mike Flannigan. "The Fort McMurray fire was to six times more likely because of climate change. The 2017 record-breaking B.C. fire season was seven to 11 times more likely because of climate change." The mountain pine beetle epidemic raged from 1996 to 2015 as a result of milder winters in the boreal forest, allowing for the proliferation of the parasite. It resulted in 18 million hectares of dead trees and economic impacts for forest-dependent communities. Arctic Annual mean temperature over Northern Canada increased by 2.3 °C (likely range 1.7 °C–3.0 °C), which is approximately three times the global mean warming rate. The strongest rates of warming were observed in the northernmost regions of Yukon and the Northwest Territories, where annual mean temperature increases of about 3.5 °C were observed between 1948 and 2016. Climate change melts sea ice and increases its mobility. In May and June 2017, dense ice – up to 8 metres (25 ft) thick – was present in the waters off the northern coast of Newfoundland, trapping fishing boats and ferries.
Socioeconomic impact Agriculture and food production During the drought of 2002, Ontario had a good season and produced enough crops to send a vast amount of hay to those hit the hardest in Alberta. However, this is not something that can or will be expected every time there is a drought in the Prairie provinces. Droughts cause a great deficit in income for many ranchers, who buy cattle at high prices and are then forced to sell them at very low prices. Historical forecasts strongly indicate that there is no reliable way to estimate the amount of rain to expect for the upcoming growing season, which prevents the agricultural sector from planning accordingly. In Alberta there has been a trend of high summer temperatures and low summer precipitation, which has led much of the province to face drought conditions. Drought conditions are harming the agricultural sector of this province, mainly cattle ranching. When there is a drought there is a shortage of feed for cattle (hay, grain). With this shortage, ranchers are forced to purchase feed at increased prices while they can. Those who cannot afford to pay top prices for feed are forced to sell their herds. Health The Public Health Agency of Canada reported that the number of reported Lyme disease cases in Canada increased from 144 cases in 2009 to 2,025 cases in 2017. Dr. Duncan Webster, an infectious disease consultant at Saint John Regional Hospital, links this increase in disease incidence to the increase in the population of blacklegged ticks. The tick population has increased due largely to shorter winters and warmer temperatures associated with climate change. Indigenous peoples Inuit who reside in Canada are facing significant difficulty maintaining their traditional food systems because of climate change. The Inuit have hunted marine mammals for hundreds of years. Many of their traditional economic transactions and cultural ceremonies were and still are centred around whales and other marine mammals. Climate change is causing the ocean to warm and acidify, negatively impacting these species in these traditional areas and causing many to move elsewhere. While some believe a warming Arctic would cause food insecurity, already a problem for Canadian Inuit, to increase by taking away some of their primary food sources, others point to the resilience they have displayed in the past to changing temperatures and believe they will likely be able to adapt. Although ancestors of the modern Inuit would travel to other places in the Arctic in pursuit of these animals and adapt to changing migration routes, modern geopolitical boundaries and laws would likely prevent this from happening to the extent necessary to preserve these traditional food systems. Regardless of whether they can successfully modify their marine food systems, they will lose certain aspects of their culture. To hunt these whales and other marine mammals, they have used the same traditional tools for generations. Without these animals providing them subsistence, a core part of their culture would become obsolete. The Inuit are also losing their access to ringed seals and polar bears, two key animals that are essential to the traditional Inuit diet. Climate change has led to drastic drops in the ringed seal population, which has led to serious harm to the Inuit subsistence winter economy. The ringed seal is the most prevalent subsistence species in all of Nunavut, with respect to both land and water.
Without the ringed seal, the Inuit would lose their sense of ningiqtuq, or their cultural form of resource sharing. Ringed seal meat is one of the core meats of this type of sharing and has been utilized in this system for hundreds of years. With climate change, ningiqtuq would be drastically altered. Also, the ringed seal embodies the ideals of sharing, unity, and collectivism because of ningiqtuq; its decline signifies a loss of Inuit identity. The polar bear population is also declining because of climate change. Polar bears rely on ringed seals for food, so the two declines are correlated. This decline also harms ningiqtuq, as polar bear meat is shared among Inuit. For the Gwichʼin people, an Athabaskan-speaking First Nation in Canada, caribou are central to their culture. Caribou have coexisted with the Gwichʼin for thousands of years, but caribou numbers are rapidly declining due to warmer temperatures and melting ice. As a result, the Gwichʼin's entire culture is at immediate risk. Sarah James, a prominent Alaskan Gwichʼin activist, said, "We are the caribou people. Caribou are not just what we eat; they are who we are. They are the stories and songs and the whole way we see the world. Caribou are our life. Without caribou, we wouldn't exist." Insurance claims Climate change has led to increased costs from insurance claims from more severe wildfires and storms. Eight of Canada's 10 costliest natural disasters have occurred since 2013, though this ranking does not account for the 2024 floods in Toronto and Montreal, nor a massive hailstorm in Calgary. The most expensive loss has been the 2016 Fort McMurray wildfire, which cost $5.96 billion. Wood industry Climate change causes challenges for the sustainable management and conservation of forests. It will have a direct impact on the productivity of the wood industry, as well as the health and regeneration of trees. The assisted migration of forests has been proposed as a way to help the wood industry adapt to climate change. Mitigation and adaptation Policies and legislation (national level) Chrétien government The Government of Canada Action Plan 2000 on Climate Change was passed by the Chrétien government during the 36th Canadian Parliament as part of its implementation of the 1997 Kyoto Accord. Harper Government (2006–2015) Under the tenure of Stephen Harper, who was Prime Minister from 2006 to 2015, the Clean Air Act was unveiled in October 2006. In 2009, Canada's two largest provinces, Ontario and Quebec, became wary of federal policies shifting the burden of greenhouse gas reductions onto them in order to give Alberta and Saskatchewan more room to further develop their oil sands reserves. In 2010 Graham Saul, who represented the Climate Action Network Canada (CAN) – a coalition of 60 non-governmental organisations – commented on the 40-page CAN report "Troubling Evidence" which claimed that, In 2011 the Kyoto Accord was abandoned. By 2014, the award-winning American-Canadian limnologist David Schindler argued that Harper's administration had put "economic development ahead of all other policy objectives", in particular the environment. Trudeau Government (2015–present) In his 2015 election platform, Justin Trudeau promised to tackle climate change, notably by phasing out fossil fuel subsidies, attending the 2015 Paris Climate Change Conference, developing a North American clean energy and environmental agreement with the United States and Mexico, and creating a $2 billion Low Carbon Economy Trust.
Trudeau made good on the latter three promises. However, he introduced new fossil fuel subsidies during his time in office. Trudeau's foreign affairs minister from 2015 to 2017 was Stéphane Dion, who is known for being very supportive of climate change policies. Catherine McKenna was Trudeau's Minister of Environment and Climate Change from 2015 to 2019. McKenna is known for her legal work surrounding social justice. The Pan-Canadian Framework on Clean Growth and Climate Change, Trudeau's national climate strategy, was released in August 2017. Provincial premiers (except those of Saskatchewan and Manitoba) adopted the proposal on December 9, 2016. The core of the proposal is to implement carbon pricing regimes nationwide. The federal Minister of Environment and Climate Change, Catherine McKenna, stated that carbon taxes have been shown to be the most economical way of reducing emissions. In April 2019, Environment Commissioner Julie Gelfand described the country's lack of progress in reducing emissions as "disturbing" and noted that it was on track to miss its climate change targets. In 2019, Environment and Climate Change Canada (ECCC) released a report called Canada's Changing Climate Report (CCCR). It is essentially a summary of the IPCC 5th Assessment Report, customised for Canada. The report states that coastal flooding is expected to increase in many areas due to global sea-level rise and local land subsidence or uplift. The government of Justin Trudeau promised to step up the targets for the year 2030 and reach carbon neutrality in 2050. In 2020 it introduced a bill that will require the country to reach net-zero emissions by 2050. Even though fossil fuels will be phased out in "the medium term", Trudeau has stated that the Kinder Morgan Pipeline will be built. The federal government has also approved the Woodfibre LNG Terminal in Vancouver. The Trudeau government has introduced a carbon tax. This tax was set at $20 a tonne in 2018 and will increase by $10 a year until it reaches $50 in 2022. It also places levies on natural gas, pump gas, propane, butane, and aviation fuel. Ontario Premier Doug Ford, Alberta Premier Jason Kenney (UCP) and Manitoba Premier Brian Pallister (PC) took the federal government to court on April 15, 2019, and the court ruled in favor (3–2) of the constitutionality of the carbon tax. Following a motion by Prime Minister Justin Trudeau, on June 12, 2019, the House of Commons voted to declare a national climate emergency. In December 2020 the government of Justin Trudeau introduced a bill that will require the country to reach net-zero emissions by 2050 (Climate Change Action Plan 2001). International cooperation Canada is a signatory to the Kyoto Protocol. However, the Liberal government that later signed the accord took little action towards meeting Canada's greenhouse gas emission targets. Although Canada committed itself to a 6% reduction below 1990 levels for the 2008–2012 period as a signatory to the Kyoto Protocol, the country did not implement a plan to reduce greenhouse gas emissions. Soon after the 2006 federal election, the new minority government of Conservative Prime Minister Stephen Harper announced that Canada could not and would not meet its commitments. The House of Commons passed several opposition-sponsored bills calling for government plans for the implementation of emission reduction measures.
Canadian and North American environmental groups feel that Canada lacks credibility on environmental policy and regularly criticize Canada in international venues. In the last few months of 2009, Canada's attitude was criticized at the Asia-Pacific Economic Co-operation (APEC) conference, at the Commonwealth summit, and at the Copenhagen conference. In 2011, Canada, Japan and Russia stated that they would not take on further Kyoto targets. The Canadian government invoked Canada's legal right to formally withdraw from the Kyoto Protocol on December 12, 2011. Canada was committed to cutting its greenhouse emissions to 6% below 1990 levels by 2012, but in 2009 emissions were 17% higher than in 1990. Environment minister Peter Kent cited Canada's liability to "enormous financial penalties" under the treaty unless it withdrew. Canada's decision was strongly criticized by representatives of other ratifying countries, including France and China. Paris Agreement The Paris Agreement is a legally binding international agreement. Its main goal is to limit global warming to well below 2 degrees Celsius, and preferably to 1.5 degrees Celsius, compared to pre-industrial levels. The Nationally Determined Contributions (NDCs) are the plans to fight climate change tailored to each country. Every party to the agreement has different targets based on its own historical climate record and national circumstances, and all of a country's targets are stated in its NDC. The Climate Action Tracker (CAT) is an independent scientific analysis that tracks government climate action and measures it against the goals of the Paris Agreement; it has found Canada's actions to be "insufficient". Policies and legislation (provincial level) Mitigation In the mid-2000s, mitigation measures in some provinces moved forward, though the federal government under Stephen Harper did not develop a federal monitoring and credible reduction regime. Several provincial governments established programs to reduce emissions in their respective territories. These measures were later integrated into the Pan-Canadian Framework on Clean Growth and Climate Change under the premiership of Justin Trudeau. Ontario premier Doug Ford has been very vocal about his opposition to these programs, and abolished them when he came to office in Ontario. He maintains that the federal carbon tax imposed on his province will cause a recession. Economists who have studied the issue do not agree, citing the example of British Columbia, which has had a carbon tax since 2008 without suffering an economic downturn. Alberta Alberta has an established "Climate Change Action Plan", released in 2008. The Specified Gas Emitters Regulation made Alberta the first jurisdiction in North America to have a price on carbon, in 2007; the regulation was renewed to 2017 with increased stringency. It requires "large final emitters", defined as facilities emitting more than 100,000 t CO2 eq per year, to comply with an emission intensity reduction which increases over time and caps at 12% in 2015, 15% in 2016 and 20% in 2017. Facilities have several options for compliance. They may actually make reductions, pay into the Climate Change and Emission Management Fund (CCEMF), purchase credits from other large final emitters or purchase credits from non-large final emitters in the form of offset credits. Criticisms of the intensity-based approach to pricing carbon include the fact that there is no hard cap on emissions, so actual emissions may continue to rise even though carbon has a price.
Benefits of an intensity-based system include the fact that during economic recessions, the carbon intensity reduction will remain equally as stringent and challenging, while hard caps tend to become easily met, irrelevant and do not work to reduce emissions. Alberta has also been criticized that its goals are too weak, and that the measures enacted are not likely to achieve the goals. In 2015, the newly elected government committed to revising the climate change strategy. As of 2008, Alberta's electricity sector was the most carbon-intensive of all Canadian provinces and territories, with total emissions of 55.9 million tonnes of equivalent in 2008, accounting for 47% of all Canadian emissions in the electricity and heat generation sector. In November 2015, Premier Rachel Notley unveiled plans to increase the province's carbon tax to $20 per tonne in 2017, increasing further to $30 per tonne by 2018. This policy shift came about partly because of the rejection of the Keystone XL pipeline, which the premier likened to a "kick in the teeth". The province's new climate policies also include phasing out coal-fired power plants by 2030, and cutting emissions of methane by 45% by 2025. British Columbia BC has announced many ambitious policies to address climate change mitigation, particularly through its Climate Action Plan, released in 2008. It has set legislated greenhouse gas reduction targets of 33% below 2007 levels by 2020 and 80% by 2050. BC's revenue neutral carbon tax is the first of its kind in North America. It was introduced at $10/tonne of COeq in 2008 and has risen by $5/tonne annual increases until it reached $30/tonne in 2012. In 2021, the carbon tax increased from $40/tonne to $45/tonne, and is scheduled to reach $50/tonne in 2022. It is required in legislation that all revenues from the carbon tax are returned to British Columbians through tax cuts in other areas. BC's provincial public sector organizations became the first in North America to be considered carbon neutral in 2010, partly by purchasing carbon offsets. The Clean Energy Vehicles Program provides incentives for the purchase of approved clean energy vehicles and for charging infrastructure installation. There has been action across sectors including financing options and incentives for building retrofits, a Forest Carbon Offset Protocol, a Renewable and Low Carbon Fuel Standard, and landfill gas management regulation. BC's GHG emissions have been going down, and in 2012 (based on 2010 data) BC declared it was within reach of meeting its interim target of a 6% reduction below 2007 levels by 2012. GHG emissions went down by 4.5% between 2007 and 2010, and consumption of all the main fossil fuels are down in BC as well while GDP and population have both been growing. In 2018 it was announced that the province "after stalling on sustained climate action for several years, admitted they could not meet their 2020 target", the 33% reduction target had stalled at 6.5%. Provincially BC is the second-largest consumer of natural gas at 2.3 billion cubic feet per day. Ontario In August 2007, the Ontario government released Go Green: Ontario's Action Plan on Climate Change. The plan established three targets: a 6% reduction in emissions by 2014, 15% by 2020 and 80% by 2050. The government has committed to report annually on the actions it is taking to reduce emissions and adapt to climate change. 
With the initiatives currently in place, the government projects it will achieve 90% of the reductions needed to meet its 2014 target, and only 60% of those needed to meet the 2020 target. The largest emissions reductions to date have come from the phase-out of coal-fired power generation by Ontario Power Generation. In August 2007, the government issued a regulation that required the end of coal burning at Ontario's four remaining coal-fired power plants by the end of 2014. Since 2003, emissions from these plants have dropped from 36.5 Mt to 4.2 Mt. In January 2013, the government announced that coal will be completely phased out one year early, by the end of 2013. The last coal generating station was closed on April 8, 2014, in Thunder Bay. Through the Green Energy and Green Economy Act, 2009 Ontario implemented a feed-in tariff to promote the development of renewable energy generation. Ontario is also a member of the Western Climate Initiative. In January 2013, a discussion paper was posted on the Environmental Registry seeking input on the development of a greenhouse gas emissions reduction program for industry. Over the years, transportation emissions have continued to increase. Growing from 44.8 Mt in 1990 to 59.5 Mt in 2010, transportation is responsible for the largest amount of greenhouse gas emissions in the province. Efforts to reduce these emissions include investing in public transit and providing incentives for the purchase of electric vehicles. The government also recognizes the need for climate change adaptation and, in April 2011, released Climate Ready: Ontario's Adaptation Strategy and Action Plan 2011–2014. As required by the Environmental Bill of Rights, 1993, the Environmental Commissioner of Ontario does an independent review and reports annually to the Legislative Assembly of Ontario on the progress of activities in the province to reduce greenhouse gas emissions. On June 7, 2018, the Progressive Conservative Party of Ontario under Doug Ford was elected to a majority government. Since then there has been a great deal of controversy regarding the environmental policies of his government. Among the changes to environmental policy by Ford's government were the withdrawal of Ontario from the Western Climate Initiative emissions trading system, which had been implemented by the previous Liberal government, and eliminating the office of the Environmental Commissioner of Ontario, a non-partisan officer of the Legislative Assembly of Ontario charged with enforcing Ontario's Environmental Bill of Rights (EBR). The Ford government released a report indicating that the duties of the Environmental Commissioner would be transferred to the Auditor General of Ontario. Other criticisms levelled by Mike Schreiner of the Green Party of Ontario include cuts to the Ministry of the Environment, Conservation and Parks as well as making unspecified changes to the Endangered Species Act. Quebec Greenhouse gas emissions increased by 3.8% in Quebec between 1990 and 2007, to 85.7 megatonnes of CO equivalent before falling to 81.7 in 2015. At 9.9 tonnes per capita, Quebec's emissions are well below the Canadian average (20.1 tonnes) and accounted for 11.1% of Canada's total in 2015. Emissions in the electricity sector spiked in 2007, due to the operation of the TransCanada Energy combined cycle gas turbine in Becancour. 
The generating station, Quebec's largest source of greenhouse gas emissions that year, released 1,687,314 tonnes of CO2 equivalent in 2007, or 72.1% of all emissions from the sector and 2% of total emissions. The plant was closed in 2008, 2009 and 2010. Between 1990 – the reference year of the Kyoto Protocol – and 2006, Quebec's population grew by 9.2% and its GDP by 41.3%. The emission intensity relative to GDP declined by 28.1% during this period, dropping from 4,500 to 3,300 tonnes of CO2 equivalent per million dollars of gross domestic product (GDP). In May 2009, Quebec became the first jurisdiction in the Americas to impose an emissions cap after the Quebec National Assembly passed a bill capping emissions from certain sectors. The move was coordinated with a similar policy in the neighboring province of Ontario and reflects the commitment of both provinces as members of the Western Climate Initiative. On November 23, 2009, the Quebec government pledged to reduce its greenhouse gas emissions by 20% below the 1990 base year level by 2020, a goal similar to that adopted by the European Union. The government intends to achieve its target by promoting public transit, electric vehicles and intermodal freight transport. The plan also calls for the increased use of wood as a building material, energy recovery from biomass, and a land use planning reform. As of 2015, emissions had been reduced by 8.8%. In order to encourage electrification of the transportation sector, Quebec has introduced numerous policies to promote the purchase of electric vehicles. In 2018, the proportion of electric vehicles among all new passenger car sales in Quebec rose to 9.8%. Adaptation Many climate change adaptation policies are within provincial governments' jurisdiction. However, adaptation is currently low on their list of environmental priorities, and most provinces have no climate adaptation plan at all. Assisted migration of forests However, some provinces have implemented assisted colonization policies to guide their forests to their future optimal range. As the climate gets warmer, tree species become less adapted to the conditions of their historical southern or downhill range and more adapted to the climatic conditions of areas north or uphill of their historical range. In the late 2000s and early 2010s, the Canadian provinces of Alberta and British Columbia modified their tree reseeding guidelines to account for this phenomenon. British Columbia even gave the green light for the relocation of a single species, the western larch, 1,000 km northward. Policy assessments According to 2021 data, to give the world a 50% chance of avoiding a temperature rise of 2 degrees or more, Canada would need to increase its climate commitments by 57%. For a 95% chance, it would need to increase them by 160%. To give a 50% chance of staying below 1.5 degrees, Canada would need to increase its commitments by 215%. Society and culture Activism The Canadian Wildlife Federation (CWF), one of the largest conservation organisations in the country, lobbies for climate change mitigation. According to the CWF, the organization recognized the need for action in 1977. It published Checkerspot, a now discontinued biannual climate change magazine. Some Canadian groups have also lobbied for fossil fuel divestment. Public opinion According to a 2020 survey by the Canadian Nuclear Association, climate change concerns Canadians more than any other issue.
In a 2021 survey, Nanos Research found that 30% of Canadians reported that climate change was their top worry, 2nd place behind inflation (36%) and ahead of the COVID-19 pandemic (29%). Canadians think the threat posed by climate change is higher than their United States counterparts do, but slightly below the median opinion of other nations included in a Pew Research Center survey in 2018. However the majority of Canadians in every electoral riding of every province in Canada believe that climate is changing. Rates of acceptance (belief) for ongoing climate change are highest in British Columbia and Quebec, and lowest in the prairie provinces of Alberta and Saskatchewan. In a survey published by the University of Montreal and colleagues, national belief that the earth was warming was at 83%, while 12% of respondents said the earth was not warming. However, when asked if this warming is due to human activity, only 60% of respondents said "yes". These numbers are consistent with a 2015 survey that showed 85% of Canadians believed the earth was warming, while only 61% felt this warming was due to human activity. Canadian public opinion that human activity is responsible for global warming slightly declined overall from 2007 to 2015. When asked whether their province has already felt the effects of climate change, 70% of Canadians responded "yes". This result was based on a majority of respondents in almost all electoral ridings. At the same time, the three ridings in Alberta where opinion was lowest each polled at 49% "yes", which is just below a majority. National support for action to stop climate change sits at 58%, with similar levels of support for either a cap and trade system (58%) or a direct tax on carbon emissions (54%). A December 2018 Ipsos-Reid poll was conducted to gauge the public's opinion of Doug Ford's environmental policies in Ontario. The poll results were as follows: Negative – 45% Positive – 27% Neutral – 28% In 2021, in the midst of the COP26, a poll concluded that 25% of Canadians were of the opinion that international conferences on climate change were useful to fight climate change. A 2012 Canadian poll, found that 32% of Canadians said they believe climate change is happening because of human activity, while 54% said they believe it's because of human activity and partially due to natural climate variation. 9% believe climate change is occurring due to natural climate variation, and only 2% said they do not believe climate change is occurring at all. Media coverage Statistics on greenhouse gas emissions
Physical sciences
Climate change
Earth science
23416874
https://en.wikipedia.org/wiki/Sense
Sense
A sense is a biological system used by an organism for sensation, the process of gathering information about the surroundings through the detection of stimuli. Although, in some cultures, five human senses were traditionally identified as such (namely sight, smell, touch, taste, and hearing), many more are now recognized. Senses used by non-human organisms are even greater in variety and number. During sensation, sense organs collect various stimuli (such as a sound or smell) for transduction, meaning transformation into a form that can be understood by the brain. Sensation and perception are fundamental to nearly every aspect of an organism's cognition, behavior and thought. In organisms, a sensory organ consists of a group of interrelated sensory cells that respond to a specific type of physical stimulus. Via cranial and spinal nerves (nerves of the central and peripheral nervous systems that relay sensory information to and from the brain and body), the different types of sensory receptor cells (such as mechanoreceptors, photoreceptors, chemoreceptors, thermoreceptors) in sensory organs transduct sensory information from these organs towards the central nervous system, finally arriving at the sensory cortices in the brain, where sensory signals are processed and interpreted (perceived). Sensory systems, or senses, are often divided into external (exteroception) and internal (interoception) sensory systems. Human external senses are based on the sensory organs of the eyes, ears, skin, nose, mouth and the vestibular system. Internal sensation detects stimuli from internal organs and tissues. Internal senses possessed by humans include spatial orientation, proprioception (body position) and nociception (pain). Further internal senses lead to signals such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting. Some animals are able to detect electrical and magnetic fields, air moisture, or polarized light, while others sense and perceive through alternative systems, such as echolocation. Sensory modalities or sub modalities are different ways sensory information is encoded or transduced. Multimodality integrates different senses into one unified perceptual experience. For example, information from one sense has the potential to influence how information from another is perceived. Sensation and perception are studied by a variety of related fields, most notably psychophysics, neurobiology, cognitive psychology, and cognitive science. Definitions Sensory organs Sensory organs are organs that sense and transduce stimuli. Humans have various sensory organs (i.e. eyes, ears, skin, nose, and mouth) that correspond to a respective visual system (sense of vision), auditory system (sense of hearing), somatosensory system (sense of touch), olfactory system (sense of smell), and gustatory system (sense of taste). Those systems, in turn, contribute to vision, hearing, touch, smell, and the ability to taste. Internal sensation, or interoception, detects stimuli from internal organs and tissues. Many internal sensory and perceptual systems exist in humans, including the vestibular system (sense of balance) sensed by the inner ear and providing the perception of spatial orientation; proprioception (body position); and nociception (pain). Further internal chemoreception- and osmoreception-based sensory systems lead to various perceptions, such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting. 
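The division of human senses into external (exteroceptive) and internal (interoceptive) systems described above can be summarised as a simple lookup structure. The following Python sketch is purely illustrative: the groupings follow the text, but the exact labels and the helper function are hypothetical choices made only for demonstration, not part of any standard classification.

    # Illustrative grouping of human senses into exteroception and interoception,
    # following the division described in the text (not an exhaustive list).
    EXTEROCEPTIVE = {
        "vision": "eyes",
        "hearing": "ears",
        "touch": "skin",
        "smell": "nose",
        "taste": "mouth",
        "balance": "vestibular system (inner ear)",
    }
    INTEROCEPTIVE = {"proprioception", "nociception", "hunger", "thirst", "suffocation", "nausea"}

    def classify(sense: str) -> str:
        """Return whether a named sense is treated here as exteroceptive or interoceptive."""
        if sense in EXTEROCEPTIVE:
            return "exteroceptive (organ: {})".format(EXTEROCEPTIVE[sense])
        if sense in INTEROCEPTIVE:
            return "interoceptive"
        return "unclassified"

    print(classify("balance"))      # exteroceptive (organ: vestibular system (inner ear))
    print(classify("nociception"))  # interoceptive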
Nonhuman animals experience sensation and perception, with varying levels of similarity to and difference from humans and other animal species. For example, other mammals in general have a stronger sense of smell than humans. Some animal species lack one or more human sensory system analogues and some have sensory systems that are not found in humans, while others process and interpret the same sensory information in very different ways. For example, some animals are able to detect electrical fields and magnetic fields, air moisture, or polarized light. Others sense and perceive through alternative systems such as echolocation. Recent theory suggests that plants and artificial agents such as robots may be able to detect and interpret environmental information in an analogous manner to animals. Sensory modalities Sensory modality refers to the way that information is encoded, which is similar to the idea of transduction. The main sensory modalities can be described on the basis of how each is transduced. Listing all the different sensory modalities, which can number as many as 17, involves separating the major senses into more specific categories, or submodalities, of the larger sense. An individual sensory modality represents the sensation of a specific type of stimulus. For example, the general sensation and perception of touch, which is known as somatosensation, can be separated into light pressure, deep pressure, vibration, itch, pain, temperature, or hair movement, while the general sensation and perception of taste can be separated into submodalities of sweet, salty, sour, bitter, spicy, and umami, all of which are based on different chemicals binding to sensory neurons. Receptors Sensory receptors are the cells or structures that detect sensations. Stimuli in the environment activate specialized receptor cells in the peripheral nervous system. During transduction, physical stimulus is converted into action potential by receptors and transmitted towards the central nervous system for processing. Different types of stimuli are sensed by different types of receptor cells. Receptor cells can be classified into types on the basis of three different criteria: cell type, position, and function. Receptors can be classified structurally on the basis of cell type and their position in relation to stimuli they sense. Receptors can further be classified functionally on the basis of the transduction of stimuli, or how the mechanical stimulus, light, or chemical changed the cell membrane potential. Structural receptor types Location One way to classify receptors is based on their location relative to the stimuli. An exteroceptor is a receptor that is located near a stimulus of the external environment, such as the somatosensory receptors that are located in the skin. An interoceptor is one that interprets stimuli from internal organs and tissues, such as the receptors that sense the increase in blood pressure in the aorta or carotid sinus. Cell type The cells that interpret information about the environment can be either (1) a neuron that has a free nerve ending, with dendrites embedded in tissue that would receive a sensation; (2) a neuron that has an encapsulated ending in which the sensory nerve endings are encapsulated in connective tissue that enhances their sensitivity; or (3) a specialized receptor cell, which has distinct structural components that interpret a specific type of stimulus. 
The pain and temperature receptors in the dermis of the skin are examples of neurons that have free nerve endings (1). Also located in the dermis of the skin are lamellated corpuscles, neurons with encapsulated nerve endings that respond to pressure and touch (2). The cells in the retina that respond to light stimuli are an example of a specialized receptor (3), a photoreceptor. A transmembrane protein receptor is a protein in the cell membrane that mediates a physiological change in a neuron, most often through the opening of ion channels or changes in the cell signaling processes. Transmembrane receptors are activated by chemicals called ligands. For example, a molecule in food can serve as a ligand for taste receptors. Other transmembrane proteins, which are not accurately called receptors, are sensitive to mechanical or thermal changes. Physical changes in these proteins increase ion flow across the membrane, and can generate an action potential or a graded potential in the sensory neurons. Functional receptor types A third classification of receptors is by how the receptor transduces stimuli into membrane potential changes. Stimuli are of three general types. Some stimuli are ions and macromolecules that affect transmembrane receptor proteins when these chemicals diffuse across the cell membrane. Some stimuli are physical variations in the environment that affect receptor cell membrane potentials. Other stimuli include the electromagnetic radiation from visible light. For humans, the only electromagnetic energy that is perceived by our eyes is visible light. Some other organisms have receptors that humans lack, such as the heat sensors of snakes, the ultraviolet light sensors of bees, or magnetic receptors in migratory birds. Receptor cells can be further categorized on the basis of the type of stimuli they transduce. The different types of functional receptor cell types are mechanoreceptors, photoreceptors, chemoreceptors (osmoreceptor), thermoreceptors, electroreceptors (in certain mammals and fish), and nociceptors. Physical stimuli, such as pressure and vibration, as well as the sensation of sound and body position (balance), are interpreted through a mechanoreceptor. Photoreceptors convert light (visible electromagnetic radiation) into signals. Chemical stimuli can be interpreted by a chemoreceptor that interprets chemical stimuli, such as an object's taste or smell, while osmoreceptors respond to a chemical solute concentrations of body fluids. Nociception (pain) interprets the presence of tissue damage, from sensory information from mechano-, chemo-, and thermoreceptors. Another physical stimulus that has its own type of receptor is temperature, which is sensed through a thermoreceptor that is either sensitive to temperatures above (heat) or below (cold) normal body temperature. Thresholds Absolute threshold Each sense organ (eyes or nose, for instance) requires a minimal amount of stimulation in order to detect a stimulus. This minimum amount of stimulus is called the absolute threshold. The absolute threshold is defined as the minimum amount of stimulation necessary for the detection of a stimulus 50% of the time. Absolute threshold is measured by using a method called signal detection. This process involves presenting stimuli of varying intensities to a subject in order to determine the level at which the subject can reliably detect stimulation in a given sense. 
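As a rough illustration of how an absolute threshold could be estimated from the procedure just described (presenting stimuli of varying intensities and finding the level detected 50% of the time), the Python sketch below interpolates the intensity at which detection reaches 50%. The data values, units and function name are hypothetical and chosen only to demonstrate the idea, not taken from any particular experiment.

    # Hypothetical yes/no detection data: (stimulus intensity, proportion of trials detected).
    trials = [
        (1.0, 0.05),
        (2.0, 0.20),
        (3.0, 0.45),
        (4.0, 0.70),
        (5.0, 0.90),
    ]

    def absolute_threshold(data, criterion=0.5):
        """Estimate the intensity detected `criterion` (here 50%) of the time by linear interpolation."""
        data = sorted(data)
        for (x0, p0), (x1, p1) in zip(data, data[1:]):
            if p0 <= criterion <= p1:
                # Interpolate between the two intensities that bracket the criterion.
                return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
        raise ValueError("criterion not bracketed by the data")

    print(round(absolute_threshold(trials), 2))  # 3.2

In practice psychophysicists fit a smooth psychometric function rather than interpolating linearly, but the 50%-point idea is the same.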
Differential threshold Differential threshold or just noticeable difference (JND) is the smallest detectable difference between two stimuli, or the smallest difference in stimuli that can be judged to be different from each other. Weber's Law is an empirical law that states that the difference threshold is a constant fraction of the comparison stimulus. According to Weber's Law, bigger stimuli require larger differences to be noticed. Magnitude estimation is a psychophysical method in which subjects assign perceived values to given stimuli. The relationship between stimulus intensity and perceptive intensity is described by Stevens's power law. Signal detection theory Signal detection theory quantifies the experience of the subject in response to the presentation of a stimulus in the presence of noise. There is internal noise and there is external noise when it comes to signal detection. The internal noise originates from static in the nervous system. For example, an individual with closed eyes in a dark room still sees something—a blotchy pattern of grey with intermittent brighter flashes—this is internal noise. External noise is the result of noise in the environment that can interfere with the detection of the stimulus of interest. Noise is only a problem if the magnitude of the noise is large enough to interfere with signal collection. The nervous system calculates a criterion, or an internal threshold, for the detection of a signal in the presence of noise. If a signal is judged to be above the criterion, and is thus differentiated from the noise, the signal is sensed and perceived. Errors in signal detection can potentially lead to false positives and false negatives. The sensory criterion might be shifted based on the importance of detecting the signal. Shifting of the criterion may influence the likelihood of false positives and false negatives. Private perceptive experience Subjective visual and auditory experiences appear to be similar across human subjects. The same cannot be said about taste. For example, there is a molecule called propylthiouracil (PROP) that some humans experience as bitter, some as almost tasteless, while others experience it as somewhere between tasteless and bitter. There is a genetic basis for this difference in perception given the same sensory stimulus. This subjective difference in taste perception has implications for individuals' food preferences, and consequently, health. Sensory adaptation When a stimulus is constant and unchanging, perceptual sensory adaptation occurs. During that process, the subject becomes less sensitive to the stimulus. Fourier analysis Biological auditory (hearing), vestibular and spatial, and visual systems (vision) appear to break down real-world complex stimuli into sine wave components, through the mathematical process called Fourier analysis. Many neurons have a strong preference for certain sine frequency components in contrast to others. The way that simpler sounds and images are encoded during sensation can provide insight into how perception of real-world objects happens. Sensory neuroscience and the biology of perception Perception occurs when nerves that lead from the sensory organs (e.g. eye) to the brain are stimulated, even if that stimulation is unrelated to the target signal of the sensory organ. 
For example, in the case of the eye, it does not matter whether light or something else stimulates the optic nerve; that stimulation will result in visual perception, even if there was no visual stimulus to begin with. (To prove this point to yourself (and if you are a human), close your eyes (preferably in a dark room) and press gently on the outside corner of one eye through the eyelid. You will see a visual spot toward the inside of your visual field, near your nose.) Sensory nervous system All stimuli received by the receptors are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area (cortex) of the brain. Just as different nerves are dedicated to sensory and motor tasks, different areas of the brain (cortices) are similarly dedicated to different sensory and perceptual tasks. More complex processing is accomplished across cortical regions that spread beyond the primary cortices. Every nerve, sensory or motor, has its own signal transmission speed. For example, nerves in the frog's legs have a 90 ft/s (99 km/h) signal transmission speed, while sensory nerves in humans transmit sensory information at speeds between 165 ft/s (181 km/h) and 330 ft/s (362 km/h). Multimodal perception Perceptual experience is often multimodal. Multimodality integrates different senses into one unified perceptual experience. Information from one sense has the potential to influence how information from another is perceived. Multimodal perception is qualitatively different from unimodal perception. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception. Philosophy The philosophy of perception is concerned with the nature of perceptual experience and the status of perceptual data, in particular how they relate to beliefs about, or knowledge of, the world. Historical inquiries into the underlying mechanisms of sensation and perception have led early researchers to subscribe to various philosophical interpretations of perception and the mind, including panpsychism, dualism, and materialism. The majority of modern scientists who study sensation and perception take a materialistic view of the mind. Human sensation General Absolute threshold Some examples of human absolute thresholds for the nine to 21 external senses. Multimodal perception Humans respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration. Neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus. Additionally, multimodal "what" and "where" pathways have been proposed for auditory and tactile stimuli. External External receptors that respond to stimuli from outside the body are called exteroceptors. Human external sensation is based on the sensory organs of the eyes, ears, skin, vestibular system, nose, and mouth, which contribute, respectively, to the sensory perceptions of vision, hearing, touch, balance, smell, and taste. Smell and taste are both responsible for identifying molecules and thus both are types of chemoreceptors. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials. Visual system (vision) The visual system, or sense of sight, is based on the transduction of light stimuli received through the eyes and contributes to visual perception. 
The visual system detects light on photoreceptors in the retina of each eye that generates electrical nerve impulses for the perception of varying colors and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light. At the molecular level, visual stimuli cause changes in the photopigment molecule that lead to changes in membrane potential of the photoreceptor cell. A single unit of light is called a photon, which is described in physics as a packet of energy with properties of both a particle and a wave. The energy of a photon is represented by its wavelength, with each wavelength of visible light corresponding to a particular color. Visible light is electromagnetic radiation with a wavelength between 380 and 720 nm. Wavelengths of electromagnetic radiation longer than 720 nm fall into the infrared range, whereas wavelengths shorter than 380 nm fall into the ultraviolet range. Light with a wavelength of 380 nm is blue whereas light with a wavelength of 720 nm is dark red. All other colors fall between red and blue at various points along the wavelength scale. The three types of cone opsins, being sensitive to different wavelengths of light, provide us with color vision. By comparing the activity of the three different cones, the brain can extract color information from visual stimuli. For example, a bright blue light that has a wavelength of approximately 450 nm would activate the "red" cones minimally, the "green" cones marginally, and the "blue" cones predominantly. The relative activation of the three different cones is calculated by the brain, which perceives the color as blue. However, cones cannot react to low-intensity light, and rods do not sense the color of light. Therefore, our low-light vision is—in essence—in grayscale. In other words, in a dark room, everything appears as a shade of gray. If you think that you can see colors in the dark, it is most likely because your brain knows what color something is and is relying on that memory. There is some disagreement as to whether the visual system consists of one, two, or three submodalities. Neuroanatomists generally regard it as two submodalities, given that different receptors are responsible for the perception of color and brightness. Some argue that stereopsis, the perception of depth using both eyes, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain where patterns and objects in images are recognized and interpreted based on previously learned information. This is called visual memory. The inability to see is called blindness. Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications. People who are blind from degradation or damage to the visual cortex, but still have functional eyes, are actually capable of some level of vision and reaction to visual stimuli but not a conscious perception; this is known as blindsight. People with blindsight are usually not aware that they are reacting to visual sources, and instead just unconsciously adapt their behavior to the stimulus. 
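The comparison of cone activity described earlier in this section lends itself to a small numerical illustration. In the Python sketch below each cone type is modelled as a Gaussian sensitivity curve and the relative responses are compared for a given wavelength; the peak wavelengths and the shared bandwidth are rough, illustrative values (not measured human cone fundamentals), and the function name is hypothetical.

    import math

    # Illustrative cone sensitivity peaks (nm) and a common bandwidth; values are approximate.
    CONE_PEAKS = {"S (blue)": 440.0, "M (green)": 535.0, "L (red)": 565.0}
    BANDWIDTH = 40.0  # standard deviation of the toy Gaussian curves, in nm

    def cone_responses(wavelength_nm: float) -> dict:
        """Relative activation of each toy cone type for a monochromatic light."""
        return {
            name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * BANDWIDTH ** 2))
            for name, peak in CONE_PEAKS.items()
        }

    # A ~450 nm light activates the S ("blue") cones strongly, the M ("green") cones
    # only marginally, and the L ("red") cones minimally, as described in the text.
    for name, r in cone_responses(450.0).items():
        print(f"{name}: {r:.3f}")

Comparing the three numbers produced for a given wavelength is, in caricature, the kind of ratio comparison from which the brain extracts colour information.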
On February 14, 2013, researchers developed a neural implant that gives rats the ability to sense infrared light which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities. Visual perception in psychology According to Gestalt psychology, people perceive the whole of something even if it is not there. The Gestalt's Law of Organization states that people have seven factors that help to group what is seen into patterns or groups: Common Fate, Similarity, Proximity, Closure, Symmetry, Continuity, and Past Experience. The Law of Common fate says that objects are led along the smoothest path. People follow the trend of motion as the lines/dots flow. The Law of Similarity refers to the grouping of images or objects that are similar to each other in some aspect. This could be due to shade, colour, size, shape, or other qualities you could distinguish. The Law of Proximity states that our minds like to group based on how close objects are to each other. We may see 42 objects in a group, but we can also perceive three groups of two lines with seven objects in each line. The Law of Closure is the idea that we as humans still see a full picture even if there are gaps within that picture. There could be gaps or parts missing from a section of a shape, but we would still perceive the shape as whole. The Law of Symmetry refers to a person's preference to see symmetry around a central point. An example would be when we use parentheses in writing. We tend to perceive all of the words in the parentheses as one section instead of individual words within the parentheses. The Law of Continuity tells us that objects are grouped together by their elements and then perceived as a whole. This usually happens when we see overlapping objects. We will see the overlapping objects with no interruptions. The Law of Past Experience refers to the tendency humans have to categorize objects according to past experiences under certain circumstances. If two objects are usually perceived together or within close proximity of each other the Law of Past Experience is usually seen. Auditory system (hearing) Hearing, or audition, is the transduction of sound waves into a neural signal that is made possible by the structures of the ear. The large, fleshy structure on the lateral aspect of the head is known as the auricle. At the end of the auditory canal is the tympanic membrane, or ear drum, which vibrates after it is struck by sound waves. The auricle, ear canal, and tympanic membrane are often referred to as the external ear. The middle ear consists of a space spanned by three small bones called the ossicles. The three ossicles are the malleus, incus, and stapes, which are Latin names that roughly translate to hammer, anvil, and stirrup. The malleus is attached to the tympanic membrane and articulates with the incus. The incus, in turn, articulates with the stapes. The stapes is then attached to the inner ear, where the sound waves will be transduced into a neural signal. The middle ear is connected to the pharynx through the Eustachian tube, which helps equilibrate air pressure across the tympanic membrane. The tube is normally closed but will pop open when the muscles of the pharynx contract during swallowing or yawning. Mechanoreceptors turn motion into electrical nerve pulses, which are located in the inner ear. 
Since sound is vibration, propagating through a medium such as air, the detection of these vibrations, that is the sense of the hearing, is a mechanical sense because these vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz, with substantial variation between individuals. Hearing at high frequencies declines with an increase in age. Inability to hear is called deafness or hearing impairment. Sound can also be detected as vibrations conducted through the body. Lower frequencies that can be heard are detected this way. Some deaf people are able to determine the direction and location of vibrations picked up through the feet. Studies pertaining to audition started to increase in number towards the latter end of the nineteenth century. During this time, many laboratories in the United States began to create new models, diagrams, and instruments that all pertained to the ear. Auditory cognitive psychology is a branch of cognitive psychology that is dedicated to the auditory system. The main point is to understand why humans are able to use sound in thinking outside of actually saying it. Relating to auditory cognitive psychology is psychoacoustics. Psychoacoustics is more directed at people interested in music. Haptics, a word used to refer to both taction and kinesthesia, has many parallels with psychoacoustics. Most research around these two are focused on the instrument, the listener, and the player of the instrument. Somatosensory system (touch) Somatosensation is considered a general sense, as opposed to the special senses discussed in this section. Somatosensation is the group of sensory modalities that are associated with touch and interoception. The modalities of somatosensation include pressure, vibration, light touch, tickle, itch, temperature, pain, kinesthesia. Somatosensation, also called tactition (adjectival form: tactile) is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord. The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary. Two types of somatosensory signals that are transduced by free nerve endings are pain and temperature. These two modalities use thermoreceptors and nociceptors to transduce temperature and pain stimuli, respectively. Temperature receptors are stimulated when local temperatures differ from body temperature. Some thermoreceptors are sensitive to just cold and others to just heat. Nociception is the sensation of potentially damaging stimuli. Mechanical, chemical, or thermal stimuli beyond a set threshold will elicit painful sensations. Stressed or damaged tissues release chemicals that activate receptor proteins in the nociceptors. For example, the sensation of heat associated with spicy foods involves capsaicin, the active molecule in hot peppers. Low frequency vibrations are sensed by mechanoreceptors called Merkel cells, also known as type I cutaneous mechanoreceptors. 
Merkel cells are located in the stratum basale of the epidermis. Deep pressure and vibration is transduced by lamellated (Pacinian) corpuscles, which are receptors with encapsulated endings found deep in the dermis, or subcutaneous tissue. Light touch is transduced by the encapsulated endings known as tactile (Meissner) corpuscles. Follicles are also wrapped in a plexus of nerve endings known as the hair follicle plexus. These nerve endings detect the movement of hair at the surface of the skin, such as when an insect may be walking along the skin. Stretching of the skin is transduced by stretch receptors known as bulbous corpuscles. Bulbous corpuscles are also known as Ruffini corpuscles, or type II cutaneous mechanoreceptors. The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature. Gustatory system (taste) The gustatory system or the sense of taste is the sensory system that is partially responsible for the perception of taste (flavor). A few recognized submodalities exist within taste: sweet, salty, sour, bitter, and umami. Very recent research has suggested that there may also be a sixth taste submodality for fats, or lipids. The sense of taste is often confused with the perception of flavor, which is the results of the multimodal integration of gustatory (taste) and olfactory (smell) sensations. Within the structure of the lingual papillae are taste buds that contain specialized gustatory receptor cells for the transduction of taste stimuli. These receptor cells are sensitive to the chemicals contained within foods that are ingested, and they release neurotransmitters based on the amount of the chemical in the food. Neurotransmitters from the gustatory cells can activate sensory neurons in the facial, glossopharyngeal, and vagus cranial nerves. Salty and sour taste submodalities are triggered by the cations and , respectively. The other taste modalities result from food molecules binding to a G protein–coupled receptor. A G protein signal transduction system ultimately leads to depolarization of the gustatory cell. The sweet taste is the sensitivity of gustatory cells to the presence of glucose (or sugar substitutes) dissolved in the saliva. Bitter taste is similar to sweet in that food molecules bind to G protein–coupled receptors. The taste known as umami is often referred to as the savory taste. Like sweet and bitter, it is based on the activation of G protein–coupled receptors by a specific molecule. Once the gustatory cells are activated by the taste molecules, they release neurotransmitters onto the dendrites of sensory neurons. These neurons are part of the facial and glossopharyngeal cranial nerves, as well as a component within the vagus nerve dedicated to the gag reflex. The facial nerve connects to taste buds in the anterior third of the tongue. The glossopharyngeal nerve connects to taste buds in the posterior two thirds of the tongue. The vagus nerve connects to taste buds in the extreme posterior of the tongue, verging on the pharynx, which are more sensitive to noxious stimuli such as bitterness. Flavor depends on odor, texture, and temperature as well as on taste. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. 
Other tastes such as calcium and free fatty acids may also be basic tastes but have yet to receive widespread acceptance. The inability to taste is called ageusia. There is a rare phenomenon when it comes to the gustatory sense. It is called lexical-gustatory synesthesia. Lexical-gustatory synesthesia is when people can "taste" words. They have reported having flavor sensations they are not actually eating. When they read words, hear words, or even imagine words. They have reported not only simple flavors, but textures, complex flavors, and temperatures as well. Olfactory system (smell) Like the sense of taste, the sense of smell, or the olfactory system, is also responsive to chemical stimuli. Unlike taste, there are hundreds of olfactory receptors (388 functional ones according to one 2003 study), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what humans perceive as the molecule's smell. The olfactory receptor neurons are located in a small region within the superior nasal cavity. This region is referred to as the olfactory epithelium and contains bipolar sensory neurons. Each olfactory sensory neuron has dendrites that extend from the apical surface of the epithelium into the mucus lining the cavity. As airborne molecules are inhaled through the nose, they pass over the olfactory epithelial region and dissolve into the mucus. These odorant molecules bind to proteins that keep them dissolved in the mucus and help transport them to the olfactory dendrites. The odorant–protein complex binds to a receptor protein within the cell membrane of an olfactory dendrite. These receptors are G protein–coupled, and will produce a graded membrane potential in the olfactory neurons. In the brain, olfaction is processed by the olfactory cortex. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones. Loss of the sense of smell can result in food tasting bland. A person with an impaired sense of smell may require additional spice and seasoning levels for food to be tasted. Anosmia may also be related to some presentations of mild depression, because the loss of enjoyment of food may lead to a general sense of despair. The ability of olfactory neurons to replace themselves decreases with age, leading to age-related anosmia. This explains why some elderly people salt their food more than younger people do. Vestibular system (balance) The vestibular sense, or sense of balance (equilibrium), is the sense that contributes to the perception of balance (equilibrium), spatial orientation, direction, or acceleration (equilibrioception). Along with audition, the inner ear is responsible for encoding information about equilibrium. A similar mechanoreceptor—a hair cell with stereocilia—senses head position, head movement, and whether our bodies are in motion. These cells are located within the vestibule of the inner ear. Head position is sensed by the utricle and saccule, whereas head movement is sensed by the semicircular canals. The neural signals generated in the vestibular ganglion are transmitted through the vestibulocochlear nerve to the brain stem and cerebellum. The semicircular canals are three ring-like extensions of the vestibule. 
One is oriented in the horizontal plane, whereas the other two are oriented in the vertical plane. The anterior and posterior vertical canals are oriented at approximately 45 degrees relative to the sagittal plane. The base of each semicircular canal, where it meets with the vestibule, connects to an enlarged region known as the ampulla. The ampulla contains the hair cells that respond to rotational movement, such as turning the head while saying "no". The stereocilia of these hair cells extend into the cupula, a membrane that attaches to the top of the ampulla. As the head rotates in a plane parallel to the semicircular canal, the fluid lags, deflecting the cupula in the direction opposite to the head movement. The semicircular canals contain several ampullae, with some oriented horizontally and others oriented vertically. By comparing the relative movements of both the horizontal and vertical ampullae, the vestibular system can detect the direction of most head movements within three-dimensional (3D) space. The vestibular nerve conducts information from sensory receptors in three ampullae that sense motion of fluid in three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (which are small crystals of calcium carbonate) that provide the inertia needed to detect head rotation, linear acceleration, and the direction of gravitational force. Internal An internal sensation and perception also known as interoception is "any sense that is normally stimulated from within the body". These involve numerous sensory receptors in internal organs. Interoception is thought to be atypical in clinical conditions such as alexithymia. Specific receptors include: Hunger is governed by a set of brain structures (e.g., the hypothalamus) that are responsible for energy homeostasis. Pulmonary stretch receptors are found in the lungs and control the respiratory rate. Peripheral chemoreceptors in the brain monitor the carbon dioxide and oxygen levels in the brain to give a perception of suffocation if carbon dioxide levels get too high. The chemoreceptor trigger zone is an area of the medulla in the brain that receives inputs from blood-borne drugs or hormones, and communicates with the vomiting center. Chemoreceptors in the circulatory system also measure salt levels and prompt thirst if they get too high; they can also respond to high blood sugar levels in diabetics. Cutaneous receptors in the skin not only respond to touch, pressure, temperature and vibration, but also respond to vasodilation in the skin such as blushing. Stretch receptors in the gastrointestinal tract sense gas distension that may result in colic pain. Stimulation of sensory receptors in the esophagus result in sensations felt in the throat when swallowing, vomiting, or during acid reflux. Sensory receptors in pharynx mucosa, similar to touch receptors in the skin, sense foreign objects such as mucus and food that may result in a gag reflex and corresponding gagging sensation. Stimulation of sensory receptors in the urinary bladder and rectum may result in perceptions of fullness. Stimulation of stretch sensors that sense dilation of various blood vessels may result in pain, for example headache caused by vasodilation of brain arteries. Cardioception refers to the perception of the activity of the heart. 
Opsins and direct DNA damage in melanocytes and keratinocytes can sense ultraviolet radiation, which plays a role in pigmentation and sunburn. Baroreceptors relay blood pressure information to the brain and maintain proper homeostatic blood pressure. The perception of time is also sometimes called a sense, though not tied to a specific receptor. Non-human animal sensation and perception Human analogues Other living organisms have receptors to sense the world around them, including many of the senses listed above for humans. However, the mechanisms and capabilities vary widely. Smell An example of smell in non-mammals is that of sharks, which combine their keen sense of smell with timing to determine the direction of a smell. They follow the nostril that first detected the smell. Insects have olfactory receptors on their antennae. Although it is unknown to the degree and magnitude which non-human mammals can smell better than humans, humans are known to have far fewer olfactory receptors than mice, and humans have also accumulated more genetic mutations in their olfactory receptors than other primates. Vomeronasal organ Many animals (salamanders, reptiles, mammals) have a vomeronasal organ that is connected with the mouth cavity. In mammals it is mainly used to detect pheromones of marked territory, trails, and sexual state. Reptiles like snakes and monitor lizards make extensive use of it as a smelling organ by transferring scent molecules to the vomeronasal organ with the tips of the forked tongue. In reptiles, the vomeronasal organ is commonly referred to as Jacobson's organ. In mammals, it is often associated with a special behavior called flehmen characterized by uplifting of the lips. The organ is vestigial in humans, because associated neurons have not been found that give any sensory input in humans. Taste Flies and butterflies have taste organs on their feet, allowing them to taste anything they land on. Catfish have taste organs across their entire bodies, and can taste anything they touch, including chemicals in the water. Vision Cats have the ability to see in low light, which is due to muscles surrounding their irides–which contract and expand their pupils–as well as to the tapetum lucidum, a reflective membrane that optimizes the image. Pit vipers, pythons and some boas have organs that allow them to detect infrared light, such that these snakes are able to sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose. It has been found that birds and some other animals are tetrachromats and have the ability to see in the ultraviolet down to 300 nanometers. Bees and dragonflies are also able to see in the ultraviolet. Mantis shrimps can perceive both polarized light and multispectral images and have twelve distinct kinds of color receptors, unlike humans which have three kinds and most mammals which have two kinds. Cephalopods have the ability to change color using chromatophores in their skin. Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes. Other researchers hypothesize that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into color vision, explaining pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colorful mating displays. 
Some cephalopods can distinguish the polarization of light. Spatial orientation Many invertebrates have a statocyst, which is a sensor for acceleration and orientation that works very differently from the mammalian's semi-circular canals. Non-human analogues In addition, some animals have senses that humans lack. Magnetoception Magnetoception (or magnetoreception) is the ability to detect the direction one is facing based on the Earth's magnetic field. Directional awareness is most commonly observed in birds, which rely on their magnetic sense to navigate during migration. It has also been observed in insects such as bees. Cattle make use of magnetoception to align themselves in a north–south direction. Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their orientation relative to the Earth's magnetic field. There has been some recent (tentative) research suggesting that the rhodopsin in the human eye, which responds particularly well to blue light, can facilitate magnetoception in humans. Echolocation Certain animals, including bats and cetaceans, have the ability to determine orientation to other objects through interpretation of reflected sound (like sonar). They most often use this to navigate through poor lighting conditions or to identify and track prey. There is currently an uncertainty whether this is simply an extremely developed post-sensory interpretation of auditory perceptions or it actually constitutes a separate sense. Resolution of the issue will require brain scans of animals while they actually perform echolocation, a task that has proven difficult in practice. Blind people report they are able to navigate and in some cases identify an object by interpreting reflected sounds (especially their own footsteps), a phenomenon known as human echolocation. Electroreception Electroreception (or electroception) is the ability to detect electric fields. Several species of fish, sharks, and rays have the capacity to sense changes in electric fields in their immediate vicinity. For cartilaginous fish this occurs through a specialized organ called the ampullae of Lorenzini. Some fish passively sense changing nearby electric fields; some generate their own weak electric fields, and sense the pattern of field potentials over their body surface; and some use these electric field generating and sensing capacities for social communication. The mechanisms by which electroceptive fish construct a spatial representation from very small differences in field potentials involve comparisons of spike latencies from different parts of the fish's body. The only orders of mammals that are known to demonstrate electroception are the dolphin and monotreme orders. Among these mammals, the platypus has the most acute sense of electroception. A dolphin can detect electric fields in water using electroreceptors in vibrissal crypts arrayed in pairs on its snout and which evolved from whisker motion sensors. These electroreceptors can detect electric fields as weak as 4.6 microvolts per centimeter, such as those generated by contracting muscles and pumping gills of potential prey. This permits the dolphin to locate prey from the seafloor where sediment limits visibility and echolocation. Spiders have been shown to detect electric fields to determine a suitable time to extend web for 'ballooning'. Body modification enthusiasts have experimented with magnetic implants to attempt to replicate this sense. 
However, in general humans (and it is presumed other mammals) can detect electric fields only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for instance, will exert a force on human arm hairs, which can be felt through tactition and identified as coming from a static charge (and not from wind or the like). This is not electroreception, as it is a post-sensory cognitive action. Hygroreception Hygroreception is the ability to detect changes in the moisture content of the environment. Infrared sensing The ability to sense infrared thermal radiation evolved independently in various families of snakes. Essentially, it allows these reptiles to "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes. It was previously thought that the organs evolved primarily as prey detectors, but it is now believed that it may also be used in thermoregulatory decision making. The facial pit underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons. The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. The organ is used extensively to detect and target endothermic prey such as rodents and birds, and it was previously assumed that the organ evolved specifically for that purpose. However, recent evidence shows that the pit organ may also be used for thermoregulation. According to Krochmal et al., pitvipers can use their pits for thermoregulatory decision-making while true vipers (vipers who do not contain heat-sensing pits) cannot. In spite of its detection of IR light, the pits' IR detection mechanism is not similar to photoreceptors – while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is in fact a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light. This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, as well as vascularize the pit membrane in order to rapidly cool the ion channel back to its original "resting" or "inactive" temperature. Other Pressure detection uses the organ of Weber, a system consisting of three appendages of vertebrae transferring changes in shape of the gas bladder to the middle ear. It can be used to regulate the buoyancy of the fish. Fish like the weather fish and other loaches are also known to respond to low pressure areas but they lack a swim bladder. Current detection is a detection system of water currents, consisting mostly of vortices, found in the lateral line of fish and aquatic forms of amphibians. The lateral line is also sensitive to low-frequency vibrations. 
The mechanoreceptors are hair cells, the same mechanoreceptors for vestibular sense and hearing. It is used primarily for navigation, hunting, and schooling. The receptors of the electrical sense are modified hair cells of the lateral line system. Polarized light direction/detection is used by bees to orient themselves, especially on cloudy days. Cuttlefish, some beetles, and mantis shrimp can also perceive the polarization of light. Most sighted humans can in fact learn to roughly detect large areas of polarization by an effect called Haidinger's brush; however, this is considered an entoptic phenomenon rather than a separate sense. Slit sensillae of spiders detect mechanical strain in the exoskeleton, providing information on force and vibrations. Plant sensation By using a variety of sense receptors, plants sense light, temperature, humidity, chemical substances, chemical gradients, reorientation, magnetic fields, infections, tissue damage and mechanical pressure. The absence of a nervous system notwithstanding, plants interpret and respond to these stimuli by a variety of hormonal and cell-to-cell communication pathways that result in movement, morphological changes and physiological state alterations at the organism level, that is, result in plant behavior. Such physiological and cognitive functions are generally not believed to give rise to mental phenomena or qualia, however, as these are typically considered the product of nervous system activity. The emergence of mental phenomena from the activity of systems functionally or computationally analogous to that of nervous systems is, however, a hypothetical possibility explored by some schools of thought in the philosophy of mind field, such as functionalism and computationalism. However, plants can perceive the world around them, and might be able to emit airborne sounds similar to "screaming" when stressed. Those noises could not be detectable by human ears, but organisms with a hearing range that can hear ultrasonic frequencies—like mice, bats or perhaps other plants—could hear the plants' cries from as far as away. Artificial sensation and perception Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them. Computers take in and respond to their environment through attached hardware. Until recently, input was limited to a keyboard, joystick or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans. Culture In the time of William Shakespeare, there were commonly reckoned to be five wits or five senses. At that time, the words "sense" and "wit" were synonyms, so the senses were known as the five outward wits. This traditional concept of five senses is common today. The traditional five senses are enumerated as the "five material faculties" () in Hindu literature. They appear in allegorical representation as early as in the Katha Upanishad (roughly 6th century BC), as five horses drawing the "chariot" of the body, guided by the mind as "chariot driver". Depictions of the five traditional senses as allegory became a popular subject for seventeenth-century artists, especially among Dutch and Flemish Baroque painters. 
A typical example is Gérard de Lairesse's Allegory of the Five Senses (1668), in which each of the figures in the main group alludes to a sense: Sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a triangle, smell is represented by the girl with flowers, taste is represented by the woman with the fruit, and touch is represented by the woman holding the bird. In Buddhist philosophy, Ayatana or "sense-base" includes the mind as a sense organ, in addition to the traditional five. This addition to the commonly acknowledged senses may arise from the psychological orientation involved in Buddhist thought and practice. The mind considered by itself is seen as the principal gateway to a different spectrum of phenomena that differ from the physical sense data. This way of viewing the human sense system indicates the importance of internal sources of sensation and perception that complements our experience of the external world.
Biology and health sciences
Biology
null
26288711
https://en.wikipedia.org/wiki/Coulomb%27s%20law
Coulomb's law
Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law of physics that calculates the amount of force between two electrically charged particles at rest. This electric force is conventionally called the electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb. Coulomb's law was essential to the development of the theory of electromagnetism and maybe even its starting point, as it allowed meaningful discussions of the amount of electric charge in a particle. The law states that the magnitude, or absolute value, of the attractive or repulsive electrostatic force between two point charges is directly proportional to the product of the magnitudes of their charges and inversely proportional to the square of the distance between them. Coulomb discovered that bodies with like electrical charges repel: Coulomb also showed that oppositely charged bodies attract according to an inverse-square law: Here, is a constant, and are the quantities of each charge, and the scalar r is the distance between the charges. The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them makes them repel; if they have different signs, the force between them makes them attract. Being an inverse-square law, the law is similar to Isaac Newton's inverse-square law of universal gravitation, but gravitational forces always make things attract, while electrostatic forces make charges attract or repel. Also, gravitational forces are much weaker than electrostatic forces. Coulomb's law can be used to derive Gauss's law, and vice versa. In the case of a single point charge at rest, the two laws are equivalent, expressing the same physical law in different ways. The law has been tested extensively, and observations have upheld the law on the scale from 10−16 m to 108 m. History Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers and pieces of paper. Thales of Miletus made the first recorded description of static electricity around 600 BC, when he noticed that friction could make a piece of amber attract small objects. In 1600, English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from [elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Early investigators of the 18th century who suspected that the electrical force diminished with distance as the force of gravity did (i.e., as the inverse square of the distance) included Daniel Bernoulli and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Franz Aepinus who supposed the inverse-square law in 1758. Based on experiments with electrically charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this. 
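For reference, the proportionality stated above can be written out explicitly. The following is the standard textbook form of the scalar law in LaTeX notation, using the conventional symbols for the two charges, their separation and the Coulomb constant (the symbols are the usual conventions, not notation reproduced from this article):

    \lvert F \rvert = k_e \, \frac{\lvert q_1 q_2 \rvert}{r^2},
    \qquad k_e = \frac{1}{4\pi\varepsilon_0} \approx 8.988 \times 10^{9}~\mathrm{N\,m^2\,C^{-2}}

Here q_1 and q_2 are the magnitudes of the two point charges, r is the distance between them, and \varepsilon_0 is the vacuum permittivity.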
In 1767, he conjectured that the force between charges varied as the inverse square of the distance. In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between two spheres with charges of the same sign varied as . In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. In his notes, Cavendish wrote, "We may therefore conclude that the electric attraction and repulsion must be inversely as some power of the distance between that of the and that of the , and there is no reason to think that it differs at all from the inverse duplicate ratio". Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law. Mathematical form Coulomb's law states that the electrostatic force experienced by a charge, at position , in the vicinity of another charge, at position , in a vacuum is equal to where is the displacement vector between the charges, a unit vector pointing from to and the electric constant. Here, is used for the vector notation. The electrostatic force experienced by , according to Newton's third law, is If both charges have the same sign (like charges) then the product is positive and the direction of the force on is given by ; the charges repel each other. If the charges have opposite signs then the product is negative and the direction of the force on is the charges attract each other. System of discrete charges The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed. Force on a small charge at position , due to a system of discrete charges in vacuum is where is the magnitude of the th charge, is the vector from its position to and is the unit vector in the direction of . Continuous charge distribution In this case, the principle of linear superposition is also used. 
For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge . The distribution of charge is usually linear, surface or volumetric. For a linear charge distribution (a good approximation for charge in a wire) where gives the charge per unit length at position , and is an infinitesimal element of length, For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor) where gives the charge per unit area at position , and is an infinitesimal element of area, For a volume charge distribution (such as charge within a bulk metal) where gives the charge per unit volume at position , and is an infinitesimal element of volume, The force on a small test charge at position in vacuum is given by the integral over the distribution of charge The "continuous charge" version of Coulomb's law is never supposed to be applied to locations for which because that location would directly overlap with the location of a charged particle (e.g. electron or proton) which is not a valid location to analyze the electric field or potential classically. Charge is always discrete in reality, and the "continuous charge" assumption is just an approximation that is not supposed to allow to be analyzed. Coulomb constant The constant of proportionality, , in Coulomb's law: is a consequence of historical choices for units. The constant is the vacuum electric permittivity. Using the CODATA 2022 recommended value for , the Coulomb constant is Limitations There are three conditions to be fulfilled for the validity of Coulomb's inverse square law: The charges must have a spherically symmetric distribution (e.g. be point charges, or a charged metal sphere). The charges must not overlap (e.g. they must be distinct point charges). The charges must be stationary with respect to a nonaccelerating frame of reference. The last of these is known as the electrostatic approximation. When movement takes place, an extra factor is introduced, which alters the force produced on the two objects. This extra part of the force is called the magnetic force. For slow movement, the magnetic force is minimal and Coulomb's law can still be considered approximately correct. A more accurate approximation in this case is, however, the Weber force. When the charges are moving more quickly in relation to each other or accelerations occur, Maxwell's equations and Einstein's theory of relativity must be taken into consideration. Electric field An electric field is a vector field that associates to each point in space the Coulomb force experienced by a unit test charge. The strength and direction of the Coulomb force on a charge depends on the electric field established by other charges that it finds itself in, such that . In the simplest case, the field is considered to be generated solely by a single source point charge. More generally, the field can be generated by a distribution of charges who contribute to the overall by the principle of superposition. If the field is generated by a positive source point charge , the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge would move if placed in the field. For a negative point source charge, the direction is radially inwards. The magnitude of the electric field can be derived from Coulomb's law. 
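The continuous-distribution integrals described above can be approximated numerically by chopping the distribution into many small elements and summing their point-charge contributions, which also illustrates how a field magnitude follows from Coulomb's law. The sketch below does this for a uniformly charged thin rod, evaluated at a point on its perpendicular bisector, and compares the sum with the standard closed-form result. All dimensions and the charge density are invented example values.

```python
import numpy as np

K_E = 8.9875517862e9      # Coulomb constant, N·m²/C² (approximate)
L = 1.0                   # rod length (m) - illustrative
LAMBDA = 5e-9             # linear charge density (C/m) - illustrative
Z = 0.25                  # distance of the field point from the rod's midpoint (m)

# Discretise the rod into N small elements dq = lambda * dx (the "point charge" elements).
N = 100_000
xs = np.linspace(-L / 2, L / 2, N)
dx = xs[1] - xs[0]
dq = LAMBDA * dx

# The field point sits on the perpendicular bisector; by symmetry only the
# component perpendicular to the rod survives the sum.
r2 = xs**2 + Z**2
e_perp = np.sum(K_E * dq * Z / r2**1.5)

# Closed-form result for a finite rod on its bisector, for comparison.
e_exact = K_E * LAMBDA * L / (Z * np.sqrt(Z**2 + (L / 2) ** 2))
print(e_perp, e_exact)    # the two values should agree closely
```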
By choosing one of the point charges to be the source, and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field created by a single source point charge Q at a certain distance from it r in vacuum is given by A system of n discrete charges stationed at produces an electric field whose magnitude and direction is, by superposition Atomic forces Coulomb's law holds even within atoms, correctly describing the force between the positively charged atomic nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the force of attraction, and binding energy, approach zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, energy increases and ionic bonding is more favorable. Relation to Gauss's law Deriving Gauss's law from Coulomb's law Deriving Coulomb's law from Gauss's law Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion). In relativity Coulomb's law can be used to gain insight into the form of the magnetic field generated by moving charges since by special relativity, in certain cases the magnetic field can be shown to be a transformation of forces caused by the electric field. When no acceleration is involved in a particle's history, Coulomb's law can be assumed on any test particle in its own inertial frame, supported by symmetry arguments in solving Maxwell's equation, shown above. Coulomb's law can be expanded to moving test particles to be of the same form. This assumption is supported by Lorentz force law which, unlike Coulomb's law is not limited to stationary test charges. Considering the charge to be invariant of observer, the electric and magnetic fields of a uniformly moving point charge can hence be derived by the Lorentz transformation of the four force on the test charge in the charge's frame of reference given by Coulomb's law and attributing magnetic and electric fields by their definitions given by the form of Lorentz force. The fields hence found for uniformly moving point charges are given by:where is the charge of the point source, is the position vector from the point source to the point in space, is the velocity vector of the charged particle, is the ratio of speed of the charged particle divided by the speed of light and is the angle between and . This form of solutions need not obey Newton's third law as is the case in the framework of special relativity (yet without violating relativistic-energy momentum conservation). Note that the expression for electric field reduces to Coulomb's law for non-relativistic speeds of the point charge and that the magnetic field in non-relativistic limit (approximating ) can be applied to electric currents to get the Biot–Savart law. 
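The fields of a uniformly moving point charge discussed above reduce to the ordinary Coulomb field at low speed. The short sketch below evaluates the electric field magnitude for a few speeds and directions; it is an illustrative Python example (the charge, distance and speeds are arbitrary) assuming the standard form E = (q/4πε₀r²)·(1 − β²)/(1 − β² sin²θ)^(3/2) described in the text.

```python
import math

K_E = 8.9875517862e9   # Coulomb constant, N·m²/C² (approximate)

def e_static(q, r):
    """Coulomb field magnitude of a charge at rest."""
    return K_E * q / r**2

def e_moving(q, r, beta, theta):
    """Field magnitude of a uniformly moving charge.

    beta  : v/c
    theta : angle between the velocity and the vector from the charge's
            present position to the field point
    Reduces to the static Coulomb field as beta -> 0.
    """
    return (K_E * q / r**2) * (1 - beta**2) / (1 - beta**2 * math.sin(theta) ** 2) ** 1.5

q, r = 1.6e-19, 1e-10          # illustrative: an elementary charge at 1 ångström
for beta in (0.0, 0.5, 0.9):
    ahead = e_moving(q, r, beta, 0.0)            # along the direction of motion (weakened)
    side = e_moving(q, r, beta, math.pi / 2)     # perpendicular to the motion (enhanced)
    print(f"beta={beta}: ahead={ahead:.3e}  sideways={side:.3e}  static={e_static(q, r):.3e}")
```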
These solutions, when expressed in retarded time also correspond to the general solution of Maxwell's equations given by solutions of Liénard–Wiechert potential, due to the validity of Coulomb's law within its specific range of application. Also note that the spherical symmetry for gauss law on stationary charges is not valid for moving charges owing to the breaking of symmetry by the specification of direction of velocity in the problem. Agreement with Maxwell's equations can also be manually verified for the above two equations. Coulomb potential Quantum field theory The Coulomb potential admits continuum states (with E > 0), describing electron-proton scattering, as well as discrete bound states, representing the hydrogen atom. It can also be derived within the non-relativistic limit between two charged particles, as follows: Under Born approximation, in non-relativistic quantum mechanics, the scattering amplitude is: This is to be compared to the: where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential. Using the Feynman rules to compute the S-matrix element, we obtain in the non-relativistic limit with Comparing with the QM scattering, we have to discard the as they arise due to differing normalizations of momentum eigenstate in QFT compared to QM and obtain: where Fourier transforming both sides, solving the integral and taking at the end will yield as the Coulomb potential. However, the equivalent results of the classical Born derivations for the Coulomb problem are thought to be strictly accidental. The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential, which is the case where the exchanged boson – the photon – has no rest mass. Verification It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass and same-sign charge , hanging from two ropes of negligible mass of length . The forces acting on each sphere are three: the weight , the rope tension and the electric force . In the equilibrium state: and Dividing () by (): Let be the distance between the charged spheres; the repulsion force between them , assuming Coulomb's law is correct, is equal to so: If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge . In the equilibrium state, the distance between the charges will be and the repulsion force between them will be: We know that and: Dividing () by (), we get: Measuring the angles and and the distance between the charges and is sufficient to verify that the equality is true taking into account the experimental error. In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the following approximation: Using this approximation, the relationship () becomes the much simpler expression: In this way, the verification is limited to measuring the distance between the charges and checking that the division approximates the theoretical value.
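Working through the equilibrium conditions sketched in the verification above — both spheres initially carrying charge q, each carrying q/2 after contact, and using the small-angle approximation — the two measured separations should satisfy ℓ₁³ ≈ 4 ℓ₂³. That specific relation is reconstructed here rather than quoted from the text as extracted. The Python sketch below checks hypothetical "measured" separations against it.

```python
import math

def predicted_ratio():
    """Theoretical value of (l1/l2)**3 when each charge is halved by contact."""
    # F1 ∝ q·q / l1²,  F2 ∝ (q/2)·(q/2) / l2²,  and for small angles tanθ ∝ l,
    # so  l1/l2 = F1/F2 = 4 · l2²/l1²   →   (l1/l2)³ = 4.
    return 4.0

def check(l1: float, l2: float, tolerance: float = 0.05) -> bool:
    """Compare measured separations with the inverse-square prediction."""
    measured = (l1 / l2) ** 3
    return math.isclose(measured, predicted_ratio(), rel_tol=tolerance)

# Hypothetical measurements (metres): l2 should be close to l1 / 4**(1/3).
l1 = 0.060
l2 = 0.038
print((l1 / l2) ** 3, check(l1, l2))   # ~3.94, True
```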
Physical sciences
Electrostatics
null
26293276
https://en.wikipedia.org/wiki/Reperfusion%20therapy
Reperfusion therapy
Reperfusion therapy is a medical treatment to restore blood flow, either through or around, blocked arteries, typically after a heart attack (myocardial infarction (MI)). Reperfusion therapy includes drugs and surgery. The drugs are thrombolytics and fibrinolytics used in a process called thrombolysis. Surgeries performed may be minimally-invasive endovascular procedures such as a percutaneous coronary intervention (PCI), which involves coronary angioplasty. The angioplasty uses the insertion of a balloon and/or stents to open up the artery. Other surgeries performed are the more invasive bypass surgeries that graft arteries around blockages. If an MI is presented with ECG evidence of an ST elevation known as STEMI, or if a bundle branch block is similarly presented, then reperfusion therapy is necessary. In the absence of an ST elevation, a non-ST elevation MI, known as an NSTEMI, or an unstable angina may be presumed (both of these are indistinguishable on initial evaluation of symptoms). ST elevations indicate a completely blocked artery needing immediate reperfusion. In NSTEMI the blood flow is present but limited by stenosis. In NSTEMI, thrombolytics must be avoided as there is no clear benefit of their use. If the condition stays stable a cardiac stress test may be offered, and if needed subsequent revascularization will be carried out to restore a normal blood flow. If the blood flow becomes unstable an urgent angioplasty may be required. In these unstable cases the use of thrombolytics is contraindicated. At least 10% of treated cases of STEMI do not develop necrosis of the heart muscle. A successful restoration of blood flow is known as aborting the heart attack. About 25% of STEMIs can be aborted if treated within the hour of symptoms onset. Thrombolytic therapy Myocardial infarction Thrombolytic therapy is indicated for the treatment of STEMI – if it can begin within 12 hours of the onset of symptoms, and the person is eligible based on exclusion criteria, and a coronary angioplasty is not immediately available. Thrombolysis is most effective in the first 2 hours. After 12 hours, the risk of intracranial bleeding associated with thrombolytic therapy outweighs any benefit. Because irreversible injury occurs within 2–4 hours of the infarction, there is a limited window of time available for reperfusion to work. Thrombolytic drugs are contraindicated for the treatment of unstable angina and NSTEMI and for the treatment of individuals with evidence of cardiogenic shock. Although no perfect thrombolytic agent exists, ideally it would lead to rapid reperfusion, have a high sustained patency rate, be specific for recent thrombi, be easily and rapidly administered, create a low risk for intracerebral bleeding and systemic bleeding, have no antigenicity, adverse hemodynamic effects, or clinically significant drug interactions, and be cost effective. Currently available thrombolytic agents include streptokinase, urokinase, and alteplase (recombinant tissue plasminogen activator, rtPA). More recently, thrombolytic agents similar in structure to rtPA such as reteplase and tenecteplase have been used. These newer agents boast efficacy at least as well as rtPA with significantly easier administration. The thrombolytic agent used in a particular individual is based on institution preference and the age of the patient. Depending on the thrombolytic agent being used, additional anticoagulation with heparin or low molecular weight heparin may be of benefit. 
With tPa and related agents (reteplase and tenecteplase), heparin is needed to keep the coronary artery open. Because of the anticoagulant effect of fibrinogen depletion with streptokinase and urokinase treatment, it is less necessary there. Failure Thrombolytic therapy to abort a myocardial infarction is not always effective. The degree of effectiveness of a thrombolytic agent is dependent on the time since the myocardial infarction began, with the best results occurring if the thrombolytic is used within two hours of the onset of symptoms. Failure rates of thrombolytics can be as high as 50%. In cases of failure of the thrombolytic agent to open the infarct-related coronary artery, the person is then either treated conservatively with anticoagulants and allowed to "complete the infarction" or percutaneous coronary intervention (and coronary angioplasty) is then performed. Percutaneous coronary intervention in this setting is known as "rescue PCI" or "salvage PCI". Complications, particularly bleeding, are significantly higher with rescue PCI than with primary PCI due to the action of the thrombolytic. Side effects Intracranial bleeding (ICB) and subsequent stroke is a serious side effect of thrombolytic use. The risk factors for developing intracranial bleeding include a previous episode of intracranial bleed, advanced age of the individual, and the thrombolytic regimen that is being used. In general, the risk of ICB due to thrombolytics is between 0.5 and 1 percent. Coronary angioplasty The benefit of prompt, primary angioplasty over thrombolytic therapy for acute STEMI is now well established. When performed rapidly, an angioplasty restores flow in the blocked artery in more than 95% of patients compared with the reperfusion rate of about 65% achieved by thrombolysis. Logistic and economic obstacles seem to hinder a more widespread application of angioplasty, although the feasibility of providing regionalized angioplasty for STEMI is currently being explored in the United States. The use of a coronary angioplasty to abort a myocardial infarction is preceded by a primary percutaneous coronary intervention. The goal of a prompt angioplasty is to open the artery as soon as possible, and preferably within 90 minutes of the patient presenting to the emergency room. This time is referred to as the door-to-balloon time. Few hospitals can provide an angioplasty within the 90 minute interval, which prompted the American College of Cardiology (ACC) to launch a national Door to Balloon (D2B) Initiative in November 2006. Over 800 hospitals have joined the D2B Alliance as of March 16, 2007. One particularly successful implementation of a primary PCI protocol is in the Calgary Health Region under the auspices of the Libin Cardiovascular Institute of Alberta. Under this model, EMS teams responding to an emergency can transmit the ECG directly to a digital archiving system that allows emergency room staff to immediately confirm the diagnosis. This in turn allows for redirection of the EMS teams to those facilities that are ready to conduct time-critical angioplasty. This protocol has resulted in a median time to treatment of 62 minutes. The current guidelines in the United States restrict angioplasties to hospitals with available emergency bypass surgery as a backup, but this is not the case in other parts of the world. 
A PCI involves performing a coronary angiogram to determine the location of the infarcting vessel, followed by balloon angioplasty (and frequently deployment of an intracoronary stent) of the stenosed arterial segment. In some settings, an extraction catheter may be used to attempt to aspirate (remove) the thrombus prior to balloon angioplasty. While the use of intracoronary stents do not improve the short term outcomes in primary PCI, the use of stents is widespread because of the decreased rates of procedures to treat restenosis compared to balloon angioplasty. Adjuvant therapy during an angioplasty includes intravenous heparin, aspirin, and clopidogrel. Glycoprotein IIb/IIIa inhibitors are often used in the setting of primary angioplasty to reduce the risk of ischemic complications during the procedure. Due to the number of antiplatelet agents and anticoagulants used during primary angioplasty, the risk of bleeding associated with the procedure is higher than during an elective procedure. Coronary artery bypass surgery Emergency bypass surgery for the treatment of an acute myocardial infarction (MI) is less common than PCI or thrombolysis. From 1995 to 2004, the percentage of people with cardiogenic shock treated with primary PCI rose from 27.4% to 54.4%, while the increase in coronary artery bypass graft surgery (CABG) was only from 2.1% to 3.2%. Emergency CABG is usually undertaken to simultaneously treat a mechanical complication, such as a ruptured papillary muscle, or a ventricular septal defect, with ensuing cardiogenic shock. In uncomplicated MI, the mortality rate can be high when the surgery is performed immediately following the infarction. If this option is entertained, the patient should be stabilized prior to surgery, with supportive interventions such as the use of an intra-aortic balloon pump. In patients developing cardiogenic shock after a myocardial infarction, both PCI and CABG are satisfactory treatment options, with similar survival rates. Coronary artery bypass surgery involves an artery or vein from the patient being implanted to bypass narrowings or occlusions in the coronary arteries. Several arteries and veins can be used, however internal mammary artery grafts have demonstrated significantly better long-term patency rates than great saphenous vein grafts. In patients with two or more coronary arteries affected, bypass surgery is associated with higher long-term survival rates compared to percutaneous interventions. In patients with single vessel disease, surgery is comparably safe and effective, and may be a treatment option in selected cases. Bypass surgery has higher costs initially, but becomes cost-effective in the long term. A surgical bypass graft is more invasive initially but bears less risk of recurrent procedures (but these may be again minimally invasive). Reperfusion arrhythmia Accelerated idioventricular rhythm which looks like slow ventricular tachycardia is a sign of a successful reperfusion. No treatment of this rhythm is needed as it rarely changes into a more serious rhythm.
Biology and health sciences
Treatments
Health
23423919
https://en.wikipedia.org/wiki/Marine%20chemistry
Marine chemistry
Marine chemistry, also known as ocean chemistry or chemical oceanography, is the study of the chemical composition and processes of the world’s oceans, including the interactions between seawater, the atmosphere, the seafloor, and marine organisms. This field encompasses a wide range of topics, such as the cycling of elements like carbon, nitrogen, and phosphorus, the behavior of trace metals, and the study of gases and nutrients in marine environments. Marine chemistry plays a crucial role in understanding global biogeochemical cycles, ocean circulation, and the effects of human activities, such as pollution and climate change, on oceanic systems. It is influenced by plate tectonics and seafloor spreading, turbidity, currents, sediments, pH levels, atmospheric constituents, metamorphic activity, and ecology. The impact of human activity on the chemistry of the Earth's oceans has increased over time, with pollution from industry and various land-use practices significantly affecting the oceans. Moreover, increasing levels of carbon dioxide in the Earth's atmosphere have led to ocean acidification, which has negative effects on marine ecosystems. The international community has agreed that restoring the chemistry of the oceans is a priority, and efforts toward this goal are tracked as part of Sustainable Development Goal 14. Due to the interrelatedness of the ocean, chemical oceanographers frequently work on problems relevant to physical oceanography, geology and geochemistry, biology and biochemistry, and atmospheric science. Many of them are investigating biogeochemical cycles, and the marine carbon cycle in particular attracts significant interest due to its role in carbon sequestration and ocean acidification. Other major topics of interest include analytical chemistry of the oceans, marine pollution, and anthropogenic climate change. Organic compounds in the oceans Dissolved Organic Matter (DOM) DOM is a critical component of the ocean's carbon pool and includes many molecules such as amino acids, sugars, and lipids. It represents about 90% of the total organic carbon in marine environments. Colored dissolved organic matter (CDOM) is estimated to range from 20-70% of the carbon content of the oceans, being higher near river outlets and lower in the open ocean. DOM can be recycled and put back into the food web through a process called microbial loop which is essential for nutrient cycling and supporting primary productivity. It also plays a vital role in the global regulation of oceanic carbon storage, as some forms resist microbial degradation and may exist within the ocean for centuries. Marine life is similar mainly in biochemistry to terrestrial organisms, and is the most prolific source of halogenated organic compounds. Particulate Organic Matter (POM) POM includes of large organic particles, such as organisms, fecal pellets, and detritus, which settle through the water column. It is a major component of the biological pump, a process by which carbon is transferred from the surface ocean to the deep sea. As POM sinks, it decomposes by bacterial activity , releasing nutrients and carbon dioxide. The refractory POM fraction can settle on the ocean floor and make relevant contributions to carbon sequestration over a very long period of time Chemical ecology of extremophiles The ocean is home to a variety of marine organisms known as extremophiles – organisms that thrive in extreme conditions of temperature, pressure, and light availability. 
Extremophiles inhabit many unique habitats in the ocean, such as hydrothermal vents, black smokers, cold seeps, hypersaline regions, and sea ice brine pockets. Some scientists have speculated that life may have evolved from hydrothermal vents in the ocean.In hydrothermal vents and similar environments, many extremophiles acquire energy through chemoautotrophy, using chemical compounds as energy sources, rather than light as in photoautotrophy. Hydrothermal vents enrich the nearby environment in chemicals such as elemental sulfur, H2, H2S, Fe2+, and methane. Chemoautotrophic organisms, primarily prokaryotes, derive energy from these chemicals through redox reactions. These organisms then serve as food sources for higher trophic levels, forming the basis of unique ecosystems. Several different metabolisms are present in hydrothermal vent ecosystems. Many marine microorganisms, including Thiomicrospira, Halothiobacillus, and Beggiatoa, are capable of oxidizing sulfur compounds, including elemental sulfur and the often toxic compound H2S. H2S is abundant in hydrothermal vents, formed through interactions between seawater and rock at the high temperatures found within vents. This compound is a major energy source, forming the basis of the sulfur cycle in hydrothermal vent ecosystems. In the colder waters surrounding vents, sulfur-oxidation can occur using oxygen as an electron acceptor; closer to the vents, organisms must use alternate metabolic pathways or utilize another electron acceptor, such as nitrate. Some species of Thiomicrospira can utilize thiosulfate as an electron donor, producing elemental sulfur. Additionally, many marine microorganisms are capable of iron-oxidation, such as Mariprofundus ferrooxydans. Iron-oxidation can be oxic, occurring in oxygen-rich parts of the ocean, or anoxic, requiring either an electron acceptor such as nitrate or light energy. In iron-oxidation, Fe(II) is used as an electron donor; conversely, iron-reducers utilize Fe(III) as an electron acceptor. These two metabolisms form the basis of the iron-redox cycle and may have contributed to banded iron formations. At another extreme, some marine extremophiles inhabit sea ice brine pockets where temperature is very low and salinity is very high. Organisms trapped within freezing sea ice must adapt to a rapid change in salinity up to 3 times higher than that of regular seawater, as well as the rapid change to regular seawater salinity when ice melts. Most brine-pocket dwelling organisms are photosynthetic, therefore, these microenvironments can become hyperoxic, which can be toxic to its inhabitants. Thus, these extremophiles often produce high levels of antioxidants. Plate tectonics Seafloor spreading on mid-ocean ridges is a global scale ion-exchange system. Hydrothermal vents at spreading centers introduce various amounts of iron, sulfur, manganese, silicon and other elements into the ocean, some of which are recycled into the ocean crust. Helium-3, an isotope that accompanies volcanism from the mantle, is emitted by hydrothermal vents and can be detected in plumes within the ocean. Spreading rates on mid-ocean ridges vary between 10 and 200 mm/yr. Rapid spreading rates cause increased basalt reactions with seawater. The magnesium/calcium ratio will be lower because more magnesium ions are being removed from seawater and consumed by the rock, and more calcium ions are being removed from the rock and released to seawater. Hydrothermal activity at ridge crest is efficient in removing magnesium. 
A lower Mg/Ca ratio favors the precipitation of low-Mg calcite polymorphs of calcium carbonate (calcite seas). Slow spreading at mid-ocean ridges has the opposite effect and will result in a higher Mg/Ca ratio favoring the precipitation of aragonite and high-Mg calcite polymorphs of calcium carbonate (aragonite seas). Experiments show that most modern high-Mg calcite organisms would have been low-Mg calcite in past calcite seas, meaning that the Mg/Ca ratio in an organism's skeleton varies with the Mg/Ca ratio of the seawater in which it was grown. The mineralogy of reef-building and sediment-producing organisms is thus regulated by chemical reactions occurring along the mid-ocean ridge, the rate of which is controlled by the rate of sea-floor spreading. Human impacts Marine pollution Climate change Increased carbon dioxide levels, mostly from burning fossil fuels, are changing ocean chemistry. Global warming and changes in salinity have significant implications for the ecology of marine environments. Acidification Deoxygenation History Early inquiries about marine chemistry usually concerned the origin of salinity in the ocean, including work by Robert Boyle. Modern chemical oceanography began as a field with the 1872–1876 Challenger expedition, led by the British Royal Navy, which made the first systematic measurements of ocean chemistry. The chemical analysis of the expedition's samples, which provided the first systematic study of the composition of seawater, was conducted by John Murray and George Forchhammer, leading to a better understanding of elements like chloride, sodium, and sulfate in ocean waters. The early 20th century saw significant advancements in marine chemistry, particularly with more accurate analytical techniques. Scientists like Martin Knudsen created the Knudsen Bottle, an instrument used to collect water samples from different ocean depths. Over the three decades of the 1970s, 1980s, and 1990s, a comprehensive evaluation of advancements in chemical oceanography was compiled through a National Science Foundation initiative known as Futures of Ocean Chemistry in the United States (FOCUS). This project brought together numerous prominent chemical oceanographers, marine chemists, and geochemists to contribute to the FOCUS report. After World War II, advancements in geochemical techniques propelled marine chemistry into a new era. Researchers began using isotopic analysis to study ocean circulation and the carbon cycle. Roger Revelle and Hans Suess pioneered using radiocarbon dating to investigate oceanic carbon reservoirs and their exchange with the atmosphere. Since the 1970s, the development of highly sophisticated instruments and computational models has revolutionized marine chemistry. Scientists can now measure trace metals, organic compounds, and isotopic ratios with unprecedented precision. Studies of marine biogeochemical cycles, including the carbon, nitrogen, and sulfur cycles, have become an area of interest to understand global climate change. The use of remote sensing technology and global ocean observation programs, such as the International Geosphere-Biosphere Programme (IGBP), has provided large-scale data on ocean chemistry, allowing scientists to monitor ocean acidification, deoxygenation, and other critical issues affecting the marine environment. 
Tools used for analysis Chemical oceanographers collect and measure chemicals in seawater, using the standard toolset of analytical chemistry as well as instruments like pH meters, electrical conductivity meters, fluorometers, and dissolved CO₂ meters. Most data are collected through shipboard measurements and from autonomous floats or buoys, but remote sensing is used as well. On an oceanographic research vessel, a CTD is used to measure electrical conductivity, temperature, and pressure, and is often mounted on a rosette of Nansen bottles to collect seawater for analysis. Sediments are commonly studied with a box corer or a sediment trap, and older sediments may be recovered by scientific drilling. Advanced analytical equipment such as mass spectrometers and chromatographs are applied to detect trace elements, isotopes, and organic compounds. This allows for precisely measuring nutrients, gases, and pollutants in marine environments. In recent years, autonomous underwater vehicles (AUVs) and remote sensing technology have enabled continuous, large-scale ocean chemistry monitoring, particularly for tracking changes in ocean acidification and nutrient cycles. Marine chemistry on other planets and their moons The chemistry of the subsurface ocean of Europa may be Earthlike. The subsurface ocean of Enceladus vents hydrogen and carbon dioxide to space.
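As a small, generic illustration of how profile data from the shipboard CTD casts described above are often prepared for analysis, the sketch below averages a fabricated temperature profile into fixed depth bins. It is not code from any particular oceanographic package, and all the numbers are invented.

```python
import numpy as np

# Fabricated CTD cast: depth (m) and in-situ temperature (°C).
depth = np.array([2, 10, 25, 50, 75, 100, 150, 200, 300, 500], dtype=float)
temp = np.array([18.2, 18.1, 17.5, 15.0, 12.4, 10.8, 8.9, 7.5, 5.9, 4.2])

def bin_profile(depth, values, bin_edges):
    """Average a profile into depth bins (a common first step before comparing casts)."""
    means = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (depth >= lo) & (depth < hi)
        means.append(values[mask].mean() if mask.any() else np.nan)
    return np.array(means)

edges = np.array([0, 50, 100, 200, 600], dtype=float)
print(bin_profile(depth, temp, edges))   # mean temperature in each depth interval
```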
Physical sciences
Oceanography
Earth science
43282386
https://en.wikipedia.org/wiki/Corsican%20donkey
Corsican donkey
The Corsican Donkey is a breed of domestic donkey from the Mediterranean island of Corsica, a région and territorial collectivity of France. It is not recognised by the Ministère de l'agriculture, de l'agroalimentaire et de la forêt, the French ministry of agriculture, or by the Haras Nationaux, the French national stud; nor is it reported to the DAD-IS database of the FAO. Its numbers have fallen alarmingly; two associations are seeking its official recognition as a breed. History The indigenous Corsican donkey is small and usually grey, and it is thought to have been present on the island since Roman times. In modern times attempts have been made to increase its size by cross-breeding with imported stock including the Catalan donkey from Spain, donkeys from the French mainland, and the Martina Franca donkey from Puglia in Italy. As a result, a larger black type of donkey has developed. Before the mechanisation of transport and agriculture in the 1930s there were more than 20,000 donkeys in Corsica. Until the 1960s large numbers were sold at miserable prices to the meat markets of Italy and mainland France; there is no tradition of eating donkey meat in Corsica, and the recent appearance of donkey salami in shops there is a consequence of tourist demand. The current population of the Corsican Donkey is estimated at about 1000; its conservation status was listed as "critical" by the SAVE Foundation in 2008. Two associations, A Runcata ("the bray") and Isul'âne, have been formed for its protection, and the first steps towards seeking official recognition for the breed were taken in 2010.
Biology and health sciences
Donkeys
Animals
31961126
https://en.wikipedia.org/wiki/Ocean%20temperature
Ocean temperature
The ocean temperature plays a crucial role in the global climate system, ocean currents and for marine habitats. It varies depending on depth, geographical location and season. Not only does the temperature differ in seawater, so does the salinity. Warm surface water is generally saltier than the cooler deep or polar waters. In polar regions, the upper layers of ocean water are cold and fresh. Deep ocean water is cold, salty water found deep below the surface of Earth's oceans. This water has a uniform temperature of around 0-3°C. The ocean temperature also depends on the amount of solar radiation falling on its surface. In the tropics, with the Sun nearly overhead, the temperature of the surface layers can rise to over . Near the poles the temperature in equilibrium with the sea ice is about . There is a continuous large-scale circulation of water in the oceans. One part of it is the thermohaline circulation (THC). It is driven by global density gradients created by surface heat and freshwater fluxes. Warm surface currents cool as they move away from the tropics. This happens as the water becomes denser and sinks. Changes in temperature and density move the cold water back towards the equator as a deep sea current. Then it eventually wells up again towards the surface. Ocean temperature as a term applies to the temperature in the ocean at any depth. It can also apply specifically to the ocean temperatures that are not near the surface. In this case it is synonymous with deep ocean temperature). It is clear that the oceans are warming as a result of climate change and this rate of warming is increasing. The upper ocean (above 700 m) is warming fastest, but the warming trend extends throughout the ocean. In 2022, the global ocean was the hottest ever recorded by humans. Definition and types Sea surface temperature Deep ocean temperature Experts refer to the temperature further below the surface as ocean temperature or deep ocean temperature. Ocean temperatures more than 20 metres below the surface vary by region and time. They contribute to variations in ocean heat content and ocean stratification. The increase of both ocean surface temperature and deeper ocean temperature is an important effect of climate change on oceans. Deep ocean water is the name for cold, salty water found deep below the surface of Earth's oceans. Deep ocean water makes up about 90% of the volume of the oceans. Deep ocean water has a very uniform temperature of around 0-3°C. Its salinity is about 3.5% or 35 ppt (parts per thousand). Relevance Ocean temperature and dissolved oxygen concentrations have a big influence on many aspects of the ocean. These two key parameters affect the ocean's primary productivity, the oceanic carbon cycle, nutrient cycles, and marine ecosystems. They work in conjunction with salinity and density to control a range of processes. These include mixing versus stratification, ocean currents and the thermohaline circulation. Ocean heat content Experts calculate ocean heat content by using ocean temperatures at different depths. Measurements There are various ways to measure ocean temperature. Below the sea surface, it is important to refer to the specific depth of measurement as well as measuring the general temperature. The reason is there is a lot of variation with depths. This is especially the case during the day. At this time low wind speed and a lot of sunshine may lead to the formation of a warm layer at the ocean surface and big changes in temperature as you get deeper. 
Experts call these strong daytime vertical temperature gradients a diurnal thermocline. The basic technique involves lowering a device to measure temperature and other parameters electronically. This device is called CTD which stands for conductivity, temperature, and depth. It continuously sends the data up to the ship via a conducting cable. This device is usually mounted on a frame that includes water sampling bottles. Since the 2010s autonomous vehicles such as gliders or mini-submersibles have been increasingly available. They carry the same CTD sensors, but operate independently of a research ship. Scientists can deploy CTD systems from research ships on moorings gliders and even on seals. With research ships they receive data through the conducting cable. For the other methods they use telemetry. There are other ways of measuring sea surface temperature. At this near-surface layer measurements are possible using thermometers or satellites with spectroscopy. Weather satellites have been available to determine this parameter since 1967. Scientists created the first global composites during 1970. The Advanced Very High Resolution Radiometer (AVHRR) is widely used to measure sea surface temperature from space. There are various devices to measure ocean temperatures at different depths. These include the Nansen bottle, bathythermograph, CTD, or ocean acoustic tomography. Moored and drifting buoys also measure sea surface temperatures. Examples are those deployed by the Global Drifter Program and the National Data Buoy Center. The World Ocean Database Project is the largest database for temperature profiles from all of the world’s oceans. A small test fleet of deep Argo floats aims to extend the measurement capability down to about 6000 meters. It will accurately sample temperature for a majority of the ocean volume once it is in full use. The most frequent measurement technique on ships and buoys is thermistors and mercury thermometers. Scientists often use mercury thermometers to measure the temperature of surface waters. They can put them in buckets dropped over the side of a ship. To measure deeper temperatures they put them on Nansen bottles. Monitoring through Argo program Ocean warming Trends Causes The cause of recent observed changes is the warming of the Earth due to human-caused emissions of greenhouse gases such as carbon dioxide and methane. Growing concentrations of greenhouse gases increases Earth's energy imbalance, further warming surface temperatures. The ocean takes up most of the added heat in the climate system, raising ocean temperatures. Main physical effects Increased stratification and lower oxygen levels Higher air temperatures warm the ocean surface. And this leads to greater ocean stratification. Reduced mixing of the ocean layers stabilises warm water near the surface. At the same time it reduces cold, deep water circulation. The reduced up and down mixing reduces the ability of the ocean to absorb heat. This directs a larger fraction of future warming toward the atmosphere and land. Energy available for tropical cyclones and other storms is likely to increase. Nutrients for fish in the upper ocean layers are set to decrease. This is also like to reduce the capacity of the oceans to store carbon. Warmer water cannot contain as much oxygen as cold water. Increased thermal stratification may reduce the supply of oxygen from the surface waters to deeper waters. This would further decrease the water's oxygen content. This process is called ocean deoxygenation. 
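The depth-resolved temperature profiles described above (from CTD casts, Argo floats and similar instruments) are what ocean heat content estimates are built on, before the discussion of warming impacts continues. The sketch below shows the basic idea — integrating temperature over depth and scaling by seawater density and specific heat capacity — in illustrative Python; the constants are rounded typical values and the profile is fabricated.

```python
import numpy as np

RHO = 1025.0     # seawater density, kg/m³ (typical rounded value)
CP = 3990.0      # specific heat capacity of seawater, J/(kg·K) (typical rounded value)

def heat_content_per_area(depth_m, temp_c, t_ref=0.0):
    """Heat content per unit sea-surface area (J/m²) relative to a reference temperature.

    Approximates  rho * cp * ∫ (T - T_ref) dz  with the trapezoidal rule.
    """
    return RHO * CP * np.trapz(np.asarray(temp_c) - t_ref, np.asarray(depth_m))

# Fabricated upper-ocean profile: a warm surface layer over cooler water.
depth = [0, 10, 50, 100, 300, 700]           # metres
temp = [20.0, 19.8, 16.0, 12.0, 8.0, 5.0]    # °C
print(f"{heat_content_per_area(depth, temp):.3e} J per m² of sea surface")
```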
The ocean has already lost oxygen throughout the water column. Oxygen minimum zones are expanding worldwide. Changing ocean currents Varying temperatures associated with sunlight and air temperatures at different latitudes cause ocean currents. Prevailing winds and the different densities of saline and fresh water are another cause of currents. Air tends to be warmed and thus rise near the equator, then cool and thus sink slightly further poleward. Near the poles, cool air sinks, but is warmed and rises as it then travels along the surface equatorward. The sinking and upwelling that occur in lower latitudes, and the driving force of the winds on surface water, mean the ocean currents circulate water throughout the entire sea. Global warming on top of these processes causes changes to currents, especially in the regions where deep water is formed. In the geologic past Scientists believe the sea temperature was much hotter in the Precambrian period. Such temperature reconstructions derive from oxygen and silicon isotopes from rock samples. These reconstructions suggest the ocean had a temperature of 55–85 °C. It then cooled to milder temperatures of between 10 and 40 °C by . Reconstructed proteins from Precambrian organisms also provide evidence that the ancient world was much warmer than today. The Cambrian Explosion approximately 538.8 million years ago was a key event in the evolution of life on Earth. This event took place at a time when scientists believe sea surface temperatures reached about 60 °C. Such high temperatures are above the upper thermal limit of 38 °C for modern marine invertebrates, and would seem to preclude such a major biological revolution. During the later Cretaceous period, from , average global temperatures reached their highest level in the last 200 million years or so. This was probably the result of the configuration of the continents during this period, which allowed for improved circulation in the oceans and discouraged the formation of large-scale ice sheets. Data from an oxygen isotope database indicate that there have been seven global warming events during the geologic past. These include the Late Cambrian, Early Triassic, Late Cretaceous, and Paleocene–Eocene transition. The surface of the sea was about 5–30° warmer than today during these warming periods.
Physical sciences
Oceanography
Earth science
21942008
https://en.wikipedia.org/wiki/Cell%20polarity
Cell polarity
Cell polarity refers to spatial differences in shape, structure, and function within a cell. Almost all cell types exhibit some form of polarity, which enables them to carry out specialized functions. Classical examples of polarized cells are described below, including epithelial cells with apical-basal polarity, neurons in which signals propagate in one direction from dendrites to axons, and migrating cells. Furthermore, cell polarity is important during many types of asymmetric cell division to set up functional asymmetries between daughter cells. Many of the key molecular players implicated in cell polarity are well conserved. For example, in metazoan cells, the complex plays a fundamental role in cell polarity. While the biochemical details may vary, some of the core principles such as negative and/or positive feedback between different molecules are common and essential to many known polarity systems. Examples of polarized cells Epithelial cells Epithelial cells adhere to one another through tight junctions, desmosomes and adherens junctions, forming sheets of cells that line the surface of the animal body and internal cavities (e.g., digestive tract and circulatory system). These cells have an apical-basal polarity defined by the apical membrane facing the outside surface of the body, or the lumen of internal cavities, and the basolateral membrane oriented away from the lumen. The basolateral membrane refers to both the lateral membrane where cell-cell junctions connect neighboring cells and to the basal membrane where cells are attached to the basement membrane, a thin sheet of extracellular matrix proteins that separates the epithelial sheet from underlying cells and connective tissue. Epithelial cells also exhibit planar cell polarity, in which specialized structures are orientated within the plane of the epithelial sheet. Some examples of planar cell polarity include the scales of fish being oriented in the same direction and similarly the feathers of birds, the fur of mammals, and the cuticular projections (sensory hairs, etc.) on the bodies and appendages of flies and other insects. Computational models have been suggested to simulate how a group of epithelial cells can form a variety of biological morphologies. Neurons A neuron receives signals from neighboring cells through branched, cellular extensions called dendrites. The neuron then propagates an electrical signal down a specialized axon extension from the basal pole to the synapse, where neurotransmitters are released to propagate the signal to another neuron or effector cell (e.g., muscle or gland). The polarity of the neuron thus facilitates the directional flow of information, which is required for communication between neurons and effector cells. Migratory cells Many cell types are capable of migration, such as leukocytes and fibroblasts, and in order for these cells to move in one direction, they must have a defined front and rear. At the front of the cell is the leading edge, which is often defined by a flat ruffling of the cell membrane called the lamellipodium or thin protrusions called filopodia. Here, actin polymerization in the direction of migration allows cells to extend the leading edge of the cell and to attach to the surface. At the rear of the cell, adhesions are disassembled and bundles of actin microfilaments, called stress fibers, contract and pull the trailing edge forward to keep up with the rest of the cell. Without this front-rear polarity, cells would be unable to coordinate directed migration. 
Budding yeast The budding yeast, Saccharomyces cerevisiae, is a model system for eukaryotic biology in which many of the fundamental elements of polarity development have been elucidated. Yeast cells share many features of cell polarity with other organisms, but feature fewer protein components. In yeast, polarity is biased to form at an inherited landmark, a patch of the protein Rsr1 in the case of budding, or a patch of Rax1 in mating projections. In the absence of polarity landmarks (i.e. in gene deletion mutants), cells can perform spontaneous symmetry breaking, in which the location of the polarity site is determined randomly. Spontaneous polarization still generates only a single bud site, which has been explained by positive feedback increasing polarity protein concentrations locally at the largest polarity patch while decreasing polarity proteins globally by depleting them. The master regulator of polarity in yeast is Cdc42, which is a member of the eukaryotic Ras-homologous Rho-family of GTPases, and a member of the super-family of small GTPases, which include Rop GTPases in plants and small GTPases in prokaryotes. For polarity sites to form, Cdc42 must be present and capable of cycling GTP, a process regulated by its guanine nucleotide exchange factor (GEF), Cdc24, and by its GTPase-activating proteins (GAPs). Cdc42 localization is further regulated by cell cycle queues, and a number of binding partners. A recent study to elucidate the connection between cell cycle timing and Cdc42 accumulation in the bud site uses optogenetics to control protein localization using light. During mating, these polarity sites can relocate. Mathematical modeling coupled with imaging experiments suggest the relocation is mediated by actin-driven vesicle delivery. Vertebrate development The bodies of vertebrate animals are asymmetric along three axes: anterior-posterior (head to tail), dorsal-ventral (spine to belly), and left-right (for example, our heart is on the left side of our body). These polarities arise within the developing embryo through a combination of several processes: 1) asymmetric cell division, in which two daughter cells receive different amounts of cellular material (e.g. mRNA, proteins), 2) asymmetric localization of specific proteins or RNAs within cells (which is often mediated by the cytoskeleton), 3) concentration gradients of secreted proteins across the embryo such as Wnt, Nodal, and Bone Morphogenic Proteins (BMPs), and 4) differential expression of membrane receptors and ligands that cause lateral inhibition, in which the receptor-expressing cell adopts one fate and its neighbors another. In addition to defining asymmetric axes in the adult organism, cell polarity also regulates both individual and collective cell movements during embryonic development such as apical constriction, invagination, and epiboly. These movements are critical for shaping the embryo and creating the complex structures of the adult body. Molecular basis Cell polarity arises primarily through the localization of specific proteins to specific areas of the cell membrane. This localization often requires both the recruitment of cytoplasmic proteins to the cell membrane and polarized vesicle transport along cytoskeletal filaments to deliver transmembrane proteins from the golgi apparatus. Many of the molecules responsible for regulating cell polarity are conserved across cell types and throughout metazoan species. 
Examples include the PAR complex (Cdc42, PAR3/ASIP, PAR6, atypical protein kinase C), Crumbs complex (Crb, PALS, PATJ, Lin7), and Scribble complex (Scrib, Dlg, Lgl). These polarity complexes are localized at the cytoplasmic side of the cell membrane, asymmetrically within cells. For example, in epithelial cells the PAR and Crumbs complexes are localized along the apical membrane and the Scribble complex along the lateral membrane. Together with a group of signaling molecules called Rho GTPases, these polarity complexes can regulate vesicle transport and also control the localization of cytoplasmic proteins primarily by regulating the phosphorylation of phospholipids called phosphoinositides. Phosphoinositides serve as docking sites for proteins at the cell membrane, and their state of phosphorylation determines which proteins can bind. Polarity establishment While many of the key polarity proteins are well conserved, different mechanisms exist to establish cell polarity in different cell types. Here, two main classes can be distinguished: (1) cells that are able to polarize spontaneously, and (2) cells that establish polarity based on intrinsic or environmental cues. Spontaneous symmetry breaking can be explained by amplification of stochastic fluctuations of molecules due to non-linear chemical kinetics. The mathematical basis for this biological phenomenon was established by Alan Turing in his 1953 paper 'The chemical basis of morphogenesis.' While Turing initially attempted to explain pattern formation in a multicellular system, similar mechanisms can also be applied to intracellular pattern formation. Briefly, if a network of at least two interacting chemicals (in this case, proteins) exhibits certain types of reaction kinetics, as well as differential diffusion, stochastic concentration fluctuations can give rise to the formation of large-scale stable patterns, thus bridging from a molecular length scale to a cellular or even tissue scale. A prime example for the second type of polarity establishment, which relies on extracellular or intracellular cues, is the C. elegans zygote. Here, mutual inhibition between two sets of proteins guides polarity establishment and maintenance. On the one hand, PAR-3, PAR-6 and aPKC (called anterior PAR proteins) occupy both the plasma membrane and cytoplasm prior to symmetry breaking. PAR-1, the C. elegans-specific ring-finger-containing protein PAR-2, and LGL-1 (called posterior PAR proteins) are present mostly in the cytoplasm. The male centrosome provides a cue, which breaks an initially homogenous membrane distribution of anterior PARs by inducing cortical flows. These are thought to advect anterior PARs towards one side of the cell, allowing posterior PARs to bind to other pole (posterior). Anterior and posterior PAR proteins then maintain polarity until cytokinesis by mutually excluding each other from their respective cell membrane areas.
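The spontaneous symmetry-breaking and positive-feedback ideas discussed in this section — a patch that amplifies itself while drawing down a shared, limited pool of polarity protein, as described above for budding yeast — can be caricatured in a few lines of code. The Python sketch below is a deliberately simplified toy model: the equations and parameters are invented for illustration and do not represent the real Cdc42 or PAR reaction network.

```python
import numpy as np

def compete(u0, total=3.0, k=1.0, delta=1.0, dt=0.01, steps=20_000):
    """Toy winner-take-all dynamics for competing polarity patches.

    Each patch grows autocatalytically (rate ~ k * pool * u**2) and turns over
    (rate ~ delta * u); all patches draw on one conserved cytoplasmic pool.
    """
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        pool = total - u.sum()                 # protein left in the cytoplasm
        du = k * pool * u**2 - delta * u       # local positive feedback vs. turnover
        u = np.clip(u + dt * du, 0.0, None)
    return u

# Two nearly identical nascent patches: the slightly larger one takes over.
print(compete([0.50, 0.55]))   # → roughly [0.0, 2.6]: a single polarity site remains
```

Because the two patches compete for the same finite pool, the marginally larger one outgrows and extinguishes the other, leaving a single dominant site — a cartoon of the winner-take-all behaviour described for yeast polarity above.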
Biology and health sciences
Cell processes
Biology
40364158
https://en.wikipedia.org/wiki/Antibiotic%20use%20in%20livestock
Antibiotic use in livestock
Antibiotic use in livestock is the use of antibiotics for any purpose in the husbandry of livestock, which includes treatment when ill (therapeutic), treatment of a group of animals when at least one is diagnosed with clinical infection (metaphylaxis), and preventative treatment (prophylaxis). Antibiotics are an important tool to treat animal as well as human disease, safeguard animal health and welfare, and support food safety. However, used irresponsibly, this may lead to antibiotic resistance which may impact human, animal and environmental health. While levels of use vary dramatically from country to country, for example some Northern European countries use very low quantities to treat animals compared with humans, worldwide an estimated 73% of antimicrobials (mainly antibiotics) are consumed by farm animals. Furthermore, a 2015 study also estimates that global agricultural antibiotic usage will increase by 67% from 2010 to 2030, mainly from increases in use in developing BRIC countries. Increased antibiotic use is a matter of concern as antibiotic resistance is considered to be a serious threat to human and animal welfare in the future, and growing levels of antibiotics or antibiotic-resistant bacteria in the environment could increase the numbers of drug-resistant infections in both. Bacterial diseases are a leading cause of death and a future without effective antibiotics would fundamentally change the way modern human as well as veterinary medicine is practised. However, legislation and other curbs on antibiotic use in farm animals are now being introduced across the globe. In 2017, the World Health Organization strongly suggested reducing antibiotic use in animals used in the food industry. The use of antibiotics for growth promotion purposes was banned in the European Union from 2006, and the use of sub-therapeutic doses of medically important antibiotics in animal feed and water to promote growth and improve feed efficiency became illegal in the United States on 1 January 2017, through regulatory change enacted by the Food and Drug Administration (FDA), which sought voluntary compliance from drug manufacturers to re-label their antibiotics. History The 2018 book 'Pharming animals: a global history of antibiotics in food production (1935–2017)' summarises the central role antibiotics have played in agriculture: "Since their advent during the 1930s, antibiotics have not only had a dramatic impact on human medicine, but also on food production. On farms, whaling and fishing fleets as well as in processing plants and aquaculture operations, antibiotics were used to treat and prevent disease, increase feed conversion, and preserve food. Their rapid diffusion into nearly all areas of food production and processing was initially viewed as a story of progress on both sides of the Iron Curtain." To retrace, while natural antibiotics or antibacterials were known to ancient man, antibiotics as we know them came to the fore during World War II to help treat war time casualties. It is recorded that antibiotics were first used in farming towards the end of the war, in the form of intra-mammary penicillin preparations to treat bovine mastitis. At that time, milk was seen as an agricultural product which was highly susceptible to bacterial contamination, and farmers welcomed the opportunity to 'purify' their produce for the safety of consumers; it was only later that concern switched from the bacterial load of the product to the residues that might result from untimely or unregulated treatment. 
The use of antibiotics to treat and prevent disease in animals has followed a similar path to that in human medicine: therapeutic and metaphylactic applications to treat and manage disease and improve population health, and case-by-case strategic preventative treatment when animals are deemed at particular risk. However, in the late 1940s, studies examining the supplementation of vitamin B12 in chicks' diets found that B12 produced from the fermentation of Streptomyces aureofaciens, the bacterium used to produce the antibiotic chlortetracycline (also used in human medicine), gave better weight gain for chicks than B12 from other sources, and reduced the amount of feed needed to bring the birds to market weight. Further studies on other livestock species showed a similar improvement in growth and feed efficiency, with the result that, as the cost of antibiotics came down, they were increasingly included at low ('sub-therapeutic') levels in livestock feed as a means of increasing production of affordable animal protein to meet the needs of a rapidly expanding post-war population. This development coincided with an increase in the scale of individual farms and in the level of confinement of the animals on them, and so routine preventative antibiotic treatment became the most cost-effective means of managing the disease that could arise as a result. Veterinary medicine increasingly embraced the therapeutic, metaphylactic and strategic preventative use of antibiotics to treat disease. The routine use of antibiotics for growth stimulation and disease prevention also grew. The use of antibiotics for growth promotion has been banned in the UK, as across the EU, since 2006 – however, in 2017 an estimated 73% of all antibiotics sold globally were used in animals farmed for food. Growth stimulation In 1910 in the United States, a meat shortage resulted in protests and boycotts. After this and other shortages, the public demanded government research into stabilization of food supplies. Since the early 1900s, livestock producers on United States farms have had to rear larger numbers of animals over a short period of time to meet new consumer demands. It was discovered in the 1940s that feeding subtherapeutic levels of antibiotics improved feed efficiency and accelerated animal growth. Following this discovery, American Cyanamid published research establishing the practice of using antibiotic growth promoters. By 2001, this practice had grown so much that a report by the Union of Concerned Scientists found that nearly 90% of the total use of antimicrobials in the United States was for non-therapeutic purposes in agricultural production. Certain antibiotics, when given in low, subtherapeutic doses, are known to improve feed conversion efficiency (more output, such as muscle or milk, for a given amount of feed) and may promote greater growth, most likely by affecting gut flora. The drugs listed below can be used to increase feed conversion ratio and weight gain, but are no longer legally allowed to be used for such purposes in the United States. Some drugs listed below are ionophores, which are coccidiostats and not classified as antibiotics in many countries; they have not been shown to increase the risk of antibiotic-resistant infections in humans. 
The practice of using antibiotics for growth stimulation has been deemed problematic for the following reasons: it is the largest use of antimicrobials worldwide; subtherapeutic use of antibiotics results in bacterial resistance; every important class of antibiotic is being used in this way, making every class less effective; and the resistant bacteria that emerge can harm humans. Antibiotic resistance Mechanisms for the development of resistance Antibiotic resistance – often referred to as antimicrobial resistance (AMR), although that term also covers anti-virals, anti-fungals and other products – can occur when antibiotics are present in concentrations too low to inhibit bacterial growth, triggering cellular responses in the bacteria that allow them to survive. These bacteria can then reproduce and pass their antibiotic-resistance genes on to later generations, increasing their prevalence and leading to infections that cannot be cured by antibiotics. This is a growing matter of concern, as antibiotic resistance is considered to be a serious future threat to human welfare. Infectious diseases are the third leading cause of death in Europe, and a future without effective antibiotics would fundamentally change the way modern medicine is practised. Bacteria can alter their genetic inheritance in two main ways: by mutating their genetic material or by acquiring new material from other bacteria. The latter is the more important route for producing antibiotic-resistant bacterial strains in animals and humans. One way bacteria can obtain new genes is through a process called conjugation, which transfers genes using plasmids. These conjugative plasmids carry a number of genes that can be assembled and rearranged, enabling bacteria to exchange beneficial genes among themselves, ensuring their survival against antibiotics, rendering the antibiotics ineffective for treating dangerous diseases in humans, and resulting in multi-drug-resistant organisms. However, antibiotic resistance also occurs naturally, as it is a bacterium's response to any threat. As a result, antibiotic-resistant bacteria have been found in pristine environments unrelated to human activity, such as in the frozen and uncovered remains of woolly mammoths, in the polar ice caps and in isolated caves deep underground. High priority antibiotics The World Health Organization (WHO) published a revised list in 2019 of 'Critically Important Antimicrobials for Human Medicine, 6th revision' with the intent that it be used "as a reference to help formulate and prioritise risk assessment and risk management strategies for containing antimicrobial resistance due to human and non-human antimicrobial use to help preserve the effectiveness of currently available antimicrobials". It lists its Highest Priority Critically Important Antimicrobials as: 3rd, 4th and 5th generation cephalosporins, glycopeptides, macrolides and ketolides, polymyxins including colistin, and quinolones including fluoroquinolones. The European Medicines Agency (EMA) Antimicrobial Advice Ad Hoc Expert Group (AMEG) also published an updated categorisation of antibiotics used in veterinary medicine, ranked by the antibiotic-resistance risk to humans of using them alongside the need to treat disease in animals for health and welfare reasons. The categorisation specifically focuses on the situation in Europe. Category A ('Avoid') antibiotics are designated as 'not appropriate for use in food producing animals'. 
Category B ('Restrict') products, also known as Highest Priority Critically Important Antibiotics, are only to be used as a last resort. These include quinolones (such as fluoroquinolones), 3rd and 4th generation cephalosporins, and polymyxins, including colistin. A new intermediate Category C ('Caution') has been created for antibiotics which should be used only when there is no clinically effective product available in Category D ('Prudence'). Category C includes macrolides and aminoglycosides, with the exception of spectinomycin, which remains in Category D. Evidence for the transfer of macrolide-resistant microorganisms from animals to humans has been scant, and most evidence shows that pathogens of concern in human populations originated in humans and are maintained there, with only rare cases of transfer from animals to humans. Macrolides are also extremely useful in the effective treatment of some Mycoplasma species in poultry, Lawsonia in pigs, respiratory tract infections in cattle and, in some circumstances, lameness in sheep. Sources of antibiotic resistance Summary While the human medical use of antibiotics is the main source of antibiotic-resistant infections in humans, it is known that humans can acquire antibiotic-resistance genes from a variety of animal sources, including farm animals, pets and wildlife. Three potential mechanisms by which agricultural antibiotic use could lead to human disease have been identified: 1 - direct infection with resistant bacteria from an animal source; 2 - breaches in the species barrier followed by sustained transmission in humans of resistant strains arising in livestock; 3 - transfer of resistance genes from agriculture into human pathogens. While there is evidence of transmission of resistance from animals to humans in all three cases, either the scale is limited or causality is hard to establish. As Chang et al. (2014) state: "The topic of agricultural antibiotic use is complex. As we noted ... many believe that agricultural antibiotics have become a critical threat to human health. While the concern is not unwarranted, the extent of the problem may be exaggerated. There is no evidence that agriculture is 'largely to blame' for the increase in resistant strains and we should not be distracted from finding adequate ways to ensure appropriate antibiotic use in all settings, the most important of which being clinical medicine." Direct contact with animals In terms of direct infection with resistant bacteria from an animal source, studies have shown that direct contact with livestock can lead to the spread of antibiotic-resistant bacteria. The risk appears greatest in those handling or managing livestock; for example, one study monitored resistant bacteria in farm labourers and their neighbours after chickens received an antibiotic in their feed. Manure may also contain antibiotic-resistant Staphylococcus aureus bacteria which can infect humans. In 2017, the WHO included methicillin-resistant S. aureus (MRSA) in its priority list of 12 antibiotic-resistant bacteria, urging the need to search for new and more effective antibiotics against it. There has also been an increase in the number of bacterial pathogens resistant to multiple antimicrobial agents, including MRSA, which has recently diverged into different lineages. Some of these lineages are associated with livestock and on-farm companion animals and can be transmitted to humans; this form is known as livestock-associated methicillin-resistant Staphylococcus aureus (LA-MRSA). 
These new lineages can be found on the soft tissues of livestock workers, for example in their noses. One study looked at the association between exposure to livestock and the occurrence of LA-MRSA infection and observed that LA-MRSA infection was 9.64 times as likely among livestock workers and veterinarians as among their unexposed families and community members, showing that exposure to livestock significantly increases the risk of developing MRSA infection. Although the total number of people colonised by LA-MRSA remains low, and fewer still develop infection, the condition is nonetheless rising in prevalence, difficult to treat, and has become a public health concern. For companion animals without farm contact, information about MRSA is still limited. Therefore, AMR surveillance should be extended, especially since antimicrobial use limitations for livestock are often extended to pets without further justification. Foodborne antibiotic resistance Another way humans can be exposed to antibiotic-resistant bacteria is via pathogens on food. In particular, if resistant bacteria are ingested by humans via food and then colonise the gut, they can cause infections which are unpleasant enough in themselves, and which can be even harder to treat if they are serious enough to require antibiotic treatment yet are resistant to commonly used antibiotics. Campylobacter, Salmonella, E. coli and Listeria species are the most common foodborne bacteria. Salmonella and Campylobacter alone account for over 400,000 Americans becoming sick from antibiotic-resistant infections every year. Dairy products, minced (ground) beef and poultry are among the most common foods that can harbour pathogens both resistant and susceptible to antibiotics, and surveillance of retail meats such as turkey, chicken, pork and beef has found Enterobacteriaceae. While some studies have established connections between antibiotic-resistant infections and food-producing animals, others have struggled to establish causal links, even when examining plasmid-mediated resistance. Standard precautions such as pasteurisation, proper preparation and cooking of meat, food preservation methods, and effective hand washing can help eliminate, decrease, or prevent the spread of and infection from these and other potentially harmful bacteria. Other sources of resistance As well as via food, E. coli from a variety of sources can also cause urinary and bloodstream infections. While one study suggests a large proportion of resistant E. coli isolates causing bloodstream infections in people could emanate from livestock produced for food, other studies have since contradicted this, finding little commonality between resistance genes from livestock sources and those found in human infections, even when examining plasmid-mediated resistance. The use of antibiotics in livestock also has the potential to introduce antibiotic-resistant bacteria to humans via environmental exposure or inhalation of airborne bacteria. Antibiotics given to livestock in sub-therapeutic concentrations to stimulate growth when there is no diagnosis of disease – a practice still permitted in some countries – may kill some, but not all, of the bacterial organisms in the animal, possibly leaving those that are naturally antibiotic-resistant in the environment. Hence the practice of using antibiotics for growth stimulation could result in selection for resistance. 
Antibiotics are not fully digested and processed in the animal or human gut; therefore, an estimated 40–90% of the antibiotics ingested are excreted in urine and/or faeces. This means that, as well as antibiotics being found in human sewage and animal manure, both can also contain antibiotic-resistant bacteria which have developed in vivo or in the environment. When animal manures are stored inadequately or applied as fertiliser, this can spread bacteria to crops and into run-off water. Antibiotics have been found in small amounts in crops grown in fertilised fields, and detected in runoff from land fertilised with animal waste. Composting has been shown to reduce the presence of various antibiotics by 20–99%, but one study found that chlortetracycline, an antibiotic used in livestock feed in China, degraded at different rates depending on the animal it was fed to, and that manure composting was not sufficient to ensure microbial degradation of the antibiotic. Global positions on antibiotic use in farm animals In 2017, the World Health Organization (WHO) recommended reducing antibiotic use in animals used in the food industry. Due to the increasing risk of antibiotic-resistant bacteria, the WHO strongly suggested restrictions on antibiotics used for growth promotion and antibiotics used on healthy animals. Animals that require antibiotics should be treated with antibiotics that pose the smallest risk to human health. HSBC also produced a report in October 2018 warning that the use of antibiotics in meat production could have "devastating" consequences for humans. It noted that many dairy and meat producers in Asia and the Americas had an economic incentive to continue high usage of antibiotics, particularly in crowded or unsanitary living conditions. However, the World Organisation for Animal Health has acknowledged the need to protect antibiotics but argued against a total ban on antibiotic use in animal production. A total ban on antibiotics might drastically reduce protein supply in some parts of the world, and when the use of antibiotics in livestock is reduced or eliminated, whether through legislation or voluntarily, both animal health and welfare and farm economics can be negatively affected. For example, on farms where antibiotic use has been cut back or eliminated to meet consumer demand for 'antibiotic-free' or 'reared without antibiotics' produce, the change has in some cases been shown to have a detrimental effect on animal health and welfare. When antibiotics are used sub-therapeutically (for animal performance, increased growth, and improved feed efficiency), the costs of meat, eggs, and other animal products are lowered. One major argument against the restriction of antibiotic use is the potential economic hardship for producers of livestock and poultry, which could also result in higher costs for consumers. A study analysing the economic cost of the FDA restricting all antibiotic use in livestock estimated that the restriction would cost consumers approximately $1.2 billion to $2.5 billion per year. To determine the overall economic impact of restricting antibiotic use, the financial cost must be weighed against the health benefits to the population. Since it is difficult to estimate the value of potential health benefits, the study concluded that the complete economic impact of restricting antibiotic use has not yet been determined. 
Although quantifying health benefits may be difficult, the economic impact of restricting antibiotics in animals can also be evaluated through the economic impact of antibiotic resistance in humans, a significant outcome of antibiotic use in animals. The World Health Organization identifies antibiotic resistance as a contributor to longer hospital stays and higher medical costs. When infections can no longer be treated by typical first-line antibiotics, more expensive medications are required for treatment. When illness duration is extended by antibiotic resistance, the increased health care costs create a larger economic burden for families and societies. The Center for Infectious Disease Research and Policy estimates approximately $2.2 billion in antibiotic-resistance-related healthcare costs each year. So while restricting antibiotics in animals carries a significant economic burden, the antibiotic resistance in humans that is perpetuated by antibiotic use in animals carries a comparable one. Use and regulation by country The use of medicines to treat disease in food-producing animals is regulated in nearly all countries, although some countries prescription-control their antibiotics, meaning only qualified veterinary surgeons can prescribe and, in some cases, dispense them. Historically, the restrictions have existed to prevent contamination of meat, milk, eggs and honey with chemicals that are in any way harmful to humans. Treating a sick animal with medicines may result in the animal product containing some of those medicines when the animal is slaughtered, milked, lays eggs or produces honey, unless withdrawal periods are adhered to; these stipulate a period of time to ensure the medicines have left the animal's system sufficiently to avoid any risk. Scientific experiments provide data for each medicine in each application, showing how long it is present in the body of an animal and how the animal's body metabolises the medicine. By observing 'drug withdrawal periods' before slaughter or before using milk or eggs from treated animals, veterinarians and animal owners ensure that the meat, milk and eggs are safe and free of any contamination. However, some countries have also banned or heavily controlled the routine use of antibiotics for growth stimulation or for the preventative control of disease arising from deficiencies in management or facilities. This is not over concerns about residues, but about the growth of antibiotic resistance. Brazil Brazil is the world's largest exporter of beef. The government regulates antibiotic use in the cattle production industry. The beef cattle industry in Brazil is based on grass-fed animals, among which the Nellore breed predominates. The volume of antimicrobials used is not officially published in Brazil; case studies conducted on farms are the only way to obtain estimates and data on antimicrobial use. A National Action Plan on Antimicrobial Resistance in Agriculture was put in place to contain the rise in antimicrobial resistance and limit the use of antibiotics in livestock production. Not all antimicrobial use is banned in Brazil; treatment for therapeutic, metaphylactic, and prophylactic reasons is allowed. Canada Because of concerns about antibiotic residues getting into the milk or meat of cattle, the Canadian Food Inspection Agency (CFIA) enforces standards which protect consumers by ensuring that foods produced will not contain antibiotics at a level which will cause harm to consumers. 
In Canada, veterinary drug regulation involves two federal government agencies, Health Canada and the CFIA, which are responsible for implementing and enforcing the Food and Drugs Act. Testing samples for drug residues involves three methods: monitoring, surveillance, and compliance. There are Swab Test On Premises (STOP) procedures to detect antibiotic residues in kidney tissues. China China produces and consumes the most antibiotics of all countries. Antibiotic use has been measured by checking the water near factory farms in China as well as through animal faeces. It was calculated that 38.5 million kg (or 84.9 million lbs) of antibiotics were used in China's swine and poultry production in 2012. The abuse of antibiotics has caused severe pollution of soil and surface water in northern China. In 2012, U.S. News & World Report described the Chinese government's regulation of antibiotics in livestock production as "weak". In the UK 5-Year Antimicrobial Resistance (AMR) Strategy 2013–2018, addressing the negative effects of AMR on animal health was considered as important as addressing its effects on human health, and several scientific partnerships with low- and middle-income countries were to be established. The UK–China Newton Fund has begun building cross-border, multi-disciplinary collaboration to curb the growing global burden caused by AMR. To safeguard public health and food safety, the "National Action Plan on Controlling Antibiotic-Resistant Bacteria of Animal Origin (2016–2020)" was published by the Ministry of Agriculture and Rural Affairs of the People's Republic of China in 2017. This plan is fully integrated with the concept of One Health; it covers not only research and development but also the social context. European Union In 1999, the European Union (EU) implemented an antibiotic resistance monitoring program and a plan to phase out antibiotic use for the purposes of growth promotion by 2006. The European Union banned the use of antibiotics as growth agents starting on 1 January 2006 with Regulation (EC) No 1831/2003. In Germany, 1,734 tons of antimicrobial agents were used for animals in 2011 compared with 800 tons for humans. Sweden was the first country to ban all use of antibiotics as growth promoters, in 1986, and played a major role in the EU-wide ban through extensive lobbying after joining the EU in 1995. Another strategy actively applied in Sweden is the prudent use of antibiotics through individual rather than group treatment (on average, more than 90% of treatment is individual, with tablets, injectables or intramammaries). Denmark started cutting back drastically in 1994 and now uses 60% less. In the Netherlands, the use of antibiotics to treat diseases increased after the ban on their use for growth purposes in 2006. In 2011, the European Parliament voted for a non-binding resolution that called for the end of the preventative use of antibiotics in livestock. A revised regulation on veterinary medicinal products, proposed in procedure 2014/0257/COD, proposed limiting the use of antibiotics in prophylaxis and metaphylaxis. An agreement on the regulation between the Council of the European Union and the European Parliament was confirmed on 13 June 2018, and the new Veterinary Medicines Regulation (Regulation (EU) 2019/6) is due to come into effect on 28 January 2022. India In 2011 the Indian government proposed a "National policy for containment of antimicrobial resistance". 
Other policies set schedules requiring that food-producing animals not be given antibiotics for a certain amount of time before their products go to market. A study released by the Centre for Science and Environment (CSE) on 30 July 2014 found antibiotic residues in chicken. The study claims that Indians are developing resistance to antibiotics – and hence falling prey to a host of otherwise curable ailments. Some of this resistance might be due to large-scale unregulated use of antibiotics in the poultry industry. CSE finds that India has not set any limits for antibiotic residues in chicken and says that India will have to implement a comprehensive set of regulations, including a ban on the use of antibiotics as growth promoters in the poultry industry; not doing so, it argues, will put people's lives at risk. New Zealand In 1999 the New Zealand government issued a statement that it would not then ban the use of antibiotics in livestock production. In 2007 ABC Online reported on antibiotic use in chicken production in New Zealand. In 2017, New Zealand published a new action plan to address the ongoing concern of antimicrobial resistance (AMR). The action plan outlined five objectives, each addressing both AMR in humans and AMR in agriculture. Compared to other countries, New Zealand has a very low prevalence of AMR in animals and plants, owing to its low use of antibiotics in animal treatment. South Korea In 1998 some researchers reported that antibiotic use in livestock production was a factor in the high prevalence of antibiotic-resistant bacteria in Korea. In 2007 The Korea Times noted that Korea has relatively high usage of antibiotics in livestock production. In 2011, the Korean government banned the use of antibiotics as growth promoters in livestock. United Kingdom As in other countries in Europe, the use of antibiotics for growth promotion was banned in 2006. Less than one third of all antibiotics sold in the UK are now estimated to be used to treat or prevent disease in farmed animals, following a revision to the 2017 sales data published by the UK Government's Veterinary Medicines Directorate. Furthermore, 2018 sales data estimated use at 29.5 mg of antibiotics per kg of animal at time of treatment during that year. This represents a 53% reduction in sales of antibiotics to treat food-producing animals over five years. The reduction has largely been achieved without legislation, and has been credited to voluntary industry action coordinated by the Responsible Use of Medicines in Agriculture (RUMA) Alliance through a 'Targets Task Force' comprising a prominent veterinary surgeon and farmer from each livestock sector. A European comparison of 2017 sales data found the UK had the fifth lowest sales in Europe during that year, with 2018 comparisons due to be released towards the end of 2020. While sales data give an overview of levels of use, products are often licensed for use in many species, and it is therefore not possible to determine levels of use in different species without more specific usage data from each sector. In 2011, British Poultry Council members, representing 90% of the UK poultry meat industry, formed a stewardship programme that started recording antibiotics used to treat birds in the poultry meat sector in 2012. The first report was published in 2016 and reported a 44% reduction in antibiotic use between 2012 and 2015. 
Since then, the organisation has produced three further reports, with the 2019 report confirming that the sector is maintaining reductions of over 80% in total use since it started its stewardship group, as well as reducing use of Highest Priority Critically Important Antibiotics by over 80% by stopping the use of 3rd and 4th generation cephalosporins in 2012 and colistin in 2016, and only using macrolides and fluoroquinolones as a last resort. Preventative use of antibiotics has also stopped. As many products are licensed for use in both poultry and pigs, the increasing transparency around use in the UK poultry meat sector motivated the UK pig sector to set up a stewardship programme in 2016 through the National Pig Association. In 2017, an electronic Medicine Book for pigs (eMB-Pigs) was launched by the levy body, the Agriculture and Horticulture Development Board. eMB-Pigs provides a centralised electronic version of the existing paper or electronic medicine book kept on farms, and allows pig producers to record and quantify their individual use of medicines for easy review with the veterinary surgeon, at the same time as capturing use on each farm so that data can be collated to provide national usage figures. After it became a requirement of Red Tractor farm assurance for pigs that annual, aggregated records of antibiotic use be logged on the eMB system, data released in May 2018 showed that, according to records covering 87% of the UK slaughter pig population, antibiotic use had halved between 2015 and 2017. Data for 2018 confirm that overall antibiotic use in the UK pig sector fell further, by 60% from the estimated 2015 figure, to 110 mg/kg. Use of Highest Priority Critically Important Antibiotics also fell, to 0.06 mg/kg, a reduction of 95% from 2015, with use of colistin almost nil. Factors such as levels of infectious disease domestically or internationally, weather and vaccine availability can all affect antibiotic use. For example, the Scottish salmon farming sector worked with government and researchers to introduce a vaccine for the disease furunculosis (Aeromonas salmonicida) in 1994, which significantly reduced the need for antibiotic treatments, but the trout sector is still without an effective vaccine for this disease. Lack of data can also make it difficult for farmers to know how they compare with their peers or what they need to focus on, a particular problem for the sheep and cattle sectors in the UK, which are in the process of trying to set up their own electronic medicines hub to capture data. United States In 1970 the FDA first recommended that antibiotic use in livestock be limited but set no actual regulations governing this recommendation. By 2001, the Union of Concerned Scientists estimated that more than 70% of the antibiotics consumed in the US were given to food animals (for example, chickens, pigs, and cattle) in the absence of disease. In 2004 the Government Accountability Office (GAO) heavily critiqued the FDA for not collecting enough information and data on antibiotic use in factory farms. From this, the GAO concluded the FDA did not have enough information to create effective policy changes regarding antibiotic use. In response, the FDA said more research was being conducted and voluntary efforts within the industry would solve the problem of antibiotic resistance. However, by 2011, the total quantity of antimicrobials sold for use in food-producing animals in the United States represented 80% of all antibiotics sold or distributed in the country. 
In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for uses of antibiotics in livestock that violated FDA regulations. On 11 April 2012 the FDA announced a voluntary program to phase out unsupervised use of drugs as feed additives and to convert approved over-the-counter uses of antibiotics to prescription-only use, requiring veterinarian supervision and a prescription. In December 2013, the FDA announced the commencement of these steps to phase out the use of antibiotics for the purpose of promoting livestock growth. In 2015, the FDA approved a new Veterinary Feed Directive (VFD), an updated guideline giving instructions to pharmaceutical companies, veterinarians and producers about how to administer necessary drugs through an animal's feed and water. Around the same time, the FDA published a report on antibiotics sold or distributed for food-producing animals which found that between 2009 and 2013, just over 60% were "medically important" drugs also used in humans; the rest were from drug classes like ionophores, which are not used in human medicine. Following this, the FDA asked drug companies to voluntarily edit their labels to exclude growth promotion as an indication for antibiotic usage. It subsequently reported that "Under Guidance for Industry (GFI) #213, which went into effect January 1, 2017, antibiotics that are important for human medicine can no longer be used for growth promotion or feed efficiency in cows, pigs, chickens, turkeys, and other food animals." These new 2017 guidelines, for instance, prohibited using a drug off-label for non-therapeutic purposes, which made using the re-labeled drugs for growth enhancement illegal. In addition, some drugs were reclassified from 'Over the Counter' (OTC) to 'Veterinary Feed Directive' (VFD); VFD drugs require a veterinarian's authorization before they can be delivered in feed. As a result, the FDA reported a 33% decrease from 2016 to 2017 in domestic sales of medically important antibiotics for use in livestock. Despite this progress, the Natural Resources Defense Council (NRDC) remains concerned that sales of antibiotics to the beef and pork industries remained elevated in 2017 compared with the poultry industries, and that their use could still primarily be for preventing diseases in healthy animals, which further increases the threat of antibiotic resistance. The FDA's overall policy position, however, remains as stated in 2013. Because of concerns about antibiotic residues getting into the milk or meat of cattle, the United States government requires a withdrawal period for any animal treated with antibiotics before it can be slaughtered, to allow residues to exit the animal. Some grocery stores have policies about antibiotic use in the animals whose produce they sell. In response to consumer concerns about the use of antibiotics in poultry, Perdue removed all human antibiotics from its feed in 2007 and launched the Harvestland brand, under which it sold products that met the requirements for an "antibiotic-free" label. In 2012, the United States advocacy organization Consumers Union organized a petition asking the grocery chain Trader Joe's to discontinue the sale of meat produced with antibiotics. 
By 2014, Perdue had also phased out ionophores from its hatchery and had begun using "antibiotic free" labels on some products, and by 2015, 52% of the company's chickens were raised without the use of any type of antibiotic. The CDC and FDA no longer support the use of antibiotics for growth promotion because of evidence suggesting that antibiotics used for growth promotion purposes could lead to the development of resistant bacteria. In addition, The Pew Charitable Trusts has stated that "hundreds of scientific studies conducted over four decades demonstrate that feeding low doses of antibiotics to livestock breeds antibiotic-resistant superbugs that can infect people". The FDA, the U.S. Department of Agriculture and the Centers for Disease Control and Prevention have all testified before Congress that there is a definitive link between the routine, non-therapeutic use of antibiotics in food animal production and the challenge of antibiotic resistance in humans. However, the National Pork Board, a government-owned corporation of the United States, has said: "The vast majority of producers use (antibiotics) appropriately." In 2011 the National Pork Producers Council, an American trade association, also said, "Not only is there no scientific study linking antibiotic use in food animals to antibiotic resistance in humans, as the US pork industry has continually pointed out, but there isn't even adequate data to conduct a study." The statement was issued in response to a United States Government Accountability Office report that asserts: "Antibiotic use in food animals contributes to the emergence of resistant bacteria that may affect humans". It is difficult to set up a comprehensive surveillance system for measuring rates of change in antibiotic resistance. The US Government Accountability Office published a report in 2011 stating that government and commercial agencies had not been collecting sufficient data to make a decision about best practices. There is also no regulatory agency in the United States that systematically collects detailed data on antibiotic use in humans and animals, which means it is not clear which antibiotics are prescribed for which purpose and at what time. While data collection may be lacking at a regulatory level, the US poultry meat sector has been working on the issue itself, and has now reported comparative data showing significant reductions in antibiotic use. Among the highlights in the report were a 95% decrease in in-feed tetracycline use in broiler chicks from 2013 to 2017, a 67% reduction in in-feed use of tetracycline in turkeys, and a 42% drop in hatchery use of gentamicin in turkey poults. This is an encouraging sign; the 53% overall reduction in antibiotic use seen in the UK between 2013 and 2018 likewise began with a voluntary stewardship programme developed by the UK poultry meat sector. Research into alternatives Increasing concern over the emergence of antibiotic-resistant bacteria has led researchers to look for alternatives to using antibiotics in livestock. Probiotics, cultures of a single bacterial strain or a mixture of different strains, are being studied in livestock as production enhancers. Prebiotics are non-digestible carbohydrates, mainly oligosaccharides: short chains of monosaccharides. The two most commonly studied prebiotics are fructooligosaccharides (FOS) and mannanoligosaccharides (MOS). FOS has been studied for use in chicken feed. 
MOS works as a competitive binding site: bacteria bind to it rather than to the intestine and are carried out of the gut. Bacteriophages, which can infect most bacteria and are easily found in most environments colonized by bacteria, have also been studied. Another study found that using probiotics, competitive exclusion, enzymes, immunomodulators and organic acids prevents the spread of bacteria, and that all of these can be used in place of antibiotics. A further research team was able to use bacteriocins, antimicrobial peptides and bacteriophages to control bacterial infections. While further research is needed in this field, alternative methods have been identified that can effectively control bacterial infections in animals. Other alternatives include preventative approaches that keep the animals healthier and so reduce the need for antibiotics. These include improving living conditions for animals, stimulating natural immunity through better nutrition, increasing biosecurity, implementing better management and hygiene practices, and ensuring better use of vaccination.
Technology
Animal husbandry
null
23434533
https://en.wikipedia.org/wiki/Filter%20%28signal%20processing%29
Filter (signal processing)
In signal processing, a filter is a device or process that removes some unwanted components or features from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. Most often, this means removing some frequencies or frequency bands. However, filters do not exclusively act in the frequency domain; especially in the field of image processing many other targets for filtering exist. Correlations can be removed for certain frequency components and not for others without having to act in the frequency domain. Filters are widely used in electronics and telecommunication, in radio, television, audio recording, radar, control systems, music synthesis, image processing, computer graphics, and structural dynamics. There are many different bases of classifying filters and these overlap in many different ways; there is no simple hierarchical classification. Filters may be: non-linear or linear time-variant or time-invariant, also known as shift invariance. If the filter operates in a spatial domain then the characterization is space invariance. causal or non-causal: A filter is non-causal if its present output depends on future input. Filters processing time-domain signals in real time must be causal, but not filters acting on spatial domain signals or deferred-time processing of time-domain signals. analog or digital discrete-time (sampled) or continuous-time passive or active type of continuous-time filter infinite impulse response (IIR) or finite impulse response (FIR) type of discrete-time or digital filter. Linear continuous-time filters Linear continuous-time circuit is perhaps the most common meaning for filter in the signal processing world, and simply "filter" is often taken to be synonymous. These circuits are generally designed to remove certain frequencies and allow others to pass. Circuits that perform this function are generally linear in their response, or at least approximately so. Any nonlinearity would potentially result in the output signal containing frequency components not present in the input signal. The modern design methodology for linear continuous-time filters is called network synthesis. Some important filter families designed in this way are: Chebyshev filter, has the best approximation to the ideal response of any filter for a specified order and ripple. Butterworth filter, has a maximally flat frequency response. Bessel filter, has a maximally flat phase delay. Elliptic filter, has the steepest cutoff of any filter for a specified order and ripple. The difference between these filter families is that they all use a different polynomial function to approximate to the ideal filter response. This results in each having a different transfer function. Another older, less-used methodology is the image parameter method. Filters designed by this methodology are archaically called "wave filters". Some important filters designed by this method are: Constant k filter, the original and simplest form of wave filter. m-derived filter, a modification of the constant k with improved cutoff steepness and impedance matching. Terminology Some terms used to describe and classify linear filters: The frequency response can be classified into a number of different bandforms describing which frequency bands the filter passes (the passband) and which it rejects (the stopband): Low-pass filter – low frequencies are passed, high frequencies are attenuated. 
High-pass filter – high frequencies are passed, low frequencies are attenuated. Band-pass filter – only frequencies in a frequency band are passed. Band-stop filter or band-reject filter – only frequencies in a frequency band are attenuated. Notch filter – rejects just one specific frequency - an extreme band-stop filter. Comb filter – has multiple regularly spaced narrow passbands giving the bandform the appearance of a comb. All-pass filter – all frequencies are passed, but the phase of the output is modified. Cutoff frequency is the frequency beyond which the filter will not pass signals. It is usually measured at a specific attenuation such as 3 dB. Roll-off is the rate at which attenuation increases beyond the cut-off frequency. Transition band, the (usually narrow) band of frequencies between a passband and stopband. Ripple is the variation of the filter's insertion loss in the passband. The order of a filter is the degree of the approximating polynomial and in passive filters corresponds to the number of elements required to build it. Increasing order increases roll-off and brings the filter closer to the ideal response. One important application of filters is in telecommunication. Many telecommunication systems use frequency-division multiplexing, where the system designers divide a wide frequency band into many narrower frequency bands called "slots" or "channels", and each stream of information is allocated one of those channels. The people who design the filters at each transmitter and each receiver try to balance passing the desired signal through as accurately as possible, keeping interference to and from other cooperating transmitters and noise sources outside the system as low as possible, at reasonable cost. Multilevel and multiphase digital modulation systems require filters that have flat phase delay—are linear phase in the passband—to preserve pulse integrity in the time domain, giving less intersymbol interference than other kinds of filters. On the other hand, analog audio systems using analog transmission can tolerate much larger ripples in phase delay, and so designers of such systems often deliberately sacrifice linear phase to get filters that are better in other ways—better stop-band rejection, lower passband amplitude ripple, lower cost, etc. Technologies Filters can be built in a number of different technologies. The same transfer function can be realised in several different ways, that is the mathematical properties of the filter are the same but the physical properties are quite different. Often the components in different technologies are directly analogous to each other and fulfill the same role in their respective filters. For instance, the resistors, inductors and capacitors of electronics correspond respectively to dampers, masses and springs in mechanics. Likewise, there are corresponding components in distributed-element filters. Electronic filters were originally entirely passive consisting of resistance, inductance and capacitance. Active technology makes design easier and opens up new possibilities in filter specifications. Digital filters operate on signals represented in digital form. The essence of a digital filter is that it directly implements a mathematical algorithm, corresponding to the desired filter transfer function, in its programming or microcode. Mechanical filters are built out of mechanical components. 
In the vast majority of cases they are used to process an electronic signal and transducers are provided to convert this to and from a mechanical vibration. However, examples do exist of filters that have been designed for operation entirely in the mechanical domain. Distributed-element filters are constructed out of components made from small pieces of transmission line or other distributed elements. There are structures in distributed-element filters that directly correspond to the lumped elements of electronic filters, and others that are unique to this class of technology. Waveguide filters consist of waveguide components or components inserted in the waveguide. Waveguides are a class of transmission line and many structures of distributed-element filters, for instance the stub, can also be implemented in waveguides. Optical filters were originally developed for purposes other than signal processing such as lighting and photography. With the rise of optical fiber technology, however, optical filters increasingly find signal processing applications and signal processing filter terminology, such as longpass and shortpass, are entering the field. Transversal filter, or delay line filter, works by summing copies of the input after various time delays. This can be implemented with various technologies including analog delay lines, active circuitry, CCD delay lines, or entirely in the digital domain. Digital filters Digital signal processing allows the inexpensive construction of a wide variety of filters. The signal is sampled and an analog-to-digital converter turns the signal into a stream of numbers. A computer program running on a CPU or a specialized DSP (or less often running on a hardware implementation of the algorithm) calculates an output number stream. This output can be converted to a signal by passing it through a digital-to-analog converter. There are problems with noise introduced by the conversions, but these can be controlled and limited for many useful filters. Due to the sampling involved, the input signal must be of limited frequency content or aliasing will occur. Quartz filters and piezoelectrics In the late 1930s, engineers realized that small mechanical systems made of rigid materials such as quartz would acoustically resonate at radio frequencies, i.e. from audible frequencies (sound) up to several hundred megahertz. Some early resonators were made of steel, but quartz quickly became favored. The biggest advantage of quartz is that it is piezoelectric. This means that quartz resonators can directly convert their own mechanical motion into electrical signals. Quartz also has a very low coefficient of thermal expansion which means that quartz resonators can produce stable frequencies over a wide temperature range. Quartz crystal filters have much higher quality factors than LCR filters. When higher stabilities are required, the crystals and their driving circuits may be mounted in a "crystal oven" to control the temperature. For very narrow band filters, sometimes several crystals are operated in series. A large number of crystals can be collapsed into a single component, by mounting comb-shaped evaporations of metal on a quartz crystal. In this scheme, a "tapped delay line" reinforces the desired frequencies as the sound waves flow across the surface of the quartz crystal. The tapped delay line has become a general scheme of making high-Q filters in many different ways. 
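The transversal (tapped delay line) structure described above translates directly into a digital FIR filter: the output is simply a weighted sum of delayed copies of the input. The following Python sketch illustrates the idea using only NumPy; the function name, tap values and test signal are illustrative choices, not part of any standard library.

import numpy as np

def transversal_filter(x, taps):
    # Tapped delay line: y[n] = sum over k of taps[k] * x[n - k]
    y = np.zeros(len(x))
    for delay, weight in enumerate(taps):
        # add a copy of the input delayed by `delay` samples, scaled by its tap weight
        y[delay:] += weight * x[:len(x) - delay]
    return y

# Illustration: a 5-tap moving average acting as a crude low-pass filter
rng = np.random.default_rng(0)
n = np.arange(200)
x = np.sin(2 * np.pi * 0.02 * n) + 0.3 * rng.standard_normal(n.size)
y = transversal_filter(x, np.ones(5) / 5)

The same weighted-sum structure underlies SAW devices and CCD delay lines; only the technology realising the delays and weights changes.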
SAW filters SAW (surface acoustic wave) filters are electromechanical devices commonly used in radio frequency applications. Electrical signals are converted to a mechanical wave in a device constructed of a piezoelectric crystal or ceramic; this wave is delayed as it propagates across the device, before being converted back to an electrical signal by further electrodes. The delayed outputs are recombined to produce a direct analog implementation of a finite impulse response filter. This hybrid filtering technique is also found in an analog sampled filter. SAW filters are limited to frequencies up to 3 GHz. The filters were developed by Professor Ted Paige and others. BAW filters BAW (bulk acoustic wave) filters are electromechanical devices. BAW filters can implement ladder or lattice filters. BAW filters typically operate at frequencies from around 2 to around 16 GHz, and may be smaller or thinner than equivalent SAW filters. Two main variants of BAW filters are making their way into devices: the thin-film bulk acoustic resonator (FBAR) and the solidly mounted resonator (SMR). Garnet filters Another method of filtering, at microwave frequencies from 800 MHz to about 5 GHz, is to use a synthetic single-crystal yttrium iron garnet sphere made of a chemical combination of yttrium and iron (YIGF, or yttrium iron garnet filter). The garnet sits on a strip of metal driven by a transistor, and a small loop antenna touches the top of the sphere. An electromagnet changes the frequency that the garnet will pass. The advantage of this method is that the garnet can be tuned over a very wide frequency range by varying the strength of the magnetic field. Atomic filters For even higher frequencies and greater precision, the vibrations of atoms must be used. Atomic clocks use caesium masers as ultra-high-Q filters to stabilize their primary oscillators. Another method, used at high, fixed frequencies with very weak radio signals, is to use a ruby maser tapped delay line. The transfer function The transfer function of a filter is most often defined in the domain of the complex frequencies. The back-and-forth passage to/from this domain is operated by the Laplace transform and its inverse (therefore, here below, the term "input signal" shall be understood as "the Laplace transform of" the time representation of the input signal, and so on). The transfer function H(s) of a filter is the ratio of the output signal Y(s) to the input signal X(s) as a function of the complex frequency s: H(s) = Y(s)/X(s), with s = σ + jω. For filters that are constructed of discrete components (lumped elements): Their transfer function will be the ratio of polynomials in s, i.e. a rational function of s. The order of the transfer function will be the highest power of s encountered in either the numerator or the denominator polynomial. The polynomials of the transfer function will all have real coefficients. Therefore, the poles and zeroes of the transfer function will either be real or occur in complex-conjugate pairs. Since the filters are assumed to be stable, the real part of all poles (i.e. zeroes of the denominator) will be negative, i.e. they will lie in the left half-plane in complex frequency space. Distributed-element filters do not, in general, have rational-function transfer functions, but can approximate them. 
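The stability condition just described, that every pole of a lumped-element filter's rational transfer function lies in the left half of the complex s-plane, is easy to check numerically. The short Python sketch below, using NumPy and SciPy with an arbitrary third-order analog Butterworth low-pass chosen purely as an example, finds the roots of the numerator and denominator polynomials and confirms that the poles have negative real parts.

import numpy as np
from scipy import signal

# H(s) = num(s) / den(s): a third-order analog Butterworth low-pass, cutoff 1 rad/s
num, den = signal.butter(3, 1.0, btype='low', analog=True, output='ba')

poles = np.roots(den)   # zeros of the denominator polynomial
zeros = np.roots(num)   # zeros of the numerator polynomial (none for this filter)

print("poles:", np.round(poles, 3))
# A stable filter has every pole in the left half-plane (negative real part),
# and complex poles appear in conjugate pairs because the coefficients are real.
assert np.all(poles.real < 0)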
The construction of a transfer function involves the Laplace transform, and therefore it is necessary to assume null initial conditions, because ℒ{df/dt} = s·ℒ{f(t)} − f(0), and when f(0) = 0 we can get rid of the constants and use the usual expression ℒ{df/dt} = s·ℒ{f(t)}. An alternative to transfer functions is to give the behavior of the filter as a convolution of the time-domain input with the filter's impulse response. The convolution theorem, which holds for Laplace transforms, guarantees equivalence with transfer functions. Classification Certain filters may be specified by family and bandform. A filter's family is specified by the approximating polynomial used, and each leads to certain characteristics of the transfer function of the filter. Some common filter families and their particular characteristics are: Butterworth filter – no gain ripple in pass band and stop band, slow cutoff; Chebyshev filter (Type I) – no gain ripple in stop band, moderate cutoff; Chebyshev filter (Type II) – no gain ripple in pass band, moderate cutoff; Bessel filter – no group delay ripple, no gain ripple in both bands, slow gain cutoff; Elliptic filter – gain ripple in pass and stop band, fast cutoff; Optimum "L" filter; Gaussian filter – no ripple in response to step function; Raised-cosine filter. Each family of filters can be specified to a particular order. The higher the order, the more the filter will approach the "ideal" filter; but also the longer the impulse response is and the longer the latency will be. An ideal filter has full transmission in the pass band, complete attenuation in the stop band, and an abrupt transition between the two bands, but this filter has infinite order (i.e., the response cannot be expressed as a linear differential equation with a finite sum) and infinite latency (i.e., its compact support in the Fourier transform forces its time response to be everlasting). A common illustration compares Butterworth, Chebyshev, and elliptic filters, each realised as a fifth-order low-pass filter. The particular implementation – analog or digital, passive or active – makes no difference; their output would be the same. As such a comparison makes clear, elliptic filters are sharper than the others, but they show ripples on the whole bandwidth. Any family can be used to implement a particular bandform of which frequencies are transmitted, and which, outside the passband, are more or less attenuated. The transfer function completely specifies the behavior of a linear filter, but not the particular technology used to implement it. In other words, there are a number of different ways of achieving a particular transfer function when designing a circuit. A particular bandform of filter can be obtained by transformation of a prototype filter of that family. Impedance matching Impedance matching structures invariably take on the form of a filter, that is, a network of non-dissipative elements. For instance, in a passive electronics implementation, it would likely take the form of a ladder topology of inductors and capacitors. The design of matching networks shares much in common with filters and the design invariably will have a filtering action as an incidental consequence. Although the prime purpose of a matching network is not to filter, it is often the case that both functions are combined in the same circuit. The need for impedance matching does not arise while signals are in the digital domain. Similar comments can be made regarding power dividers and directional couplers. 
When implemented in a distributed-element format, these devices can take the form of a distributed-element filter. There are four ports to be matched and widening the bandwidth requires filter-like structures to achieve this. The inverse is also true: distributed-element filters can take the form of coupled lines. Some filters for specific purposes Audio filter Line filter Scaled correlation, high-pass filter for correlations Texture filtering Filters for removing noise from data Wiener filter Kalman filter Savitzky–Golay smoothing filter
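As a small illustration of the noise-removal filters listed above, the Python sketch below applies SciPy's Savitzky–Golay smoothing filter to a noisy signal; the test signal, window length and polynomial order are arbitrary example values rather than recommended settings.

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.25 * rng.standard_normal(t.size)

# Savitzky-Golay fits a low-order polynomial over a sliding window, which
# suppresses noise while preserving peak shape better than a plain moving average.
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)

print("RMS error before:", round(float(np.sqrt(np.mean((noisy - clean) ** 2))), 3))
print("RMS error after: ", round(float(np.sqrt(np.mean((smoothed - clean) ** 2))), 3))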
Technology
Components
null
39065406
https://en.wikipedia.org/wiki/High-velocity%20cloud
High-velocity cloud
High-velocity clouds (HVCs) are large accumulations of gas with an unusually rapid motion relative to their surroundings. They can be found throughout the galactic halo of the Milky Way. Their bulk motions in the local standard of rest have velocities which are measured in excess of 70–90 km s−1. These clouds of gas can be massive in size, some on the order of millions of times the mass of the Sun (), and cover large portions of the sky. They have been observed in the Milky Way's halo and within other nearby galaxies. HVCs are important to the understanding of galactic evolution because they account for a large amount of baryonic matter in the galactic halo. In addition, as these clouds fall into the disk of the galaxy, they add material that can form stars in addition to the dilute star forming material already present in the disk. This new material aids in maintaining the star formation rate (SFR) of the galaxy. The origins of the HVCs are still in question. No one theory explains all of the HVCs in the galaxy. However, it is known that some HVCs are probably spawned by interactions between the Milky Way and satellite galaxies, such as the Large and Small Magellanic Clouds (LMC and SMC, respectively) which produce a well-known complex of HVCs called the Magellanic Stream. Because of the various possible mechanisms that could potentially produce HVCs, there are still many questions surrounding HVCs for researchers to study. Observational history In the mid-1950s, dense pockets of gas were first discovered outside of the galactic plane. This was quite notable because the models of the Milky Way showed the density of gas decreasing with distance from the galactic plane, rendering this a striking exception. According to the prevailing galactic models, the dense pockets should have dissipated long ago, making their very existence in the halo quite puzzling. In 1956 the solution was proposed that the dense pockets were stabilized by a hot, gaseous corona that surrounds the Milky Way. Inspired by this proposal, Jan Oort, of Leiden University, Netherlands, proposed that cold gas clouds might be found in the galactic halo, far away from the galactic plane. They were soon located, in 1963, via their neutral hydrogen radio emission. They were traveling toward the galactic disk at a very high velocity relative to other entities in the galactic disk. The first two clouds that were located were named Complex A and Complex C. Due to their anomalous velocities, these objects were dubbed "high-velocity clouds", distinguishing them from both gas at normal local standard of rest velocities as well as their slower-moving counterparts known as intermediate-velocity clouds (IVCs). Several astronomers proposed hypotheses (which later proved to be inaccurate) regarding the nature of HVCs, but their models were further complicated in the early 1970s by the discovery of the Magellanic Stream, which behaves like a string of HVCs. In 1988, a northern-sky survey of neutral hydrogen radio emissions was completed using the Dwingeloo radio telescope in the Netherlands. From this survey, astronomers were able to detect more HVCs. In 1997, a map of the Milky Way's neutral hydrogen was largely complete, again allowing astronomers to detect more HVCs. In the late 1990s, using data from the La Palma Observatory in the Canary Islands, the Hubble Space Telescope, and, later, the Far Ultraviolet Spectroscopic Explorer (FUSE), the distance to an HVC was gauged for the first time. 
Around the same time, the chemical composition of HVCs was first measured. Additionally, in 2000, a southern hemisphere survey of neutral hydrogen radio emissions was completed using the Villa Elisa radio telescope in Argentina, from which yet more HVCs were discovered. Later observations of Complex C showed that the cloud, originally thought to be deficient in heavy elements (also known as low metallicity), contains some sections with a higher metallicity compared to the bulk of the cloud, indicating that it has begun to mix with other gas in the halo. Using observations of highly ionized oxygen and other ions, astronomers were able to show that the hot gas in Complex C represents an interface between hot and cold gas. Characteristics Multi-phase structure HVCs are typically the coldest and densest components of the galactic halo. However, the halo itself also has a multi-phase structure: cold and dense neutral hydrogen at temperatures less than 10⁴ K, warm and warm-hot gas at temperatures between 10⁴ K and 10⁶ K, and hot ionized gas at temperatures greater than 10⁶ K. As a result, cool clouds moving through the diffuse halo medium have a chance to become ionized by the warmer and hotter gas. This can create a pocket of ionized gas that surrounds a neutral interior in an HVC. Evidence of this cool-hot gas interaction in the halo comes from the observation of OVI absorption. Distance HVCs are defined by their respective velocities, but distance measurements allow for estimates on their size, mass, volume density, and even pressure. In the Milky Way, clouds are typically located between 2–15 kpc (6.52×10³ ly–4.89×10⁴ ly) from the centre, and at z-heights (distances above or below the Galactic plane) within 10 kpc (3.26×10⁴ ly). The Magellanic Stream and the Leading Arm are at ~55 kpc (1.79×10⁵ ly), near the Magellanic Clouds, and may extend to about 100–150 kpc (3.26×10⁵ ly–4.89×10⁵ ly). There are two methods of distance determination for HVCs. Direct-distance constraint The best method for determining the distance to an HVC involves using a halo star of known distance as a standard for comparison. We can extract information about the distance by studying the star's spectrum. If a cloud is located in front of the halo star, absorption lines will be present, whereas if the cloud is behind the star, no absorption lines should be present. The CaII H and K and/or NaII doublet absorption lines are used in this technique. Halo stars that have been identified through the Sloan Digital Sky Survey have led to distance measurements for almost all of the large complexes currently known. Indirect-distance constraint The indirect-distance-constraint methods are usually dependent on theoretical models, and assumptions must be made in order for them to work. One indirect method involves Hα observations, where an assumption is made that the emission lines come from ionizing radiation from the galaxy, reaching the cloud's surface. Another method uses deep HI observations in the Milky Way and/or Local Group with the assumption that the distribution of HVCs in the Local Group is similar to that of the Milky Way. These observations put the clouds within 80 kpc (2.61×10⁵ ly) of the galaxy, and observations of the Andromeda Galaxy put them at approximately 50 kpc (1.63×10⁵ ly). For those HVCs where both are available, distances measured via Hα emission tend to agree with those found via direct distance measurements.
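The direct-distance method above is essentially a bracketing argument: a halo star that shows the cloud's absorption lines must lie behind the cloud, while a star on a covered sightline that shows no absorption must lie in front of it. A minimal Python sketch of that logic, using hypothetical star distances (in kpc) rather than real measurements:

```python
def bracket_cloud_distance(stars):
    """Bracket an HVC's distance from halo-star absorption spectroscopy.

    stars: list of (distance_kpc, shows_absorption) tuples, one per star whose
    sightline passes through the cloud.  A star showing the cloud's absorption
    lines lies behind the cloud (upper limit on the cloud's distance); a star
    without them lies in front of it (lower limit).
    """
    lower = max((d for d, absorbed in stars if not absorbed), default=0.0)
    upper = min((d for d, absorbed in stars if absorbed), default=float("inf"))
    return lower, upper

# Hypothetical sightline sample toward a single cloud:
sample = [(2.1, False), (4.8, False), (7.5, True), (11.0, True)]
lo, hi = bracket_cloud_distance(sample)
print(f"Cloud lies between {lo} and {hi} kpc")   # between 4.8 and 7.5 kpc
```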
Spectral features HVCs are typically detected at the radio and optical wavelengths, and for hotter HVCs, ultraviolet and/or X-ray observations are needed. Neutral hydrogen clouds are detected via the 21 cm emission line. Observations have shown that HVCs can have ionized exteriors due to external radiation or the motion of the HVC through a diffuse halo medium. These ionized components can be detected through Hα emission lines and even absorption lines in the ultraviolet. The warm-hot gas in HVCs exhibits OVI, SiIV, and CIV absorption lines. Temperature Most HVCs show spectral line widths that are indicative of a warm, neutral medium at about 9000 K. However, many HVCs have line widths which indicate that they are also partly composed of cool gas at less than 500 K. Mass Estimates on the peak column density of HVCs (10¹⁹ cm⁻²) and typical distances (1–15 kpc) yield a mass estimate of HVCs in the Milky Way in the range of 7.4×10⁷ solar masses. If the Large Magellanic Cloud and the Small Magellanic Cloud are included, the total mass would increase by another 7×10⁸ solar masses. Size Observed angular sizes for HVCs range from 10³ square degrees down to the resolution limit of the observations. Typically, high resolution observations eventually show that larger HVCs are often composed of many smaller complexes. When detecting HVCs solely via HI emission, all of the HVCs in the Milky Way cover about 37% of the night sky. Most HVCs are somewhere between 2 and 15 kiloparsecs (kpc) across. Lifetimes Cold clouds moving through a diffuse halo medium are estimated to have a survival time on the order of a couple hundred million years without some sort of support mechanism that prevents them from dissipating. The lifetime mainly depends on the mass of the cloud, but also on the cloud density, halo density, and velocity of the cloud. HVCs in the galactic halo are destroyed through what is called the Kelvin-Helmholtz instability. The infall of clouds can dissipate energy leading to the inevitable heating of the halo medium. The multi-phase structure of the gaseous halo suggests that there is an ongoing life-cycle of HVC destruction and cooling. Possible support mechanisms Some possible mechanisms responsible for increasing the lifetime of an HVC include the presence of a magnetic field that induces a shielding effect and/or the presence of dark matter; however, there is no strong observational evidence for dark matter in HVCs. The most accepted mechanism is that of dynamical shielding, which increases the Kelvin-Helmholtz time. This process works due to the HVC having a cold neutral interior shielded by a warmer and lower-density exterior, causing the HI clouds to have smaller relative velocities with respect to their surroundings. Origins Since their discovery, several possible models have been proposed to explain the origins of HVCs. However, for observations in the Milky Way, the multiplicity of clouds, the distinct characteristics of IVCs, and the existence of clouds that are clearly associated with cannibalized dwarf galaxies (i.e. the Magellanic system among others) indicate that the HVCs most likely have several possible origins. This conclusion is also strongly supported by the fact that most simulations for any given model can account for some cloud behaviors, but not all. Oort's hypothesis Jan Oort developed a model to explain HVCs as gas left over from the early formation of the galaxy.
He theorized that if this gas were at the edge of the galaxy's gravitational influence, over billions of years it could be dragged back toward the Galactic disk and fall back in as HVCs. Oort's model explained the observed chemical composition of the galaxy well. Given an isolated galaxy (i.e. one without ongoing assimilation of hydrogen gas), successive generations of stars should infuse the interstellar medium (ISM) with higher abundances of heavy elements. However, examinations of stars in the solar neighborhood show roughly the same relative abundances of the same elements regardless of the age of the star; this has come to be known as the G dwarf problem. HVCs may explain these observations by representing a portion of the primordial gas responsible for continuously diluting the ISM. Galactic fountain An alternative theory centers on gas being ejected out of the galaxy and falling back in as the high-velocity gas we observe. Several proposed mechanisms exist to explain how material can be ejected from the Galactic disk, but the most prevalent explanation of the Galactic Fountain centers on compounding supernova explosions to eject large "bubbles" of material. Since gas is being ejected from the disk of the galaxy, the observed metallicity of the ejected gas should be similar to that of the disk. Because most HVCs have much lower metallicities than the disk, this may be ruled out as the source of HVCs, but these conclusions may point to the Galactic Fountain as the source of IVCs. Accretion from satellite galaxies As dwarf galaxies pass through a larger galaxy's halo, the gas that exists as the interstellar medium of the dwarf galaxy may be stripped away by tidal forces and ram pressure stripping. Evidence for this model of HVC formation comes from observations of the Magellanic Stream in the halo of the Milky Way. The somewhat distinct features of HVCs formed in this way are also accounted for by simulations, and most HVCs in the Milky Way which are not associated with the Magellanic Stream do not seem to be at all associated with a dwarf galaxy. Dark matter Another model, proposed by David Eichler, now at Ben Gurion University, and later by Leo Blitz of the University of California at Berkeley, assumes the clouds are very massive, located between galaxies, and created when baryonic material pools near concentrations of dark matter. The gravitational attraction between the dark matter and the gas was intended to explain the ability of the clouds to remain stable even at intergalactic distances where the paucity of ambient material should cause the clouds to dissipate rather quickly. However, with the advent of distance determinations for most HVCs, this possibility may be ruled out. Galactic evolution To inquire into the origin and fate of a galaxy's halo gas is to inquire into the evolution of said galaxy. HVCs and IVCs are significant features of a spiral galaxy's structure. These clouds are of primary importance when considering a galaxy's star formation rate (SFR). The Milky Way has approximately 5 billion solar masses of star forming material within its disk and an SFR of 1–3 solar masses per year. Models for galactic chemical evolution find that at least half of this amount must be continuously accreted, low-metallicity material to describe the current, observable structure. Without this accretion, the SFRs indicate that the current star formation material will only last for another few gigayears (Gyr) at most. Models of mass inflow place a maximal accretion rate of 0.4 solar masses per year from HVCs.
This rate does not meet the demand of the chemical evolution models. Thus, it is a possibility that the Milky Way may go through a low point in gas content and/or decrease its SFR until further gas arrives. Consequently, when discussing HVCs in the context of galactic evolution, the conversation is largely concerned with star formation and how the future star material fuels the galactic disk. The current model for the universe, ΛCDM, indicates that galaxies tend to cluster and achieve a web-like structure over time. Under such models, the large majority of baryons entering a galactic halo do so along these cosmic filaments. In evolutionary models of the Milky Way, 70% of the mass inflow at the virial radius is consistent with coming in along cosmic filaments. Given current observational limitations, the majority of the filaments feeding into the Milky Way are not visible in HI. Despite this, some gas clouds within the Galaxy's halo have lower metallicities than that of gas stripped from satellites, suggesting that the clouds are primordial material probably flowing in along the cosmic filaments. Gas of this type, detectable out to ~160,000 ly (50 kpc), largely becomes part of the hot halo, cools and condenses, and falls into the Galactic disk to serve in star formation. Mechanical feedback mechanisms, supernova-driven or active galactic nuclei-driven outflows of gas, are also key elements in understanding the origin of a spiral galaxy's halo gas and the HVCs within. X-ray and gamma-ray observations in the Milky Way indicate the likelihood of some central engine feedback having occurred in the past 10–15 megayears (Myr). Furthermore, as described in “origins,” the disk-wide “galactic fountain” phenomenon is similarly crucial in piecing together the Milky Way's evolution. Materials ejected in the course of a galaxy's lifetime help describe observational data (observed metallicity content primarily) while providing feedback sources for future star formation. Likewise detailed in the "origins" section, satellite accretion plays a role in the evolution of a galaxy. Most galaxies are assumed to result from smaller precursors merging, and the process continues throughout a galaxy's lifetime. Within the next 10 billion years, further satellite galaxies will merge with the Milky Way, which is sure to significantly impact its structure and steer its future evolution. Spiral galaxies have abundant sources for potential star-formation material, but how long galaxies are able to continuously draw on these resources remains in question. A future generation of observational tools and computational abilities will shed light on some of the technical details of the Milky Way's past and future as well as how HVCs play a role in its evolution. Examples of HVCs Northern Hemisphere In the Northern Hemisphere, we find several large HVCs, though nothing on the order of the Magellanic system (discussed below). Complexes A and C were the first HVCs discovered and were first observed in 1963. Both of these clouds have been found to be deficient in heavy elements, showing a concentration that is 10–30% of the Sun's. Their low metallicity seems to serve as proof that HVCs do indeed bring in “fresh” gas. Complex C has been estimated to bring in 0.1–0.2 solar masses of new material every year, whereas Complex A brings in about half that amount. This fresh gas is about 10–20% of the total needed to properly dilute Galactic gas enough to account for the chemical composition of stars.
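A back-of-the-envelope check of these figures (a small Python sketch using the round numbers quoted in this section, not new measurements):

```python
disk_gas_msun = 5e9            # star-forming material in the disk, in solar masses
sfr_range = (1.0, 3.0)         # quoted Milky Way star formation rate, solar masses per year
complex_c_inflow = (0.1, 0.2)  # quoted accretion from Complex C, solar masses per year

for sfr in sfr_range:
    depletion_gyr = disk_gas_msun / sfr / 1e9
    print(f"SFR {sfr:.0f} Msun/yr -> disk gas lasts ~{depletion_gyr:.1f} Gyr without accretion")

for inflow in complex_c_inflow:
    # compare with a nominal 1 Msun/yr replenishment rate (roughly the low end of the SFR range)
    print(f"Complex C at {inflow} Msun/yr supplies ~{inflow / 1.0:.0%} of that nominal rate")
```

The first loop reproduces the "few gigayears" depletion time quoted above (about 1.7 to 5 Gyr), and the second shows how the 0.1 to 0.2 solar masses per year from Complex C corresponds to the 10 to 20% dilution figure.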
Complex C Complex C, one of the most well-studied HVCs, is at least 14,000 ly (about 4 kpc) distant but no more than 45,000 ly (about 14 kpc) above the Galactic plane. Complex C has also been observed to have about 1/50 of the Sun's nitrogen content. Observations of high-mass stars indicate that they produce less nitrogen, as compared to other heavy elements, than do low-mass stars. This implies that the heavy elements in Complex C may come from high-mass stars. The earliest stars are known to have been higher-mass stars and so Complex C appears to be a fossil of sorts, formed outside the galaxy and made up of gas from the ancient universe. However, a more recent study of another area of Complex C has found a metallicity twice as high as what was reported originally. These measurements have led scientists to believe that Complex C has begun to mix with other, younger, nearby gas clouds. Complex A Complex A is located 25,000–30,000 ly (8–9 kpc) away in the galactic halo. Southern Hemisphere In the Southern Hemisphere, the most prominent HVCs are all associated with the Magellanic system, which has two major components, the Magellanic Stream and the Leading Arm. They are both made of gas that was stripped from the Large and Small Magellanic Clouds (LMC and SMC). Half of the gas was decelerated and now lags behind the clouds in their orbits (this is the stream component). The other half of the gas (the leading arm component) was accelerated and pulled out in front of the galaxies in their orbit. The Magellanic system is about 180,000 ly (55 kpc) from the Galactic disk, though the tip of the Magellanic Stream may extend out as far as 300,000–500,000 ly (100–150 kpc). The entire system is thought to contribute at least 3×10⁸ solar masses of HI to the Galactic halo, about 30–50% of the HI mass of the Milky Way. Magellanic Stream The Magellanic Stream is seen as a “long, continuous structure with a well-defined velocity and column density gradient.” The velocity at the tip of the Magellanic Stream is hypothesized to be +300 km/s in the Galactic-standard-of-rest (GSR) frame. Stream clouds are thought to have a lower pressure than other HVCs because they reside in an area where the Galactic halo medium is more distant and has a much lower density. FUSE found highly ionized oxygen mixed in with the Magellanic Stream. This suggests that the stream must be embedded in hot gas. Leading Arm The Leading Arm is not one continuous stream, but rather an association of multiple clouds found in the region preceding the Magellanic Clouds. It is thought to have a velocity of −300 km/s in the GSR frame. One of the HVCs in the Leading Arm shows a composition very similar to the SMC. This seems to support the idea that the gas that comprises it was pulled off the galaxy and accelerated in front of it via tidal forces which pull apart satellite galaxies and assimilate them into the Milky Way. Smith's Cloud Smith's Cloud is another well-studied HVC found in the Southern Hemisphere, located in the constellation Aquila.
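The stream velocities above are quoted in the Galactic standard of rest (GSR), whereas HVCs are usually catalogued by their velocity in the local standard of rest (LSR). A small Python sketch of the commonly used conversion between the two frames (assuming a flat solar rotation speed of about 220 km/s; the example cloud coordinates are hypothetical):

```python
import math

def v_gsr(v_lsr_kms, gal_lon_deg, gal_lat_deg, v_rot_kms=220.0):
    """Convert a cloud's LSR velocity to the Galactic standard of rest (GSR).

    Uses the common approximation v_GSR = v_LSR + V_rot * sin(l) * cos(b),
    where V_rot is the (assumed flat) solar rotation speed about the
    Galactic centre, l is Galactic longitude, and b is Galactic latitude.
    """
    l = math.radians(gal_lon_deg)
    b = math.radians(gal_lat_deg)
    return v_lsr_kms + v_rot_kms * math.sin(l) * math.cos(b)

# Hypothetical cloud at l = 90 deg, b = 30 deg with v_LSR = -120 km/s:
print(f"{v_gsr(-120.0, 90.0, 30.0):+.0f} km/s")   # about +71 km/s in the GSR frame
```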
Physical sciences
Basics_2
Astronomy
39067260
https://en.wikipedia.org/wiki/Marine%20weather%20forecasting
Marine weather forecasting
Marine weather forecasting is the process by which mariners and meteorological organizations attempt to forecast future weather conditions over the Earth's oceans. Mariners have had rules of thumb regarding the navigation around tropical cyclones for many years, dividing a storm into halves and sailing through the normally weaker and more navigable half of their circulation. Marine weather forecasts by various weather organizations can be traced back to the sinking of the Royal Charter in 1859 and the RMS Titanic in 1912. The wind is the driving force of weather at sea, as wind generates local wind waves, long ocean swells, and its flow around the subtropical ridge helps maintain warm water currents such as the Gulf Stream. The importance of weather over the ocean during World War II led to delayed or secret weather reports, in order to maintain a competitive advantage. Weather ships were established by various nations during World War II for forecasting purposes, and were maintained through 1985 to help with transoceanic plane navigation. Voluntary observations from ships, weather buoys, weather satellites, and numerical weather prediction have been used to diagnose and help forecast weather over the Earth's ocean areas. Since the 1960s, numerical weather prediction's role over the Earth's seas has taken a greater role in the forecast process. Weather elements such as sea state, surface winds, tide levels, and sea surface temperature are tackled by organizations tasked with forecasting weather over open oceans and seas. Currently, the Japan Meteorological Agency, the United States National Weather Service, and the United Kingdom Met Office create marine weather forecasts for the Northern Hemisphere. History There are various origins for government-issued marine weather forecasts, generally following maritime disasters. Great Britain In October 1859, the steam clipper Royal Charter was wrecked in a strong storm off Anglesey; 450 people lost their lives. Due to this loss, Vice-Admiral Robert FitzRoy introduced a warning service for shipping in February 1861, using telegraph communications. This remained the United Kingdom Met Office's primary responsibility for some time afterwards. In 1911, the Met Office had begun issuing marine weather forecasts which included gale and storm warnings via radio transmission for areas around Great Britain. This service was discontinued during and following World War I, between 1914 and June 1921, and again during World War II between 1939 and 1945. United States The first attempt as a marine weather program within the United States was initiated in New Orleans, Louisiana by the United States Army Signal Corps. A January 23, 1873 memo directed the New Orleans Signal Observer to transcribe meteorological data from the ship logs of those arriving in port. Marine forecasting responsibility transferred from the United States Navy to the Weather Bureau in 1904, which enabled the receipt of timely observations from ships at sea. The sinking of RMS Titanic in 1912 played a pivotal role in marine weather forecasting globally. In response to that tragedy, an international commission was formed to determine requirements for safer ocean voyages. In 1914, the commission's work resulted in the International Convention for the Safety of Life at Sea. 
In 1957, in order to help address marine issues, the United States Weather Bureau started to publish the Mariners Weather Log, a bi-monthly publication, to report past weather conditions primarily over Northern Hemisphere oceans and information regarding the globe's tropical cyclone seasons, to publish monthly climatologies for the use of those at sea, and to encourage voluntary ship observations from vessels at sea. Within the United States National Weather Service (NWS), forecast weather maps began to be published by offices in New York City, San Francisco, and Honolulu for public use. North Atlantic forecasts were shifted from a closed United States Navy endeavor to a National Weather Service product suite via radiofacsimile in 1971, while northeast Pacific forecasts became publicly available by the same method in 1972. Between 1986 and 1989, the portion of the National Meteorological Center (NMC) known as the Ocean Products Center (OPC) was responsible for marine weather forecasting within the NWS. Between August 1989 and 1995, the unit named the Marine Forecast Branch also was involved in providing objective analysis and forecast products for marine and oceanographic variables. The Marine Prediction Center, later renamed the Ocean Prediction Center, assumed the U.S. obligation to issue warnings and forecasts for portions of the North Atlantic and North Pacific oceans once it was created in 1995. Importance of the wind Development of warm ocean currents The trade winds blow westward in the tropics, and the westerlies blow eastward at mid-latitudes. This wind pattern applies a stress to the subtropical ocean surface with negative curl across the north Atlantic Ocean. The resulting Sverdrup transport is equatorward. Because of conservation of potential vorticity caused by the poleward-moving winds on the subtropical ridge's western periphery and the increased relative vorticity of northward moving water, transport is balanced by a narrow, accelerating poleward current, which flows along the western boundary of the ocean basin, outweighing the effects of friction with the western boundary current known as the Labrador Current. The conservation of potential vorticity also causes bends along the Gulf Stream, which occasionally break off due to a shift in the Gulf Stream's position, forming separate warm and cold eddies. This overall process, known as western intensification, causes currents on the western boundary of an ocean basin, such as the Gulf Stream, to be stronger than those on the eastern boundary. Swell dispersion and wave groups Swells are often created by storms long distances away from the beach where they break, and the propagation of the longest swells is only limited by shorelines. For example, swells generated in the Indian Ocean have been recorded in California after more than half a round-the-world trip. This distance allows the waves comprising the swells to be better sorted and free of chop as they travel toward the coast. Waves generated by storm winds that have the same speed will group together and travel with each other, while waves moving even a fraction of a metre per second slower lag behind, ultimately arriving many hours later due to the distance covered. The time of propagation from the source, t, is proportional to the distance X divided by the wave period T. In deep water it is t = 4πX/(gT), where g is the acceleration of gravity.
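A quick numerical check of this deep-water relation, as a small Python sketch (the 10,000 km storm distance is the one used in the worked example that follows):

```python
import math

g = 9.81          # acceleration of gravity, m/s^2
X = 10_000e3      # distance from the storm to the coast, in metres

def arrival_time(period_s, distance_m=X):
    """Deep-water swell travel time t = 4*pi*X / (g*T)."""
    return 4 * math.pi * distance_m / (g * period_s)

t15 = arrival_time(15.0)
t14 = arrival_time(14.0)
print(f"T = 15 s swell arrives after {t15 / 86400:.1f} days")          # ~9.9 days
print(f"T = 14 s swell lags by {(t14 - t15) / 3600:.0f} more hours")   # ~17 hours
```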
As an example, for a storm located 10,000 km away, swells with a period T = 15 s will arrive about 10 days after the storm, followed by 14 s swells another 17 hours later. This dispersive arrival of swells, longest periods first with a reduction in the peak wave period over time, can be used to estimate the distance at which the swells were generated. Whereas the sea state in the storm has a frequency spectrum with more or less always the same shape (i.e. a well defined peak with dominant frequencies within plus or minus 7% of the peak), the swell spectra become narrower and narrower, sometimes as narrow as 2% or less, as the waves disperse further and further away. The result is that wave groups (called sets by surfers) can have a large number of waves. From about seven waves per group in the storm, this rises to 20 and more in swells from very distant storms. Sailing ship journeys Ocean journeys by sailing ship can take many months, and a common hazard is becoming becalmed because of lack of wind, or being blown off course by severe storms or winds that do not allow progress in the desired direction. A severe storm could lead to shipwreck, and the loss of all hands. Sailing ships can only carry a certain quantity of supplies in their hold, so they have to plan long voyages carefully to include appropriate provisions, including fresh water. Tropical cyclone avoidance Mariners have a way to safely navigate around tropical cyclones. They split tropical cyclones in two, based on their direction of motion, and maneuver to avoid the right segment of the cyclone in the Northern Hemisphere (the left in the Southern Hemisphere). Sailors term the right side the dangerous semicircle since the heaviest rain and strongest winds and seas are located in this half of the storm, as the cyclone's translation speed and its rotational wind are additive. The other half of the tropical cyclone is called the navigable semicircle since weather conditions are lessened (subtractive) in this portion of the storm. The rules of thumb for ship travel when a tropical cyclone is in the vicinity are to avoid it if at all possible and not to cross its forecast path (crossing the T). Those traveling through the dangerous semicircle are advised to keep to the true wind on the starboard bow and make as much headway as possible. Ships moving through the navigable semicircle are advised to keep the true wind on the starboard quarter while making as much headway as possible. The 1-2-3 rule (mariners' 1-2-3 rule or danger area) is a guideline commonly taught to mariners for severe storm (specifically hurricane and tropical storm) tracking and prediction. It refers to the rounded long-term National Hurricane Center forecast errors of 100-200-300 nautical miles at 24-48-72 hours, respectively. However, these errors have decreased to near 50-100-150 nautical miles as NHC forecasters have become more accurate with tropical cyclone track forecasting. The "danger area" to be avoided is constructed by expanding the forecast path by a radius equal to the respective hundreds of miles plus the forecast wind radii (size of the storm at those hours). Within numerical weather prediction Ocean surface modeling The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. The spectral wave transport equation is used to describe the change in wave spectrum over changing topography.
It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface. The first ocean wave models were developed in the 1960s and 1970s. These models tended to overestimate the role of wind in wave development and to underplay wave interactions. A lack of knowledge concerning how waves interacted among each other, assumptions regarding a maximum wave height, and deficiencies in computer power limited the performance of the models. After experiments were performed in 1968, 1969, and 1973, wind input from the Earth's atmosphere was weighted more accurately in the predictions. A second generation of models was developed in the 1980s, but they could not realistically model swell nor depict wind-driven waves (also known as wind waves) caused by rapidly changing wind fields, such as those within tropical cyclones. This caused the development of a third generation of wave models from 1988 onward, built directly on the spectral wave transport equation described above. Observing platforms Weather ships The idea of a stationary weather ship was proposed as early as 1921 by Météo-France to help support shipping and the coming of transatlantic aviation. Established during World War II, a weather ship, or ocean weather vessel, was a ship stationed in the ocean as a platform for surface and upper air meteorological observations for use in weather forecasting. These ships had no means of defense, which led to the loss of several ships and many lives during the war. They were primarily located in the north Atlantic and north Pacific oceans, reporting via radio. In addition to their weather reporting function, these vessels aided in search and rescue operations, supported transatlantic flights, acted as research platforms for oceanographers, monitored marine pollution, and aided weather forecasting both by weather forecasters and within computerized atmospheric models. Research vessels remain heavily used in oceanography, including physical oceanography and the integration of meteorological and climatological data in Earth system science.
The establishment of weather ships proved to be so useful during World War II that the International Civil Aviation Organization (ICAO) had established a global network of 13 weather ships by 1948, with seven operated by the United States, one operated jointly by the United States and Canada, two supplied by the United Kingdom, one maintained by France, one a joint venture by the Netherlands and Belgium, and one shared by the United Kingdom, Norway, and Sweden. This number was eventually negotiated down to nine. The agreement of the use of weather ships by the international community ended in 1985. Weather buoys Weather buoys are instruments which collect weather and ocean data within the world's oceans, as well as aid during emergency response to chemical spills, legal proceedings, and engineering design. Moored buoys have been in use since 1951, while drifting buoys have been used since 1972. Moored buoys are connected with the ocean bottom using either chains, nylon, or buoyant polypropylene. With the decline of the weather ship, they have taken a more primary role in measuring conditions over the open seas since the 1970s. During the 1980s and 1990s, a network of buoys in the central and eastern tropical Pacific Ocean helped study the El Niño-Southern Oscillation. Moored weather buoys range from to in diameter, while drifting buoys are smaller, with diameters of to . Drifting buoys are the dominant form of weather buoy in sheer number, with 1250 located worldwide. Wind data from buoys has smaller error than that from ships. There are differences in the values of sea surface temperature measurements between the two platforms as well, relating to the depth of the measurement and whether or not the water is heated by the ship which measures the quantity. Weather satellites In use since 1960, the weather satellite is a type of satellite that is primarily used to monitor the weather and climate of the Earth. Satellites can be polar orbiting, covering the entire Earth asynchronously, or geostationary, hovering over the same spot on the equator. Meteorological satellites see more than clouds and cloud systems. Beginning with the Nimbus 3 satellite in 1969, temperature information through the atmospheric column began to be retrieved by satellites from the eastern Atlantic and most of the Pacific Ocean, which led to significant forecast improvements. City lights, fires, effects of pollution, auroras, sand and dust storms, snow cover, ice mapping, boundaries of ocean currents, energy flows, etc., and other types of environmental information are collected using weather satellites. Other environmental satellites can detect changes in the Earth's vegetation, sea state, ocean color, and ice fields. El Niño and its effects on weather are monitored daily from satellite images. Collectively, weather satellites flown by the U.S., Europe, India, China, Russia, and Japan provide nearly continuous observations for a global weather watch. Utility Commercial and recreational use of waterways can be limited significantly by wind direction and speed, wave periodicity and heights, tides, and precipitation. These factors can each influence the safety of marine transit. Consequently, a variety of codes have been established to efficiently transmit detailed marine weather forecasts to vessel pilots via radio, for example the MAFOR (marine forecast). Typical weather forecasts can be received at sea through the use of RTTY, Navtex and Radiofax. 
NCEP Products available Marine weather warnings and forecasts in print and prognostic chart formats are produced for up to five days into the future. Forecasts in printed form include the High Seas Forecast, Offshore Marine Forecasts, and Coastal Waters Forecasts. To help shorten the length of the forecast products, single words and phrases are used to describe areas out at sea. Experimental gridded significant wave height forecasts began being produced by the Ocean Prediction Center in 2006, a first step toward digital marine service for high seas and offshore areas. Additional gridded products such as surface pressure and winds are under development. Recently, National Weather Service operational extratropical storm surge model output has been used to provide experimental extratropical storm surge guidance for coastal weather forecast offices to assist them in coastal flood warning and forecast operations. Responsible organizations and their areas Northern Hemisphere Within the Japan Meteorological Agency, marine observatories are located in Hakodate, Maizuru, Kobe and Nagasaki. These stations observe ocean waves, tide levels, sea surface temperatures, ocean currents, and related conditions in the Northwestern Pacific basin, as well as in the Sea of Japan and the Sea of Okhotsk, and provide marine meteorological forecasts based on these observations, in cooperation with the Hydrographic and Oceanographic Department, Japan Coast Guard. Within the United Kingdom, the Shipping Forecast is a BBC Radio broadcast of weather reports and forecasts for the seas around the coasts of the British Isles. It is produced by the Met Office and broadcast four times per day by BBC Radio 4 on behalf of the Maritime and Coastguard Agency. The forecasts sent over the Navtex system use a similar format and the same sea areas. The waters around the British Isles are divided into sea areas, also known as weather areas. Within the United States National Weather Service, the Ocean Prediction Center (OPC), established in 1995, is one of the National Centers for Environmental Prediction's (NCEP's) original six service centers. Until January 12, 2003, the name of the organization was the Marine Prediction Center. The OPC issues forecasts up to five days in advance for ocean areas north of 31 north latitude and west of 35 west longitude in the Atlantic, and across the northeast Pacific north of 30 north latitude and east of 160 east longitude. Until recently, the OPC provided forecast points for tropical cyclones north of 20 north latitude and east of 60 west longitude to the National Hurricane Center. OPC is composed of two branches: the Ocean Forecast Branch and the Ocean Applications Branch. The National Hurricane Center covers marine areas south of the 31st parallel in the Atlantic and 30th parallel in the Pacific between the 35th meridian west and 140th meridian west longitude. The Honolulu Weather Service Forecast Office forecasts within the area between the 140th meridian west and the 160th meridian east, from the 30th parallel north down to the equator. Southern Hemisphere The National Hurricane Center's area of responsibility includes Southern Hemisphere areas in the Pacific down to 18.5 degrees south, eastward of the 120th meridian west. South of the equator, the NWS Honolulu Forecast Office forecasts southward to the 25th parallel south between the 160th meridian east and the 120th meridian west.
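The areas of responsibility described above are essentially latitude and longitude boxes. The sketch below (Python) encodes a deliberately simplified version of the Atlantic-side boundaries as quoted in this section; the real forecast areas have more detailed edges and exceptions, so this is illustrative only.

```python
def atlantic_marine_office(lat_deg, lon_deg):
    """Very rough Atlantic-basin lookup based on the boundaries quoted above.

    lat_deg: latitude in degrees (north positive)
    lon_deg: longitude in degrees (west negative), e.g. -50.0 for 50W
    """
    if lon_deg > -35.0:          # east of 35W: outside the U.S. high-seas areas
        return "other national services (e.g. UK Met Office sea areas)"
    if lat_deg >= 31.0:          # north of the 31st parallel: Ocean Prediction Center
        return "Ocean Prediction Center (OPC)"
    return "National Hurricane Center (NHC)"

print(atlantic_marine_office(45.0, -50.0))   # OPC
print(atlantic_marine_office(25.0, -70.0))   # NHC
```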
Physical sciences
Meteorology: General
Earth science
45036001
https://en.wikipedia.org/wiki/Held%E2%80%93Karp%20algorithm
Held–Karp algorithm
The Held–Karp algorithm, also called the Bellman–Held–Karp algorithm, is a dynamic programming algorithm proposed in 1962 independently by Bellman and by Held and Karp to solve the traveling salesman problem (TSP), in which the input is a distance matrix between a set of cities, and the goal is to find a minimum-length tour that visits each city exactly once before returning to the starting point. It finds the exact solution to this problem, and to several related problems including the Hamiltonian cycle problem, in exponential time. Algorithm description and motivation Number the cities , with designated arbitrarily as a "starting" city (since the solution to TSP is a Hamiltonian cycle, the choice of starting city doesn't matter). The Held–Karp algorithm begins by calculating, for each set of cities and every city not contained in , the shortest one-way path from to that passes through every city in in some order (but not through any other cities). Denote this distance , and write for the length of the direct edge from to . We'll compute values of starting with the smallest sets and finishing with the largest. When has two or fewer elements, then calculating requires looking at one or two possible shortest paths. For example, is simply , and is just the length of . Likewise, is the length of either or , whichever is shorter. Once contains three or more cities, the number of paths through rises quickly, but only a few such paths need to be examined to find the shortest. For instance, if is shorter than , then must be shorter than , and the length of is not a possible value of . Similarly, if the shortest path from through to is , and the shortest path from through to ends with the edge , then the whole path from to must be , and not any of the other five paths created by visiting in a different order. More generally, suppose is a set of cities. For every integer , write for the set created by removing from . Then if the shortest path from through to has as its second-to-last city, then removing the final edge from this path must give the shortest path from to through . This means there are only possible shortest paths from to through , one for each possible second-to-last city with length , and . This stage of the algorithm finishes when is known for every integer , giving the shortest distance from city to city that passes through every other city. The much shorter second stage adds these distances to the edge lengths to give possible shortest cycles, and then finds the shortest. The shortest path itself (and not just its length), finally, may be reconstructed by storing alongside the label of the second-to-last city on the path from to through , raising space requirements by only a constant factor. Algorithmic complexity The Held–Karp algorithm has exponential time complexity , significantly better than the superexponential performance of a brute-force algorithm. Held–Karp, however, requires space to hold all computed values of the function , while brute force needs only space to store the graph itself. Time Computing one value of for a -element subset of requires finding the shortest of possible paths, each found by adding a known value of and an edge length from the original graph; that is, it requires time proportional to . There are -element subsets of ; and each subset gives possible values of . Computing all values of where thus requires time , for a total time across all subset sizes . 
The second stage of the algorithm, finding a complete cycle from the n − 1 candidates, takes O(n) time and does not affect asymptotic performance. For undirected graphs, the algorithm can be stopped early after the step, and finding the minimum for every , where is the complement set of . This is analogous to a bidirectional search starting at and meeting at midpoint . However, this is a constant factor improvement and does not affect asymptotic performance. Space Storing all values of for subsets of size requires keeping values. A complete table of values of thus requires space . This assumes that the number of cities is sufficiently small that a subset can be stored as a bitmask in a constant multiple of machine words, rather than as an explicit k-tuple. If only the length of the shortest cycle is needed, not the cycle itself, then space complexity can be improved somewhat by noting that calculating for a of size requires only values of for subsets of size . Keeping only the values of where has size either or reduces the algorithm's maximum space requirements, attained when , to . Pseudocode Source:
function algorithm TSP (G, n) is
    for k := 2 to n do
        g({k}, k) := d(1, k)
    end for
    for s := 2 to n−1 do
        for all S ⊆ {2, ..., n}, |S| = s do
            for all k ∈ S do
                g(S, k) := min_{m≠k, m∈S} [g(S\{k}, m) + d(m, k)]
            end for
        end for
    end for
    opt := min_{k≠1} [g({2, 3, ..., n}, k) + d(k, 1)]
    return (opt)
end function
Related algorithms Exact algorithms for solving the TSP Besides dynamic programming, linear programming and branch and bound are design patterns also used for exact solutions to the TSP. Linear programming applies the cutting plane method in integer programming, i.e. solving the LP formed by two constraints in the model and then seeking the cutting plane by adding inequality constraints to gradually converge on an optimal solution. In practice, finding a good cutting plane often depends on experience, so this method is seldom used as a general-purpose one. The term branch and bound was first used in 1963 in a paper published by Little et al. on the TSP, describing a technique of combining smaller search spaces and establishing lower bounds to enlarge the practical range of application for an exact solution. The technique is useful for expanding the number of cities able to be considered computationally, but still breaks down in large-scale data sets. It controls the search by applying restrictive bounds, allowing the search to follow the most promising branches of the state-space tree and to find an optimal solution as quickly as possible. The pivotal component of this approach is the selection of the restrictive bound; different bounds may yield different branch-and-bound algorithms. Approximate algorithms for solving the TSP Because exact algorithms are practical only for limited problem sizes, approximate or heuristic algorithms are often used instead. The result of such an algorithm can be assessed by the bound C / C* ≤ ε, where C is the total travelling distance generated by the approximate algorithm, C* is the optimal travelling distance, and ε is the worst-case upper limit on the ratio of the approximate to the optimal travelling distance. The value of ε is greater than 1.0, and the closer it is to 1.0, the better the algorithm. These algorithms include: Interpolation algorithm, Nearest neighbour algorithm, Clark & Wright algorithm, Double spanning tree algorithm, Christofides algorithm, Hybrid algorithm, Probabilistic algorithm (such as Simulated annealing).
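A compact Python rendering of the dynamic program described above (an illustrative sketch rather than an optimized implementation; it also stores the second-to-last city alongside each path length so the optimal tour can be reconstructed, as discussed earlier):

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP by Held-Karp dynamic programming.

    dist is an n x n matrix of edge lengths; cities are numbered 0..n-1,
    with city 0 as the (arbitrary) starting point.  Returns the optimal
    tour length and the tour itself as a list of city indices.
    """
    n = len(dist)
    # g[(S, k)] = (length, m): length of the shortest path from 0 to k that
    # visits exactly the cities in frozenset S (with k in S), where m is the
    # second-to-last city on that path (kept for tour reconstruction).
    g = {(frozenset([k]), k): (dist[0][k], 0) for k in range(1, n)}

    for s in range(2, n):                          # subset sizes 2 .. n-1
        for subset in combinations(range(1, n), s):
            S = frozenset(subset)
            for k in subset:
                g[(S, k)] = min(
                    (g[(S - {k}, m)][0] + dist[m][k], m)
                    for m in subset if m != k
                )

    full = frozenset(range(1, n))
    opt, k = min((g[(full, k)][0] + dist[k][0], k) for k in range(1, n))

    # Walk the stored second-to-last cities backwards to rebuild the tour.
    path, S = [], full
    while k != 0:
        path.append(k)
        S, k = S - {k}, g[(S, k)][1]
    return opt, [0] + path[::-1] + [0]

# A small asymmetric example: the optimal tour is 0 -> 2 -> 3 -> 1 -> 0, length 21.
d = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
print(held_karp(d))
```

The dictionary of subproblem values grows roughly as 2^n, so this sketch is only practical for around twenty cities, which is exactly the space behaviour discussed above.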
Mathematics
Graph theory
null
23442715
https://en.wikipedia.org/wiki/Videocassette%20recorder
Videocassette recorder
A videocassette recorder (VCR) or video recorder is an electromechanical device that records analog audio and analog video from broadcast television or other AV sources and can play back the recording after rewinding. The use of a VCR to record a television program to play back at a more convenient time is commonly referred to as time shifting. VCRs can also play back prerecorded tapes, which were widely available for purchase and rental starting in the 80s and 90s, most popularly in the VHS videocassette format. Blank tapes were sold to make recordings. VCRs declined in popularity during the 2000s and in 2016, Funai Electric, the last remaining manufacturer, ceased production. History Early machines and formats The history of the videocassette recorder follows the history of videotape recording in general. Ampex introduced the quadruplex videotape professional broadcast standard format with its Ampex VRX-1000 in 1956. It became the world's first commercially successful videotape recorder using two-inch (5.1 cm) wide tape. Due to its high price of (), the Ampex VRX-1000 could be afforded only by the television networks and the largest individual stations. In 1959, Toshiba introduced a new method of recording known as helical scan, releasing the first commercial helical scan video tape recorder that year. It was first implemented in reel-to-reel videotape recorders (VTRs), and later used with cassette tapes. In 1963, Philips introduced its EL3400 1-inch helical scan recorder, aimed at the business and domestic user, and Sony marketed the 2" PV-100, its first reel-to-reel VTR, intended for business, medical, airline, and educational use. First home video recorders The Telcan (Television in a Can), produced by the UK Nottingham Electronic Valve Company in 1963, was the first home video recorder. It was developed by Michael Turner and Norman Rutherford. It could be purchased as a unit or in kit form for £1,337 (). There were several drawbacks as it was expensive, not easy to assemble, and could record only 20 minutes at a time. It recorded in black-and-white, the only format available in the UK at the time as color broadcasts were not available until BBC Two began broadcasting in color in 1967. An original Telcan Domestic Video Recorder can be seen at the Nottingham Industrial Museum. The half-inch tape Sony model CV-2000, first marketed in 1965, was its first VTR intended for home use. It was the first fully transistorized VCR. The development of the videocassette followed the replacement by cassette of other open reel systems in consumer items: the Stereo-Pak four-track audio cartridge in 1962, the compact audio cassette and Instamatic film cartridge in 1963, the 8-track cartridge in 1965, and the Super 8 home movie cartridge in 1966. In 1972, videocassettes of movies became available for home use through Cartrivision. The format never became widely popular because recorders were expensive (retailing for $1,350 ()) and players were not available as standalone units. Cassettes intended for home use were encased in black plastic, and could be rewound by a home recorder, whereas rental cassettes could not be rewound, and had to be returned to the retailer in order to be rewound. Sony U-matic Sony demonstrated a videocassette prototype in October 1969, then set it aside to work out an industry standard by March 1970 with seven fellow manufacturers. The result, the Sony U-matic system, introduced in Tokyo in September 1971, was the world's first commercial videocassette format. 
Its cartridges, resembling larger versions of the later VHS cassettes, used 3/4-inch (1.9 cm)-wide tape and had a maximum playing time of 60 minutes, later extended to 80 minutes. Sony also introduced two machines (the VP-1100 videocassette player and the VO-1700, also called the VO-1600 video-cassette recorder) to use the new tapes. U-matic, with its ease of use, quickly made other consumer videotape systems obsolete in Japan and North America, where U-matic VCRs were widely used by television newsrooms (Sony BVU-150 and Trinitron DXC 1810 video camera), schools, and businesses. But the high cost of a combination TV/VCR kept it out of most homes. Philips "VCR" format In 1970, Philips developed a home video cassette format, specially made for a TV station, which became available on the consumer market in 1972. Philips named this format "Video Cassette Recording" (although it is also referred to as "N1500", after the first recorder's model number). Mass-market success The industry boomed in the 1980s as more and more customers bought VCRs. By 1982, 10% of households in the United Kingdom owned a VCR. The figure reached 30% in 1985 and by the end of the decade well over half of British homes owned a VCR. VHS vs. Betamax Two major standards, Sony's Betamax (also known as Betacord or just Beta) and JVC's VHS (Video Home System), competed for sales in what became known as the format war. Betamax was first to market in November 1975, and was argued by many to be technically more sophisticated in recording quality. Legal challenges In the early 1980s US film companies fought to suppress the VCR in the consumer market, citing concerns about copyright violations. In Congressional hearings, Motion Picture Association of America head Jack Valenti decried the "savagery and the ravages of this machine" and likened its effect on the film industry and the American public to the Boston Strangler. In the case Sony Corp. of America v. Universal City Studios, Inc., the Supreme Court of the United States ruled that the device was allowable for private use. Subsequently, the film companies found that making and selling video recordings of their productions had become a major income source. Shortcomings Video cassette recorders are sensitive to changes in temperature and humidity. If the machine (or tape) is moved from a hot to a colder environment, there could be condensation of moisture on the internal parts, such as the rotating video head drum. Some later models were equipped with a dew warning which would prevent operation in this case, but it could not detect moisture on the surface of a tape. The presence of moisture between the tape and the rotating head drum increases friction which prevents correct operation and can cause damage to both the recording device and the tape. In extreme cases, if the dew sensor fails to function and stop the video recorder, moisture can cause the tape to stick to the spinning video head. This can pull a large amount of tape from the cassette before the head drum stops spinning. The tape will be extensively damaged, the video heads will often become clogged, and the mechanism may be unable to eject the cassette. The dew sensor itself is mounted very close to the video head drum. Contrary to how one might expect this to behave, the sensor increases its resistance when moisture is present. Poor contacts on the sensor can therefore be a cause of random dew sensor warnings.
Usually, a "DEW" indicator or error code lights up on the display of most VCRs/camcorders, and on some, a buzzer may sound. Magnetic tapes could be mechanically damaged when ejected from the machine due to moisture or other problems. Rubber drive belts and rollers hardened with age, causing malfunctions. Decline Around the late '90s and early 2000s, DVDs became the first universally successful optical medium for playback of pre-recorded video, as it gradually overtook VHS to become the most popular consumer format. DVD recorders and other digital video recorders dropped rapidly in price, making the VCR obsolete. DVD rentals in the United States first exceeded those of VHS in June 2003. The declining market, combined with a US FCC mandate effective March 1, 2007, that all new TV tuners in the US include ATSC and QAM support, encouraged major electronics manufacturers to end production of standalone units, with VCR/DVD combo decks being made since then; most of them then can only record from external baseband sources (usually composite video), including CECBs which (by NTIA mandate) all have composite outputs, as well as those ATSC tuners (including TVs) and cable boxes that come with composite outputs; some combo units that allow recording to DVD do include an ATSC tuner built into them. JVC did ship one model of D-VHS deck with a built-in ATSC tuner, the HM-DT100U, but it remains extremely rare, and therefore expensive. In July 2016, Funai Electric, the last remaining manufacturer of VHS VCR/DVD combo recorders, announced it would cease production of VHS recorders by the end of the month. As a result of winning the format war over HD DVD, the new high definition optical disc format Blu-ray Disc was expected to replace the DVD format. However, with many homes still having a large supply of VHS tapes and with all Blu-ray players designed to play regular DVDs and CDs by default, some manufacturers began to make VCR/Blu-ray combo players. Quality Due to the path followed by the video and Hi-Fi audio heads being striped and discontinuous—unlike that of the linear audio track—head-switching is required to provide a continuous audio signal. While the video signal can easily hide the head-switching point in the invisible vertical retrace section of the signal, so that the exact switching point is not very important, the same is obviously not possible with a continuous audio signal that has no inaudible sections. Hi-Fi audio is thus dependent on a much more exact alignment of the head switching point than is required for non-HiFi VHS machines. Misalignments may lead to imperfect joining of the signal, resulting in low-pitched buzzing. Variants Most camcorders produced in the 20th century also feature an integrated VCR. Generally, they include neither a timer nor a TV tuner. Most of these use smaller format videocassettes, such as 8 mm, VHS-C, or MiniDV, although some early models supported full-size VHS and Betamax. In the 21st century, digital recording became the norm while videocassette tapes dwindled away gradually; tapeless camcorders use other storage media such as DVDs, or internal flash memory, hard drive, and SD card.
Technology
Media and communication: Basics
null
23446323
https://en.wikipedia.org/wiki/MeerKAT
MeerKAT
MeerKAT, originally the Karoo Array Telescope, is a radio telescope consisting of 64 antennas in the Meerkat National Park, in the Northern Cape of South Africa. In 2003, South Africa submitted an expression of interest to host the Square Kilometre Array (SKA) Radio Telescope in Africa, and the locally designed and built MeerKAT was incorporated into the first phase of the SKA. MeerKAT was launched in 2018. MeerKAT is located inside a radio quiet zone within the park. Along with the Hydrogen Epoch of Reionization Array (HERA), also in South Africa, and two radio telescopes in Western Australia, the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA), the MeerKAT is one of four precursors to the final SKA. History MeerKAT is a precursor for the SKA-mid array, as are the Hydrogen Epoch of Reionization Array (HERA), the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA). Description It is located on the SKA site in the Karoo, and is a pathfinder for SKA-mid technologies and science. It was designed by engineers within the South African Radio Astronomy Observatory and South African industries, and most of the hardware and software was sourced in South Africa. It comprises 64 antennas, each 13.5 m in diameter, equipped with cryogenic receivers. The antennas have positions for four receivers, and one of the three vacant positions will be filled by S-band receivers provided by the Max Planck Institute for Radio Astronomy (MPIfR). The array configuration has 61% of the antennas located within a 1 km diameter circle, and the remaining 39% distributed out to a radius of 4 km. The receiver outputs are digitised immediately at the antenna, and the digital data streams are transported to the Karoo Array Processor Building (KAPB) via buried optical fibres. The antenna signals are processed by the Correlator/Beamformer (CBF) digital signal processor. Data from the CBF is passed on to the Science Processor computer cluster and disk storage modules. The MeerKAT antenna data is also made available to a number of user-supplied digital backends via the CBF, including pulsar and fast radio burst (FRB) search engines, a precision pulsar timing system, and a SETI signal processor. A time and frequency reference (TFR) system provides clock and absolute time signals required by the digitisers and other telescope subsystems. This TFR system comprises two hydrogen maser clocks, two rubidium atomic clocks, a precise crystal oscillator, and a set of GNSS receiver systems for time transfer with UTC. The massive computing and digital signal-processing systems located at the KAPB are housed in a large shielded chamber (or Faraday cage) to prevent radio signals from the equipment interfering with the sensitive radio receivers. The KAPB itself is partially buried below ground level to provide additional radio frequency interference (RFI) protection, and to provide temperature stability. The KAPB also houses a power conditioning facility for the entire site, including three diesel rotary UPS units that provide an uninterrupted power supply to the whole site. A long-haul optical fibre transfers data from the KAPB to the Centre for High Performance Computing (CHPC) and the SARAO office in Cape Town, and provides a control and monitoring link to the SARAO operations centre in Cape Town. Telescope data processing and reduction is executed on compute facilities provided by the MeerKAT SP systems, and on other high-performance computing facilities provided by MeerKAT users.
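As a rough illustration of the figures above (61% of the 64 antennas within a 1 km diameter circle, the remainder out to a 4 km radius), the sketch below scatters dishes at random under those constraints and counts the resulting baselines. It is a toy layout under stated assumptions, not the optimised MeerKAT configuration, and all names in it are illustrative.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Toy layout: 61% of 64 dishes inside a 500 m radius (1 km diameter) circle,
# the remaining 39% out to a 4 km radius. The real MeerKAT layout was
# optimised for imaging performance, not drawn at random.
n_dishes = 64
n_core = round(0.61 * n_dishes)        # 39 dishes
n_outer = n_dishes - n_core            # 25 dishes

def random_annulus(n, r_max, r_min=0.0):
    """Uniform random points (metres, east/north) in an annulus [r_min, r_max]."""
    r = np.sqrt(rng.uniform(r_min**2, r_max**2, n))
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

positions = np.vstack([random_annulus(n_core, 500.0),
                       random_annulus(n_outer, 4000.0, 500.0)])

# Every unordered pair of dishes forms one baseline that the Correlator/
# Beamformer must process: 64 * 63 / 2 = 2016 baselines.
baselines = [np.linalg.norm(positions[i] - positions[j])
             for i, j in combinations(range(n_dishes), 2)]
print(len(baselines), "baselines;",
      f"longest ~{max(baselines):.0f} m in this toy layout")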
Specifications MeerKAT, inaugurated in July 2018, consists of 64 dishes, each 13.5 metres in diameter, in an offset Gregorian configuration. An offset dish configuration was chosen because its unblocked aperture provides uncompromised optical performance and sensitivity, excellent imaging quality and good rejection of unwanted radio frequency interference from satellites and terrestrial transmitters. It also facilitates the installation of multiple receiver systems in the primary and secondary focal areas and is the reference design for the mid-band SKA concept. MeerKAT supports a wide range of observing modes, including deep continuum, polarisation and spectral line imaging, pulsar timing and transient searches. A range of standard data products are provided, including an imaging pipeline. A number of "data spigots" are also available to support user-provided instrumentation. Significant design and qualification efforts are planned to ensure high reliability, low operational cost and high availability. MeerKAT's 64 dishes are distributed over two components: a dense inner component containing 70% of the dishes, distributed in a two-dimensional Gaussian distribution with a mean dispersion of 300 m, a shortest baseline of 29 m and a longest baseline of 1 km; and an outer component containing 30% of the dishes, also distributed in a two-dimensional Gaussian distribution, with a mean dispersion of 2,500 m and a longest baseline of 8 km. Construction schedule To acquire experience in the construction of interferometric telescopes, members of the Karoo Array Telescope constructed the Phased Experimental Demonstrator (PED) at the South African Astronomical Observatory in Cape Town between 2005 and 2007. During 2007, the eXperimental Development Model Telescope (XDM) was built at the Hartebeesthoek Radio Astronomy Observatory to serve as a testbed for MeerKAT. Construction of the MeerKAT Precursor Array (MPA, also known as KAT-7) on the site started in August 2009. In April 2010, four of the first seven dishes were linked together as an integrated system to produce its first interferometric image of an astronomical object. In December 2010, there was a successful detection of very long baseline interferometry (VLBI) fringes between the Hartebeesthoek Radio Astronomy Observatory 26 m dish and one of the KAT-7 dishes. Despite original plans to complete MeerKAT by 2012, construction was suspended in late 2010 due to a budget restructuring. Science Minister Naledi Pandor denied that the suspension marked any setback to the SKA project or was due to 'external considerations'. MeerKAT construction received no funding in 2010/11 and 2011/12. The 2012 South African National Budget projected that just 15 MeerKAT antennas would be completed by 2015. The last of the reinforced concrete foundations for the MeerKAT antennas was completed on 11 February 2014. Almost 5,000 m³ of concrete and over 570 tonnes of steel were used to build the 64 bases over a 9-month period. MeerKAT is planned to be completed in three phases. The first phase will include all the antennas, but only the first receiver will be fitted. A processing bandwidth of 750 MHz is available. For the second and third phases, the remaining two receivers will be fitted and the processing bandwidth will be increased to at least 2 GHz, with a goal of 4 GHz. With construction of all sixty-four MeerKAT antennas complete, verification tests have begun to ensure the instruments are functioning correctly.
Following this, MeerKAT will be commissioned in the second half of 2018, with the array then coming online for science operations. Inauguration On 13 July 2018, the Deputy President of South Africa, David Mabuza, inaugurated the MeerKAT Telescope, and unveiled an image produced by MeerKAT that revealed unprecedented detail of the region surrounding the supermassive black hole at the centre of our Milky Way Galaxy. The 64 MeerKAT antennas will be incorporated into Phase 1 of the SKA Mid Frequency Array once the 133 SKA dishes have been built and commissioned on the Karoo site, resulting in a total of 197 antennas for the SKA array. All of the infrastructure currently associated with MeerKAT will be transferred to the SKA array. The KAPB has the capacity to house the additional equipment required by SKA Mid. Science objectives The science objectives of the MeerKAT surveys are in line with the prime science drivers for the first phase of the SKA, confirming MeerKAT's designation as an SKA precursor instrument. Five years of observing time on MeerKAT have been allocated to leading astronomers who have applied for time to do research. Site The South African Department of Science and Technology, through the NRF and SARAO, has invested more than R760 million in infrastructure on the South African SKA site. The innovative design and engineering of the infrastructure established for MeerKAT, as well as the RFI-quiet environment, favourable physical site characteristics, and on-site technical expertise, have positioned the site in the Karoo as an ideal location for other radio astronomy experiments. The HERA (Hydrogen Epoch of Reionisation Array) radio telescope is one such instrument co-located at the South African SKA site. HERA is designed to detect, for the first time, radio signals from the very first stars and galaxies that formed early in the life of the universe. South African engineers and scientists are working with their colleagues at the University of California, Berkeley, in the US and Cambridge University in the UK to build HERA and exploit its unique and fundamental scientific capabilities. Other experiments that have been constructed at the SA SKA site include PAPER (the Precision Array for Probing the Epoch of Reionization) and C-BASS (the C-Band All Sky Survey). To ensure the long-term viability of the Karoo site for MeerKAT, the SKA, and other radio astronomy instruments, the South African Parliament passed the Astronomy Geographic Advantage Act in 2007. The act gives the Minister of Science and Technology the authority to protect areas, through regulations, that are of strategic national importance for astronomy and related scientific endeavours. Discoveries In September 2019, an international team of astronomers using South Africa's MeerKAT radio telescope discovered enormous balloon-like structures that tower hundreds of light-years above and below the centre of our galaxy. South Africa and SKA science and technology The experience gained by South African engineers in the design and construction of MeerKAT has been carried over to the SKA design, reducing risks and development costs. South African engineers within SARAO and South African industrial partners have participated in 7 of the 11 SKA engineering design consortia, contributing about 10% of the workforce in these internationally distributed consortia.
The Infrastructure South Africa Consortium and the Assembly, Integration, Verification (AIV) Consortium have been led by SARAO, and there was South African participation in the DISH Consortium, Science Data Processor (SDP) Consortium, the Signal and Data Transport (SaDT Consortium), the Telescope Manager (TM) Consortium and the Mid-frequency Aperture Array Consortium. South African engineers have overseen the system engineering aspects of 5 of the consortia. SARAO has signed an MoU with the SKAO to provide resources to the Bridging Activities that will continue the development of SKA subsystems now that the consortia have concluded their work. Participation by South African industrial partners in previous consortium work and future Bridging Activities is facilitated by SARAO through the Financial Assistance Programme (FAP) funding initiative. Scientists from SARAO and South African universities are well represented on the various SKA Science Working Groups (SWGs), with about 10% of the authors of papers in the SKA Science Book having South African institution affiliations. The MeerKAT Large Science Projects (LSPs) are closely aligned with the SKA science case, and there is a large membership overlap between the LSP teams and the associated SWGs. Capacity development for radio astronomy in Africa To create the required skills to design, construct and operate the SKA and MeerKAT telescopes, and to make optimal use of these radio telescopes for research, once commissioned, SARAO initiated a capacity development programme, in 2005. The programme is fully integrated into the operations of SARAO, and it is crafted to develop and retain the researchers, engineers and artisans required to ensure that the MeerKAT and SKA will be successful in South Africa. To date the programme has provided more than 1000 scholarships and fellowships across all relevant academic levels, and for a range of relevant qualifications. The programme is coveted by academic colleagues from abroad because of its success in developing, from a low base, significant expertise in radio astronomy over the past 14 years. African Very Long Baseline Interferometry Network (AVN) The African Very Long Baseline Interferometry (VLBI) Network (AVN) is an important development towards building SKA on the African Continent. The AVN programme will transfer skills and knowledge in the SKA African partner countries (Botswana, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Namibia, and Zambia) to build, maintain, operate and use radio telescopes. MeerKAT will also participate in global VLBI operations with all major radio astronomy observatories around the world and will add considerably to the sensitivity of the global VLBI network. Further potential science objectives for MeerKAT are to participate in the search for extraterrestrial intelligence and collaborate with NASA on downloading information from space probes.
Technology
Ground-based observatories
null
33548254
https://en.wikipedia.org/wiki/Twitch%20%28service%29
Twitch (service)
Twitch is an American video live-streaming service that focuses on video games, including broadcasts of esports competitions. It also offers music broadcasts, creative content, and "in real life" streams. Twitch is operated by Twitch Interactive, a subsidiary of Amazon. It was introduced in June 2011 as a spin-off of the general-interest streaming platform Justin.tv. Content on the site can be viewed either live or via video on demand. The games shown on Twitch's homepage are listed according to audience preference and include genres such as real-time strategy games, fighting games, racing games, and first-person shooters. The popularity of Twitch eclipsed that of Justin.tv. In October 2013, the website had 45 million unique viewers, and by February 2014, it was considered the fourth-largest source of peak Internet traffic in the United States. At the same time, Justin.tv's parent company was rebranded as Twitch Interactive to represent the shift in focus, and Justin.tv itself was shut down in August 2014. The same month, the service was acquired by Amazon for US$970 million, which later led to the introduction of synergies with the company's subscription service Amazon Prime. By 2015, Twitch had more than 100 million viewers per month. In 2017, Twitch remained the leading live-streaming video service for video games in the US, and had an advantage over YouTube Gaming, which shut down its standalone app in May 2019. It had three million broadcasters monthly and 15 million active users daily, with 1.4 million average concurrent users. Twitch had over 27,000 partner channels. Twitch was the 37th-most-visited website in the world, with 20.26% of its traffic coming from the United States, followed by Germany with 7.08% and South Korea with 5.49%. In late 2023, Twitch announced that it would stop operating in South Korea in 2024, citing prohibitive costs resulting from the country's network fee policy. History Founding and initial growth (2007–2013) When Justin.tv was launched in 2007 by Justin Kan and Emmett Shear, two recent Yale graduates, the site was divided into several content categories. The gaming category grew especially fast, and became the most popular content on the site. In June 2011, the company decided to spin off the gaming content as TwitchTV, inspired by the term twitch gameplay. It launched officially in public beta on June 6, 2011. Since then, Twitch has attracted more than 35 million unique visitors a month. Twitch had about 80 employees in June 2013, which increased to 100 by December 2013. The company was headquartered in San Francisco's Financial District. Twitch has been supported by significant investments of venture capital, with funding rounds in 2012 (on top of the capital originally raised for Justin.tv) and in 2013. Investors during three rounds of fundraising leading up to the end of 2013 included Draper Associates, Bessemer Venture Partners and Thrive Capital. In addition to the influx of venture funding, it was believed in 2013 that the company had become profitable. Especially since the shutdown of its direct competitor Own3d.tv in early 2013, Twitch has become the most popular e-sports streaming service by a large margin, leading some to conclude that the website has a "near monopoly on the market". Competing video services, such as YouTube and Dailymotion, began to increase the prominence of their gaming content to compete, but have had a much smaller impact so far. As of mid-2013, there were over 43 million viewers on Twitch monthly, with the average viewer watching an hour and a half a day.
By February 2014, Twitch was the fourth largest source of Internet traffic during peak times in the United States, behind Netflix, Google, and Apple. Twitch made up 1.8% of total US Internet traffic during peak periods. In late 2013, particularly due to increasing viewership, Twitch had issues with lag and low frame rates in Europe. Twitch subsequently added new servers in the region. To further address these problems, Twitch implemented a new video system shown to be more efficient than the previous one. Initially, the new video system was criticized by users because it caused a significant stream delay, interfering with broadcaster–viewer interaction. Twitch staff said that the increased delay was likely temporary and, at the time, an acceptable tradeoff for the decrease in buffering. Growth, YouTube acquisition speculation (2014) On February 10, 2014, Twitch's parent company (Justin.tv, Inc.) was renamed Twitch Interactive, reflecting the increased prominence of the service over Justin.tv as the company's main business. That same month, a stream known as Twitch Plays Pokémon, a crowdsourced attempt to play Pokémon Red using a system translating chat commands into game controls, went viral. By February 17, the channel reached over 6.5 million total views and averaged concurrent viewership of between 60,000 and 70,000 viewers, with at least 10% participating. Vice President of Marketing Matthew DiPietro praised the stream as "one more example of how video games have become a platform for entertainment and creativity that extends WAY beyond the original intent of the game creator. By merging a video game, live video and a participatory experience, the broadcaster has created an entertainment hybrid custom made for the Twitch community. This is a wonderful proof of concept that we hope to see more of in the future." Beginning with its 2014 edition, Twitch was made the official live-streaming platform of the Electronic Entertainment Expo. On May 18, 2014, Variety first reported that Google had reached a preliminary deal to acquire Twitch through its YouTube subsidiary. On August 5, 2014, the original Justin.tv site suddenly ceased operations, citing a need to focus resources entirely on Twitch. On August 6, 2014, Twitch introduced an updated archive system, with multi-platform access to highlights from past broadcasts by a channel, higher quality video, increased server backups, and a new Video Manager interface for managing past broadcasts and compiling "highlights" from broadcasts that can also be exported to YouTube. Due to technological limitations and resource requirements, the new system contained several regressions; the option to archive complete broadcasts on an indefinite basis ("save forever") was removed, meaning that they could only be retained for a maximum of 14 days, or 60 for partners and Turbo subscribers. While compiled highlights can be archived indefinitely, they were limited to two hours in length. In addition, Twitch introduced a copyright fingerprinting system that would mute audio in archived clips if it detected a copyrighted song in the stream. Amazon subsidiary (2014–present) On August 25, 2014, Amazon acquired Twitch Interactive for US$970 million in an all-cash deal. Sources reported that the rumoured Google deal had fallen through, allowing Amazon to make the bid, with Forbes reporting that Google had backed out of the deal due to potential antitrust concerns surrounding it and its existing ownership of YouTube.
The acquisition closed on September 25, 2014. Take-Two Interactive, which owned a 2% stake at the time of the acquisition, made a windfall of $22 million. Under Amazon, Shear continued as chief executive officer of Twitch Interactive, with Sara Clemens added to the executive team as chief operating officer in January 2018. Shear touted the Amazon Web Services platform as an "attractive" aspect of the deal, and said that Amazon had "built relationships with the big players in media", which could be used to the service's advantage—particularly in the realm of content licensing. The purchase of Twitch marked the third recent video gaming–oriented acquisition by Amazon, which had previously acquired the developers Reflexive Entertainment and Double Helix Games. On December 9, 2014, Twitch announced it had acquired GoodGame Agency, an organisation that owns the esports teams Evil Geniuses and Alliance. In March 2015, Twitch reset all user passwords and disabled all connections to external Twitter and YouTube accounts after the service reported that someone had gained "unauthorised access" to the user information of some Twitch users. In June 2016, Twitch added a new feature known as "Cheering", a special form of emoticon purchased as a microtransaction using an in-site currency known as "Bits". Bits are bought using Amazon Payments, and cheers act as donations to the channel. Users also earn badges within a channel based on how much they have cheered. On August 1, 2016, it was reported that Twitch had signed a lease for 185,000 square feet (17,187 m²) in a new office tower to be constructed at 350 Bush Street in San Francisco. On August 16, 2016, Twitch acquired Curse LLC, an operator of online video gaming communities and gaming-oriented VoIP software. In December 2016, GoodGame Agency's teams were divested by Amazon to their respective members due to conflict-of-interest concerns. On September 30, 2016, Twitch announced Twitch Prime, a service which provides premium features that are exclusive to users who have an active Amazon Prime subscription. This included advertising-free streaming, monthly offers of free add-on content ("Game Loot"), and game discounts. Games included with the game loot rewards were Apex Legends, Legends of Runeterra, FIFA Ultimate Team, Teamfight Tactics, Mobile Legends: Bang Bang, Doom Eternal, and more. In December 2016, Twitch announced a semi-automated chat moderation tool, which uses natural language processing and machine learning to set aside potentially unwanted content for human review. In February 2017, Twitch announced the Twitch Game Store, a digital distribution platform that would expose digital purchases of games within the site's browsing interface. When streaming games available on the store, partnered channels could display a referral link to purchase the game—receiving a 5% commission. Users also received a "Twitch Crate" on every purchase, which included Bits and a collection of random chat emotes. In August 2017, Twitch announced it had acquired video indexing platform ClipMine. On August 20, 2018, Twitch announced that it would no longer offer advertising-free access to the entire service to Amazon Prime subscribers, with this privilege requiring the separate "Twitch Turbo" subscription or an individual channel subscription. This privilege ended for new customers effective September 14, 2018, and for existing customers in October 2018.
In October 2018, Twitch announced Amazon Blacksmith, a new extension allowing broadcasters to configure displays of products associated with their streams with Amazon affiliate links. On November 27, 2018, Twitch discontinued the Game Store service, citing that it did not generate as much additional revenue for partners as they hoped, and new revenue opportunities such as Amazon Blacksmith. Users retain access to their purchased games. On December 12, 2018, Fandom, Inc. had reached an agreement to acquire Curse Media, a spin-off of Curse, from Twitch Interactive for an undisclosed amount. Curse was dissolved and its assets were moved under Twitch Interactive. Twitch's new headquarters at 350 Bush Street opened in August 2019. To comply with historic preservation requirements, the developer kept the front facade of the San Francisco Curb Exchange, but tore down everything behind the facade, and built a reconstruction of the old trading hall through which visitors must walk to reach the modern high-rise office tower behind it. Twitch acquired the Internet Games Database (IGDb), a user-driven website similar in functionality to Internet Movie Database (IMDb) to catalog details of video games in September 2019. Twitch plans to use the database service to improve its own internal search features and help users find games they are interested in. On September 26, 2019, Twitch unveiled a new logo and updated site design. The design is accompanied by a new advertising campaign, "You're already one of us", which will seek to promote the platform's community members. Twitch began signing exclusivity deals with high-profile streamers in December 2019. Twitch introduced a Safety Advisory Council in May 2020, made up from streamers, academics, and think tanks, with a goal to develop guidelines for moderation, work-life balance, and safeguarding the interests of marginalized communities for the platform. The announcement attracted controversy, and CEO Emmett Shear later clarified that the role of the council was purely advisory. On June 22, 2020, Twitch Interactive sold CurseForge to Overwolf for an undisclosed sum. In August 2020, Twitch Prime was renamed Prime Gaming, aligning it closer with the Amazon Prime family of services. In 2020, Twitch sold Union For Gamers to Magic Find. In May 2021, Twitch announced that it would introduce over 350 new tags to categorize streams, including finer tags for gender identity, sexual identity, and disabilities, as well as tags for other types of themes (such as virtual streamers). The disability and LGBT-oriented tags were developed in consultation with the video game accessibility charities AbleGamers and SpecialEffect, and the LGBT organizations GLAAD and The Trevor Project. On October 6, 2021, an anonymous hacker reportedly leaked "the entirety" of Twitch, including its source code of the Twitch client and APIs, and details of the payouts made to almost 2.4 million streamers since August 2019. The user posted a 128GB torrent link to 4chan and said that the leak, which includes source code from almost 6,000 internal Git repositories, is also "part one" of a larger release. The leak also included details of plans for a digital storefront under the codename of "Vapor" meant to be a competitor to Steam along with details on payment received by streamers for their work on Twitch. Twitch verified they had suffered a data leak which they attributed to a server misconfiguration used by a "malicious third party". 
While Twitch found no indication that login credentials or credit card information had been taken in the breach, the company reset all stream keys as a precaution. On August 23, 2022, Twitch announced that it would no longer enforce its exclusivity agreement, allowing Twitch streamers to livestream on other streaming platforms. The announcement noted that simulcasting on Twitch and other "Twitch-like" streaming platforms was still prohibited; however, an exemption to the simulcasting restriction was applied to short-form streaming platforms such as Instagram and TikTok. Despite the specific mention of restrictions on simulcasting, former Twitch employees noted that Twitch would likely not enforce the restriction, as doing so would be very difficult, and the company had not been enforcing it for several months prior to the announcement. After the announcement, many high-profile streamers who were limited by exclusivity, such as Ninja and Pokimane, started streaming on other platforms. On September 21, 2022, Twitch announced it would be reducing the subscription revenue earned by large streamers. Though most streamers get 50% of revenue from subscriptions, some larger streamers have premium subscription terms, which give them 70% of subscription revenue. The new change, set to take effect on June 1, 2023, would mean premium streamers would keep 70% of the first $100,000 earned from subscriptions, after which their cut would be lowered to 50%. The announcement came after Twitch declined a popular request for all streamers to have a 70% share of subscription revenue, which many noted is the same revenue share already offered by YouTube. Twitch President Dan Clancy justified the change in a statement issued on Twitch's blog, stating it was done to cover Twitch's operating costs, noting the premium 70% split had stopped being offered to new streamers over a year prior, and pointing to alternate streamer revenue sources that would not be affected by the subscription revenue reductions, such as Prime Subs or advertisement breaks. Though Clancy claimed 90% of streamers would not be affected by the revenue reduction, the change drew criticism from many streamers, who viewed it as harmful to the security of streaming careers and more beneficial to Twitch and its advertisers than to its users, with several streamers expressing doubt about Clancy's claims of Twitch's high operating costs, and noting that Twitch already has alternative revenue sources that make reducing streamer revenue unnecessary. The announcement led to some streamers considering leaving Twitch or organizing boycotts. In December 2022, Amazon Senior Vice President Jeff Blackburn retired and was replaced by Steve Boom as Vice President of Audio, Twitch, and Games. Twitch CEO Dan Clancy reports directly to Boom. In March 2023, Clancy became CEO of Twitch, after previous CEO and Justin.tv co-founder Emmett Shear announced he would step down after 16 years at the company. Both Shear and Clancy have been described as "more product-focused than creator-focused". On March 20, Clancy announced that Twitch would be laying off 400 employees, as part of Amazon-wide layoffs affecting 9,000 workers across the company. On June 6, 2023, Twitch announced restrictions on third-party sponsor placements in streams, including restricting the size of sponsor logos, and prohibiting "burned-in" audio, video, or display advertising.
The changes were met with criticism from major streamers such as Asmongold (who threatened to leave the service), Cr1TiKaL, and Zentreya due to their broad wording, concerns that they would impact streamers' existing relationships with advertisers, and their impact on charity and esports events that rely extensively on sponsorship. The service quickly retracted the new branded content policy and announced that it would be clarified, stating that it was intended to "clarify our existing ads policy that was intended to prohibit third party ad networks from selling burned in video and display ads on Twitch, which is consistent with other services", and that Twitch "[does] not intend to limit streamers' ability to enter into direct relationships with sponsors." In August 2023, Twitch began to trial a "Discovery Feed" feature in its mobile apps, populated by "featured" clips from followed users. In October 2023, Twitch began to implement stories. At TwitchCon 2023, Twitch announced upcoming updates to its Guest Star feature (concurrently renamed "Stream Together") to allow for merged chat rooms, and that streamers under an affiliation or partnership agreement with the service (unless contractually required otherwise) would be allowed to simulcast their streams on competing platforms such as YouTube, as opposed to only mobile-centric video platforms. On December 6, 2023, Twitch announced that it would exit the South Korea market effective February 27, 2024, citing the prohibitive costs of offering the service in the country. Due to demands from internet service providers that Twitch pay network access fees, Twitch restricted streams to 720p quality in September 2022, and blocked access to video-on-demand (VOD) content (including archived broadcasts and clips) in February 2023. Users in South Korea will no longer be allowed to monetize their streams, and will be offboarded from the affiliate and partnership programs. In February 2024, Twitch was additionally fined ₩435 million ($327,067) by the Korea Telecommunications Commission, which deemed Twitch's degradation of service in the country to be unjustified and to undermine the interests of users. In January 2024, Twitch announced another mass layoff, affecting 500 employees or 35% of total staff, after previous layoffs in early 2023. The announcement came amid ongoing struggles and ensuing layoffs across the tech and digital media sectors. In October 2024, Twitch's longtime head of music Cindy Charles died. Content Twitch is designed to be a platform for content, including esports tournaments, personal streams of individual players, and gaming-related talk shows. A number of channels do live speedrunning. The Twitch homepage currently displays games based on viewership. Some of the most popular games streamed on Twitch are Fortnite, Grand Theft Auto V, League of Legends, Dota 2, PlayerUnknown's Battlegrounds, Hearthstone, Overwatch and Counter-Strike: Global Offensive, with a combined total of over 356 million hours watched. Twitch has also expanded into non-gaming content; for example, in July 2013 the site streamed a performance of 'Fester's Feast' from San Diego Comic-Con, and on July 30, 2014, electronic dance music act Steve Aoki broadcast a live performance from a nightclub in Ibiza.
In January 2015, Twitch introduced an official category for music streams, such as radio shows and music production activities, and in March 2015, announced that it would become the new official live-streaming partner of the Ultra Music Festival, an electronic music festival in Miami. On October 28, 2015, Twitch launched a second non-gaming category, "Creative", which is intended for streams showcasing the creation of artistic and creative works. To promote the launch, the service also streamed an eight-day marathon of Bob Ross's The Joy of Painting. In July 2016, Twitch launched "Social eating" as a beta; it was inspired by the Korean phenomenon of mukbang, and by Korean players who had engaged in the practice as intermissions on their gaming streams. In March 2017, Twitch added an "IRL" category, which is designed for content within Twitch guidelines that does not fall within any of the other established categories on the site (such as lifelogs). GeekWire reported that "while gameplay still makes up the vast majority of the content broadcast via Twitch, the 'Just Chatting' category—a catch-all term that encompasses anything from candid conversation to reality programming—took the top spot by a comfortable margin overall in December [2019]. While the category has been on the rise for the last couple of months, this was the first time that it's actually achieved No. 1 overall for a tracked period on the platform". In 2020, Thrillist described Twitch as "talk radio for the extremely online". Michael Espinosa, for Business Insider in 2021, highlighted that "Twitch dominates the live content space, with 17 billion hours watched last year (per StreamElements), compared to YouTube Gaming Live's 10 billion (per the company). But the vast majority of gaming content is still consumed on-demand, where YouTube is the clear leader with over 100 billion hours watched last year". As a teaching tool Twitch is often used for video game tutorials; the nature of Twitch allows large numbers of learners to interact with each other and the instructor in real time. Twitch is also used for software development learning, with communities of users streaming programming projects and talking through their work. Charity Broadcasters on Twitch often host streams promoting and raising money for charity. By 2013, the website had hosted events which raised donations for charitable causes, such as Extra Life 2013. Twitch has raised over US$75 million in donations for charitable causes. The biggest charity event on Twitch is ZEvent, a French project created by Adrien Nougaret and Alexandre Dachary, with more than US$10 million raised for Action Contre la Faim in October 2021. Esports ESL tournaments have aired on Justin.tv and later Twitch.tv since 2009. The platform has also been a longtime broadcaster of the Evolution Championship Series. Twitch has been the official broadcaster of the League of Legends World Championship since 2012, as well as other League of Legends tournaments organized by Riot Games. Dota 2's premier tournament The International has been livestreamed on Twitch since 2013. The platform has aired Rocket League tournaments organized by Psyonix since 2016. The ELeague has also broadcast events on Twitch since 2016. Twitch and Blizzard Entertainment signed a two-year deal in June 2017 to make Twitch the exclusive streaming broadcaster of select Blizzard esports championship events, with viewers subscribed to Twitch Prime earning special rewards in various Blizzard games.
Twitch also reached a deal in 2018 to be the streaming partner of the Overwatch League, with the site also offering an "All-Access Pass" with exclusive content, emotes, and in-game items for Overwatch. Blizzard switched to rival platform YouTube in 2020. Fortnite Battle Royale competitions have aired on Twitch since its launch in 2017, including the E3 2018 Fortnite Pro-Am and the 2019 Fortnite World Cup. The NBA 2K League has been livestreamed on Twitch since its inception in 2018. As the COVID-19 pandemic suspended motorsports competitions around the world, several series launched sim racing competitions with real-life professional drivers. Some series had official broadcasts on Twitch, such as Formula One and IMSA. Many drivers also had their personal live streams on Twitch, as was the case for several eNASCAR iRacing Pro Invitational Series and INDYCAR iRacing Challenge drivers. Professional sports In December 2017, the National Basketball Association announced that it would stream NBA G League games on Twitch starting on December 15; the broadcasts also include interactive statistics overlays, as well as additional streams of the games with commentary by Twitch personalities. In April 2018, it was announced that Twitch would carry eleven National Football League Thursday Night Football games from 2018 to 2021 in simulcast with Fox, as part of the league's renewed streaming deal with Amazon Prime Video. During the 2017 season, these streams had been exclusive to Amazon Prime subscribers. As part of the broadcasts, Twitch would also offer alternate broadcasts, including broadcasts hosted by Twitch personalities, and NFL Next Live—an interactive broadcast hosted by Andrew Hawkins and Cari Champion. With Thursday Night Football moving exclusively to Amazon Prime Video for the 2022 NFL season, Twitch will continue to carry simulcasts of all games, while the site will also carry alternate broadcasts (such as one featuring Dude Perfect). In January 2019, professional wrestling promotion Impact Wrestling announced that it would stream its weekly show Impact! on Twitch, in simulcast with the television airing on the US cable network Pursuit Channel (co-owned with the promotion's parent company Anthem Sports & Entertainment). On September 5, 2019, the Premier Hockey Federation announced a three-year broadcast rights deal with Twitch, covering all games and league events. The deal also contained an agreement with the Premier Hockey Federation Players' Association for revenue sharing with players, and marked the first time that the NWHL had ever received a rights fee. The National Women's Soccer League announced a three-year deal in March 2020 for Twitch to stream 24 matches per season in the United States and Canada, collaborate on original content, and serve as the rightsholder for all matches outside of the United States and Canada. On June 20, 2020, it was announced that Twitch would stream a package of four Premier League soccer matches within the United Kingdom, as an extension of Prime Video's local rights to the league and of a plan to air all of the remaining matches of the 2019–20 season (played behind closed doors following the resumption of play during the COVID-19 pandemic), some of which would be carried free-to-air. On July 16, 2020, US radio broadcaster Entercom announced a partnership to stream video simulcasts of programs from some of its major-market sports talk stations on Twitch channels.
On July 22, 2020, Twitch officially launched a Sports category, primarily playing host to content streamed by sports leagues and teams on the platform. The 2021 Copa América association football tournament aired in Spain on Twitch, under a partnership with Gerard Piqué's media company Kosmos and streamer Ibai Llanos. Emotes Twitch features a large number of emotes. There are emotes free for all users, emotes for Turbo users, emotes for Twitch Prime users, and emotes for users who are subscribed to Twitch partners or affiliates. The most used emote is "x0pashL" with 8.85 billion uses, and the most used global emote is "TriHard" with 4.39 billion uses. Twitch-partnered broadcasters unlock more "emote slots" as they gain more subscribers, up to a maximum of 50 emotes per channel. On January 6, 2021, Twitch announced that it had removed the PogChamp emote, the third most-used emote on the platform in 2018, typically used to express excitement, joy or shock. The decision was made in response to comments from the streamer Ryan "Gootecks" Gutierrez, the face of the emote, supporting civil unrest during the 2021 storming of the United States Capitol following the death of a protestor. Twitch said it would work with the community on a suitable replacement for the emoticon. Twitch later announced that there would be a new PogChamp emote every 24 hours. On February 12, Twitch viewers elected KomodoHype as the new permanent PogChamp emote. Creators and audience Streamers Streamer Ninja had been among Twitch's top personalities, with over 14 million followers. In August 2019, however, Ninja announced that he would move exclusively to a Microsoft-owned competitor, Mixer. After Ninja left, the top three streamers in October 2019 based on follower count were Tfue (7.01 million followers), Shroud (6.45 million followers) and TSM Myth (5.1 million followers). Twitch began signing exclusivity deals with high-profile streamers in December 2019, starting with DrLupo, TimTheTatman, and Lirik, who had a combined 10.36 million followers at the time. Dr DisRespect signed a multi-year deal in March 2020. In May 2020, Twitch signed popular streamers Summit1g, dakotaz and JoshOG to multi-year exclusive deals. On June 26, 2020, Dr DisRespect was banned from Twitch for unexplained reasons and his channel was removed from the site. Following the discontinuation of Mixer in late July 2020, both Ninja and Shroud (who had also defected to the service) re-signed exclusively with Twitch. There have been eight streamers who have reached over 100,000 concurrent subscribers. These streamers are Ninja, Shroud, Ranboo, Ludwig, Casimiro, Ironmouse, Gaules and Ibai. In April 2021, Business Insider reported that "over the past 31 days, Ahgren has streamed non-stop in an attempt to break the record of 269,154 subscribers held by gaming personality Tyler 'Ninja' Blevins. By the end of the month-long stream, Ahgren had over 282,000 subscribers on his channel. [...] At one point during his sleep cycle, his channel had the most concurrent viewers of any on the platform". In analyses of the October 2021 data leak, multiple news outlets reported that the three top-earning Twitch content creators are Critical Role ($9,626,712), xQc ($8,454,427), and Summit1g ($5,847,541). Sisi Jiang, for Kotaku, reported that "excluding streams that are run by multiple people (such as Critical Role), there are no women in the top third of top-earning Twitch content creators"; in total, there are only three women in the top 100 and only one is a woman of color.
Jiang highlighted that these streamers are "Valorant streamer Pokimane at 39th place, cosplayer Amouranth at 48th, and music streamer Sintica at 71st" and commented that "in spite of the complaints about the 'hot tub meta,' 'titty streamers,' and how some male streamers perceive that female streamers are stealing views from men, the numbers show that only a small percentage of women are among the ranks of Twitch's highest-earning content creators". In August 2021, DrLupo left Twitch for an exclusivity deal with YouTube; TimTheTatman followed in September 2021, as did Ludwig Ahgren in November 2021. Nathan Grayson, for The Washington Post, commented that when streamers moved to Mixer in 2019, Twitch quickly locked down multiple streamers in exclusivity deals; however, streamers who moved to Mixer saw their audiences undergo "a marked downsizing. [...] It demonstrated that many viewers within Twitch's ecosystem, when deprived of their favorite big streamers, will just find other Twitch streamers to take their place. [...] Now Twitch is bargaining from a place of confidence. That allows it to reevaluate previous deals made when streamers had more leverage". Grayson reported that lower offers from Twitch, coupled with Twitch's higher streaming hour requirement ("YouTube's contracts start at 100 hours of streaming time per month while Twitch's start at 200"), have made YouTube's exclusivity deals "tantalizing" to some Twitch streamers. Grayson wrote that "Ryan Wyatt, head of YouTube Gaming, said that allowing streamers to have a better work–life balance is a big priority for him"; DrLupo cited work–life balance as part of his decision to leave Twitch. Users It was reported in the early 2010s that the typical Twitch viewer was male and aged between 18 and 34, although the site has also made attempts at pursuing other demographics, including women. By 2015, Twitch had more than 100 million viewers per month. In 2017, Twitch remained the leading live-streaming video service for video games in the US. GeekWire reported that "while Twitch's overall share of the streaming market has been steadily diminishing over the course of the year, from 67.1 percent in December 2018 to 61 percent at the end of the 2019, the steady growth of the overall market means that the overall amount of content watched on the service has done nothing but increase". The journal article World of Streaming. Motivation and Gratification on Twitch reported the results of a Twitch user survey in 2017. When ranking motivations for using Twitch, users reported (in descending order) that they watched Twitch "to be entertained", "to follow gaming events", and to "have an alternative for television". Motivations classified as "socialization" and "information" ranked lower than motivations classified as "entertainment". It had 3 million broadcasters monthly and 15 million daily active users, with 1.4 million average concurrent users. Statista, a company specializing in market and consumer data, reported that "as of May 2020, users in their teens and twenties accounted for more than three-quarters of Twitch's active app user accounts in the United States. According to recent data users aged 20 to 29 years, accounted for 40.6 percent of the video streaming app's user base on the Android platform". They also reported that the "distribution of Twitch users in the United States as of 2nd quarter 2021" was 75% male and 25% female.
As of 2022, the countries with the most Twitch users were the United States (93 million), Brazil (16.9 million), Germany (16.8 million), France (15.4 million), the United Kingdom (13.4 million), Russia (10.5 million), Spain (10.5 million), Argentina (10 million), Mexico (9.2 million), and Italy (8.3 million users). The United States accounts for roughly 36% of all Twitch users. Twitch allows anyone to watch a live broadcast and does not require viewers to log in. Users also have the option to follow and subscribe (also known as "subbing") to streamers. Following is a free option, similar to other platforms such as Instagram and Twitter, where the user will see their followed streamers on the front page of Twitch when signed in and can receive notifications of specific broadcasts. Subscribing is a way for users to financially support streamers in exchange for exclusive benefits determined by the individual streamer. Users who link their Twitch account to their Amazon Prime account gain access to Prime Gaming, which includes one complimentary Twitch subscription per month that the user can assign to the streamer of their choice. The aforementioned 2017 academic survey stated that 31.5% of users "spent money on Twitch"; of those users, 22.6% "donated to a streamer", 31.6% subscribed to a streamer and 45.8% "did both". The majority of these users stated that the "main motivation is to support a streamer financially". Twitch's Terms of Service does not allow people under 13 years of age to use its services. Additionally, people who are at least 13 years old but below the age of majority in their jurisdiction (18 in most jurisdictions) may only use the services under the supervision or permission of a parent or other legal guardian who agrees to abide by the Terms of Service. Partner and affiliate programs In July 2011, Twitch launched its Partner Program, which reached over 11,000 members by August 2015. Similar to the Partner Program of other video sites like YouTube, the Partner Program allows popular content producers to share in the advertisement revenue generated from their streams. Additionally, Twitch users can subscribe to partnered streamers' channels for US$4.99 a month, often granting the user access to unique emoticons, live chat privileges, and various other perks. Twitch retains US$2.49 of every US$4.99 channel subscription, with the remaining US$2.50 going directly to the partnered streamer. Although exceptions were made, Twitch previously required that prospective partners have an "average concurrent viewership of 500+", as well as a consistent streaming schedule of at least three days a week. However, since the launch of the 'Achievements' feature, there is a clearer "Path to Partnership" with trackable goals for concurrent viewership, duration and frequency of streams. In April 2017, Twitch launched its "Affiliate Program", which allows smaller channels to generate revenue as well, also announcing that it would allow channels access to multi-priced subscription tiers. The participants of this program get some, but not all, of the benefits of Twitch Partners. Streamers can make a profit from cheering with Bits, which are purchasable directly from Twitch. Affiliates are also able to access the Twitch Subscriptions feature, with all the same functionality that Partners have access to, with a maximum of five subscriber emotes. In September 2019, the service announced that Affiliates would now receive a share of ad revenue.
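As a rough illustration of the split quoted above (Twitch retains US$2.49 of each US$4.99 base-tier subscription, the streamer receives US$2.50), the short sketch below estimates a partnered streamer's gross monthly subscription payout; the subscriber count is a made-up example, and real payouts vary with tiered pricing, regional pricing and individual contracts.

# Illustrative only: base US$4.99 tier split described above.
SUB_PRICE = 4.99
STREAMER_SHARE = 2.50   # Twitch keeps the remaining US$2.49

def monthly_sub_payout(subscribers: int) -> float:
    """Gross streamer payout for one month of base-tier subscriptions."""
    return subscribers * STREAMER_SHARE

# Example: a hypothetical channel with 1,000 base-tier subscribers
print(f"${monthly_sub_payout(1000):,.2f}")   # $2,500.00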
Additionally, in June 2023, Twitch introduced the Partner Plus Program. This program was designed to recognize Twitch partners who consistently bring a large audience and engagement to the platform. Streamers in this program receive a 70/30 revenue share on subscription revenue. To qualify for the program, creators had to maintain a sub count of at least 350 subscribers for three consecutive months. Once that is complete, qualifying members will be enrolled for the next 12 months. The program was officially launched on October 1, 2023. This enabled partners to earn more as they continue to grow their community. However, a number of streamers were not happy with the program. Streamers argued that it excluded certain creators because of the criteria and that creators would burn themselves out by trying to achieve 350 monthly subscribers. In January 2024, Twitch made some changes to the program. The platform announced that it would be introducing a new tier to its revenue share program that would grant a 60/40 revenue split and has lower requirements. In addition, the program would also become open to affiliates, expanding access to smaller creators. The program is now known as the "Plus Program." When the program was first launched the year prior, Twitch would only pay out the 70/30 revenue split until streamers made $100,000. Along with expanding the Partner Plus Program and adding a new revenue level, Twitch also announced that it would be eliminating the $100,000 cap for the 70/30 revenue share for all streamers. This was part of Twitch's strategy to provide more earning opportunities for streamers. The program uses a points system to determine which revenue split a streamer qualifies for. Each monthly subscription counts towards the points total. However, some subscriptions have higher point values. One tier 1 subscription is one point, one tier 2 subscription is two points, and one tier 3 subscription equals six points. To qualify for the 60/40 revenue split, streamers must maintain 100 Plus Points for three consecutive months. For the 70/30 revenue split, streamers must maintain 300 Plus Points. Advertising on the site has been handled by a number of partners. In 2011, Twitch had an exclusive deal with Future US. On April 17, 2012, Twitch announced a deal to give CBS Interactive the rights to exclusively sell advertising, promotions and sponsorships for the community. On June 5, 2013, Twitch announced the formation of the Twitch Media Group, a new in-house advertisement sales team which has taken over CBS Interactive's role of selling advertisements. For users who do not have ad-free access to a channel or Twitch Turbo, pre-roll advertising, and mid-roll commercial breaks that are manually triggered by the streamer, are displayed on streams. In September 2020, Twitch announced that it would test automated mid-roll advertising on streams, which cannot be controlled by the streamer. Content moderation and restrictions Copyrighted content On August 6, 2014, Twitch announced that all on-demand videos on Twitch became subject to acoustic fingerprinting using software provided by content protection company Audible Magic; if copyrighted music (particularly, songs played by users from outside of the game they are playing) is detected, the 30-minute portion of the video which contains the music will be muted. Live broadcasts were not subject to these filters. 
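The muting behaviour described above can be approximated with a small sketch: given the timestamps flagged by fingerprinting, mute the whole 30-minute block of the VOD that contains each match. The block boundaries and function names here are assumptions for illustration, not Twitch's actual implementation.

BLOCK = 30 * 60  # the VOD is treated as fixed 30-minute blocks (in seconds)

def muted_ranges(flagged_seconds, vod_length):
    """Return (start, end) spans to mute: one per 30-minute block that
    contains at least one fingerprint match."""
    blocks = {int(t // BLOCK) for t in flagged_seconds if 0 <= t < vod_length}
    return [(b * BLOCK, min((b + 1) * BLOCK, vod_length)) for b in sorted(blocks)]

# Example: matches at 12 min and 95 min into a two-hour VOD
print(muted_ranges([12 * 60, 95 * 60], 2 * 3600))
# [(0, 1800), (5400, 7200)]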
A system was available to challenge the filtering for those who believed they were inappropriately affected or who had rights to the music they used. Twitch offered a selection of royalty-free music for streamers to use, which was expanded later in January 2015. The audio filtering system, along with the lack of communication surrounding the changes in general, proved to be controversial among users. In a Reddit AMA, co-founder Emmett Shear admitted that his staff had "screwed up" and should have provided advance warning of the changes, and promised that Twitch had "absolutely no intention" of implementing audio filtering on live broadcasts. In June 2020, Twitch received a large wave of DMCA takedown notices aimed at older VODs and "clips" (short segments of streams that can be captured by users) containing copyrighted music from 2017 to 2019. Twitch complied with the takedowns and also issued a number of copyright strikes against viewers. Concerned streamers were notified that they should remove all VODs and clips unless they were certain the content did not contain copyrighted material. This provoked a major backlash, both at the loss of prior content and over viability concerns, given the inability to review or even rapidly delete content. There were also complaints that strikes were being issued on viewer-created clips, even where the streamer-created content had been deleted. On September 15, 2020, Twitch signed a licensing agreement with the French performance society SACEM, allowing composers and publishers to collect royalties whenever their music is streamed in France. Twitch already had licensing deals with the American societies ASCAP, BMI, SESAC and Global Music Rights. To address these issues and also build upon the growth of music-based content on Twitch, Twitch introduced an extension known as "Soundtrack" in September 2020, which plays rights-cleared music with curated genre-based playlists. It is contained within a separate audio track that is not recorded with VODs, and had agreements with 24 music distributors and independent record labels at launch. A group of US performance rights and music associations accused Twitch of designing Soundtrack in such a way as to avoid payment of mechanical and synchronization licenses—a design which Twitch has defended. In September 2021, Twitch and the National Music Publishers' Association signed a creative partnership. In September 2024, Twitch introduced a new program for DJs on the platform, allowing them access to a new stream category for DJ mixes where music from participating labels may be streamed without the risk of a DMCA takedown, with a share of revenue being used to pay royalties (for affiliates and partners, this is deducted from the channel's existing split). This program has limitations: features such as VODs and clips are disabled on any channel enrolled in the DJ program, even if streaming non-DJ content (with Twitch officially recommending the use of a separate account for non-DJ streams), and DJs are restricted to playing music from certain labels and publishers that have reached agreements with the service (potentially restricting the ability to play unofficial remixes).
Mature content Twitch users are not allowed to stream any game that is rated "Adults Only" (AO) in the United States by the Entertainment Software Rating Board (ESRB), regardless of its rating in any other geographical region, nor any game that contains "overtly sexual content" or "gratuitous violence", nor content which violates the terms of use of third-party services. Twitch has also explicitly banned specific games from streaming, regardless of rating; this includes games such as BMX XXX, eroge visual novel games (such as Dramatical Murder), HuniePop, Rinse and Repeat, Second Life, and Yandere Simulator. The banning of Yandere Simulator was criticized by YandereDev, the developer of the game. He believed that the game was being arbitrarily singled out with no explanation, as Twitch had not banned other games with similarly excessive sexual or violent content, such as Mortal Kombat X, Grand Theft Auto, or The Witcher 3. Twitch took temporary action in May 2019 after channels related to the video game Artifact began to be used for inappropriate content. Artifact, a major game by Valve, had lost most of its audience within months of its release, and by late May 2019, several popular livestreamers commented that the total viewership for Artifact streams had dropped to near zero. In the days that followed, several streamers started to broadcast streams that purported to be Artifact gameplay but actually contained trolling or other off-topic content. Initially these new streams were playful or joking in nature, such as showing animal videos or League of Legends matches. After a few days, other Artifact channel streams appeared containing content that was against the terms of Twitch's use policy, including full copyrighted movies, pornography, Nazi propaganda, and at least one stream that showed the entirety of the shooter's video from the Christchurch mosque shootings. The titles of such streams were usually worded to imply that the streamer was simply showing other content while waiting in queue for Artifact matches, so as to appear legitimate. As word of these streams in the Artifact section grew, Twitch took action, deleting the account that streamed the Christchurch shooting. Twitch then took steps to temporarily ban new accounts from streaming until it could resolve the issue. By June 2019, Twitch had begun taking legal action against one hundred "John Doe" streamers in a California court, accusing them of trademark infringement, breach of contract, fraud, and unlawful use of the service that was harming the platform and scaring away its users. In early 2021, some streamers began to use their Twitch channels to broadcast themselves from hot tubs while wearing swimsuits. Twitch considered these streams to be "not advertiser friendly", banning some of the more prominent channels that had taken this route. In May 2021, Twitch clarified in a "Pools, Hot Tubs, and Beaches" post that it was not trying to discriminate against women or others through this action, but was instead acting against content that it deemed to be "sexually suggestive". In June that year, Twitch also took similar action against users who performed yoga while simultaneously making autonomous sensory meridian response (ASMR) sounds into their microphones, which Twitch stated was also approaching sexual content. In December 2023, shortly after instituting a new policy that allowed specific instances of adult content, Twitch reversed its decision to allow depictions of "fictionalized nudity" following backlash from users and streamers. 
Twitch CEO Dan Clancy acknowledged that the new policy was a step too far, and that distinguishing between digital art and photography was challenging. As a result, Twitch will no longer permit depictions of real or fictional nudity, regardless of the medium. The company also removed content that violated the updated policy and issued channel enforcements. While some changes to Twitch's Sexual Content Policy remain in effect, such as allowing content that highlights certain body parts and specific dances without a label, games featuring nudity or sexual violence as a core focus are still prohibited. Hate speech and harassment In February 2018, Twitch updated its acceptable content policies to direct that any content that it deemed hateful be suspended from its platform. In June 2020, a number of women stepped forward with accusations towards several streamers on Twitch and other services related to sexual misconduct and harassment claims. Twitch stated it would review all reported incidents and comply with law enforcement in any investigative efforts. However, several popular streamers on Twitch's service believed that the platform could do more to evaluate the accused individuals, prevent incidents, and protect others in the future, and used June 24, 2020, as a Twitch blackout day, not streaming any content through Twitch to show their support. By the evening of June 24, 2020, Twitch had placed several bans on the accounts of those accused after completing their investigation, and stated in a blog post they would be forwarding additional details to law enforcement. Twitch temporarily suspended an account belonging to then US President Donald Trump's campaign on June 29, 2020. Twitch stated that "hateful conduct is not allowed" as the reason for the suspension. Twitch announced a new policy towards harassment and hateful content in December 2020 that would take effect on January 22, 2021, aimed to better protect marginalized users of the service. While the new policy is more strict, Twitch said that this also includes a larger sliding scale of remedies or punishments to better deal with edge cases, such as temporarily blocking one's channel for a short time rather than a full ban. The new rules include a ban on imagery containing the Confederate Flag, and 'racist emotes', though the list of such emotes has not been clarified yet. The new policy included banning words that were considered sexual insults, such as "incel" and "virgin" when used for harassment. The banned words included "simp", which raised criticism by streamers and long-time viewers. While its slang origins have defined "simp" derogatorily as "a man who invests a lot of time and energy into women who don't want him", the term had become common on Twitch as an insult related to men being nice to women on the service or simply to refer to a person with loyalty to another. Twitch, in response, clarified that penalties for using these terms would only be enforced if they were being used in harassment of other users. On December 4, 2020, Twitch removed the "blind playthrough" tag due to concerns of ableism that it may be offensive to those who are visually impaired. Suggestions for non-offensive and more neutral labels include "first playthrough", "undiscovered," and "no spoilers." A popular feature of Twitch is the ability to "raid" another channel, where multiple users, coordinated from a different Twitch channel or another social media service, all join a target channel to provide support and encouragement. 
This is typically used to help boost the popularity of the target channel, particularly if the raid is organized by a popular streamer. Twitch had officially supported this type of activity since 2017, with the ability for a streamer to send all of their viewers to another channel as a raid. However, around mid-2021, new types of "hate raids" began to occur with increasing frequency on Twitch. In these cases, numerous users would flood a channel and its chat with messages of harassment and hate towards the streamer as a form of cyberbullying. Most of these users were automated bots, which made it difficult for channel moderators to deal with the volume of messages. Despite streamers warning Twitch about these hate raids, Twitch showed little action towards stopping them, leading numerous streamers to organize "#ADayOffTwitch" on September 1, 2021, as a form of protest, in the hope that Twitch would find ways to take action against the raids. After acknowledging its problems in dealing with hate raids, Twitch launched a lawsuit in early September 2021 against two individuals it had determined to be responsible for managing several hate raids, having already permanently banned their accounts. At the end of September 2021, Twitch introduced tools for streamers to limit who may participate in their chat, so as to prevent hate raids. These tools allow streamers to limit chat to those who have verified their phone number or email with Twitch, as well as to those who have followed their channel for a minimum amount of time. In May 2022, following the mass shooting in Buffalo, the New York state attorney general's office announced an investigation into multiple online platforms, including Twitch, to determine their part in platforming or promoting hateful content. The investigation would also focus on the platforms' moderation efforts. A spokesperson for Twitch stated that the service would comply with the investigation. In November 2024, Twitch banned the use of "Zionist" as a slur. Twitch specified that the rule was conditional, saying that users are allowed to discuss the political movement of that name, but not to "attack or demean another individual or group of people on the basis of their background or religious belief". This move was condemned by pro-Palestinian activists; Omar Zahzah, writing in the Palestine Chronicle, said it would ultimately lead to the banning of Palestine from the platform. Extremism In April 2021, it was reported that Twitch was providing a financial lifeline to extremists such as QAnon adherents and far-right influencers. Another report, published in August and September 2021 by the Institute for Strategic Dialogue (ISD), also identified Twitch as a platform where far-right extremists run rampant. According to the ISD, clips from far-right extremists, such as 'Omegle Redpilling', became quite popular before they were removed from the platform, and in some cases were not removed at all. In the wake of the Hamas-led attack on Israel on October 7, 2023, Twitch restricted the creation of new accounts from Israel and Palestine. According to a public statement by Twitch, this was done to prevent the streaming of "graphic material related to the attack and to protect the safety of users"; however, Twitch then inadvertently forgot to remove the restriction in the following months. After some complaints that it was engaging in antisemitism, Twitch reverted the block in October 2024, apologizing for the oversight and calling it an "unacceptable miss". 
Gambling On September 20, 2022, Twitch announced that beginning October 18, it would prohibit the streaming of slots, roulette and dice games on gambling websites not licensed in the US or "other jurisdictions that provide sufficient consumer protection". The policy change does not affect sports betting, fantasy sports, and poker. Gambling has been extremely popular on Twitch for years, being one of the most popular types of content on the platform, with many streamers being sponsored by online gambling services; however, it has also been controversial, with prominent streamers such as Mizkif and Pokimane speaking out against the negative effects of gambling streams. The announcement came shortly after a popular streamer admitted he spent $200,000 in donations on CS:GO skin gambling. Internet censorship the Twitch website is blocked and the app is blocked from the Apple App Store in China. In India, Twitch was reportedly blocked by Reliance-owned telecommunication company Jio as well as internet service providers JioFiber and Hathway in September 2020 as some users were illegally streaming Indian Premier League cricket matches on the platform. Slovakia's government reportedly blocked Twitch in June 2021 after a streamer in the country with around 35,000 followers was found streaming poker, which was in violation of local gambling laws. On July 4, 2022, the Iranian government blocked access to Twitch for Iranian Internet users. On February 23, 2024, Twitch was blocked in Turkey per a complaint by the country's lotteries commission. Service was restored in the country six days later on February 29. Platform support Twitch CEO Emmett Shear has stated a desire to support a wide variety of platforms, stating that they wanted to be on "every platform where people watch video". Users can watch Twitch streams via Twitch's website in a web browser and via dedicated streaming apps for mobile devices, digital media players and video game consoles. This includes: Twitch's mobile apps for Android, Fire OS, and iOS Twitch's web-based TV and game console apps for Fire TV, webOS (LG TVs), Samsung TVs (2020 and newer), Android TV, Apple TV, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X/S The Twitch web-apps for Xbox and PlayStation consoles received an update that replaced the app with the "viewer-only" web-app for TVs, removing several features, including chatting. The Twitch Desktop App for Windows and macOS is no longer supported. Twitch's web-based TV and game console apps for PlayStation 3, Xbox 360, Nintendo Switch, and pre-2021 (Tizen-based) Samsung TVs are no longer supported Users can broadcast to Twitch from the following platforms: Twitch's mobile apps for Android, Fire OS, and iOS The free and open-source OBS Studio app for Windows, macOS and Linux Twitch Studio app for Windows and macOS Native integration on PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, NVIDIA Shield TV Pro, and NVIDIA Shield tablets Prior versions of Twitch's app for Xbox One and Series X/S had broadcasting functionality before being replaced by "viewer-only" wrapped web-apps for a single codebase across TVs on November 16, 2022. Twitch said they worked with Microsoft so those consoles could still broadcast to Twitch, though not through the Twitch app, but with native integration. Twitch released a software development kit for third-party developers integrate Twitch broadcasting into their software. 
In-app integration in third-party apps for desktop like OBS, Streamlabs Desktop, Lightstream Studio, Melon, Split Broadcaster, Gamecaster, and NVIDIA GeForce Experience. Broadcasters can also use third party apps for mobile like Omlet Arcade and the Streamlabs App In-game integration such as Eve Online, PlanetSide 2, some Call of Duty games, Minecraft and War Thunder In-app integrations in EA's Origin and Ubisoft's Uplay are no longer supported. Twitch Desktop App and CurseForge After acquiring Curse LLC, Twitch Interactive rebranded the Curse app as the Twitch Desktop App in March 2017. It kept features for mod installation and management for supported games via CurseForge; kept Curse Voice features such as screen sharing, text chat, voice chat, video chat and community server creation; added a dedicated browser for the Twitch website; added Twitch's friends system; and added activity sharing. This update also redesigned the application. The software also served as the client for the former Twitch Game Store. The Curse mobile app was subsequently rebranded as Twitch Messenger which was later shut down. Twitch removed the app's servers, group messaging, voice chat and video chat functionality in February 2019. On June 22, 2020, Twitch Interactive sold CurseForge to Overwolf for an undisclosed sum. On December 2, 2020, mod management functionality was removed from the Twitch Desktop App. The mod management functionality previously found in the Twitch app can since be found in Overwolf's CurseForge app. On March 30, 2022, Twitch announced that it would officially end support for the Twitch Desktop App on April 30, 2022, opting users of the desktop app to use a web browser to interact with Twitch on desktop platforms. Creator Dashboard The Creator Dashboard on Twitch is a tool that helps streamers manage and optimize their channels. In 2019, the platform announced a new set of features to make streaming more accessible and interactive. This new Creator Dashboard introduced features such as Stream Manager, Quick Actions, Creator Updates, and Assistant. These features were introduced so that creators could set subscriber goals, analyze engagement trends, and simplify streaming tasks. The Stream Manager allows streamers to view various aspects of their livestream such as their live chat, recent followers and subscribers, and a playback of the stream to see what the viewers are seeing. The Quick Actions panel on the Creator Dashboard allows streamers to perform actions such as running ads, enabling followers-only or emote-only chat, and creating clips. The Creator Updates section is a dedicated space for streamers to learn about important product updates and feature changes. The Assistant section provides creators with resources to help them grow their channel and become an affiliate or partner. TwitchCon TwitchCon is a biannual fan convention devoted to Twitch and the culture of video game streaming. The inaugural event was held at the Moscone Center in San Francisco from September 25 to 26, 2015. Since its inception TwitchCon has been an annual event. The second TwitchCon was held in San Diego at the San Diego Convention Center from September 30 to October 2, 2016. The third annual TwitchCon was held in Long Beach at the Long Beach Convention and Entertainment Center from October 20 to 22, 2017. The fourth annual TwitchCon was held at the San Jose McEnery Convention Center in San Jose, California, from October 26 to 28, 2018. 
In 2019, TwitchCon expanded overseas, hosting its first European event in Berlin in April 2019, alongside a North American event in San Diego in November 2019. A TwitchCon event had been planned for Amsterdam in May 2020, but it was cancelled due to the COVID-19 pandemic. Another event planned for San Diego in September 2020 was also cancelled due to COVID-19.
Technology
Social network and blogging
null
41800775
https://en.wikipedia.org/wiki/Agricultural%20geography
Agricultural geography
Agricultural geography is a sub-discipline of human geography concerned with the spatial relationships between agriculture and humans; that is, it studies the phenomena and processes that shape the agricultural use of the earth's surface in different regions. History Humans have interacted with their surroundings for as long as humans have existed. According to the article "How Does an Agricultural Region Originate?", English settlers who landed on American soil hundreds of years ago greatly shaped American agriculture when they learned from the Natives how to plant and grow crops. Settlers continued to change the landscape by clearing wooded areas and turning them into pasture. Focus It is traditionally considered the branch of economic geography that investigates those parts of the Earth's surface that are transformed by humans through primary sector activities for consumption. It thus focuses on the different types and structures of agricultural landscapes and asks which cultural, social, economic, political, and environmental processes lead to these spatial patterns. While most research in this area concentrates on production rather than consumption, a distinction can be made between nomothetic research (e.g. the distribution of spatial agricultural patterns and processes) and idiographic research (e.g. human-environment interaction and the shaping of agricultural landscapes). The latter approach to agricultural geography is often applied within regional geography. Events The war in Bosnia-Herzegovina from 1992 to 1995 affected a large majority of the country's farming land due to the large number of land mines (approximately 1 million) that were planted and never recovered or detonated. The mined areas have been abandoned for obvious safety reasons. Since much of the mined area was farmland, residents of the country now have to find other places to grow the crops they once planted there. Research Studies A research study conducted in Uganda compared four different types of environment: rain-forest with no animal interaction, rain-forest with animal and human interaction, urban living areas, and rain-forest with animal interaction. After running several analytical tests on the topsoil and rainwater, the researchers determined that the urban living areas had higher levels of nitrogen and calcium and a higher pH.
Technology
Academic disciplines
null
30994374
https://en.wikipedia.org/wiki/Loach
Loach
Loaches are ray-finned fish of the suborder Cobitoidei. They are freshwater, benthic (bottom-dwelling) fish found in rivers and creeks throughout Eurasia and northern Africa. Loaches are among the most diverse groups of fish; the 1249 known species of Cobitoidei comprise about 107 genera divided among 9 families. Etymology The name Cobitoidei comes from the type genus, Cobitis, described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. However, its origin predates modern zoological nomenclature and derives from a term used by Aristotle to refer to "small fishes that bury... like the gudgeon." Description Loaches display a wide variety of morphologies, making the group difficult to characterize as a whole using external traits. They range in adult length from the 23 mm (1 in) miniature eel-loach, Pangio longimanus, to the 50 cm (20 in) imperial flower loach, Leptobotia elongata, with the latter weighing up to 3 kg (6.6 lbs). Most loaches are small, narrow-bodied and elongate, with minute cycloid scales that are often embedded under the skin, patterns of brown-to-black pigment along the dorsal surface and sides, and three or more pairs of whisker-like barbels at the mouth. The type species of the family Cobitidae, Cobitis taenia, has a body shape and pigment pattern typical of Cobitoidei. However, many loaches are eel-like or conversely, quite stout-bodied; some balitorids have large, visible scales. Loaches in the families Cobitidae, Botiidae, and Serpenticobitidae possess a bifid, protrusible spine below the eye, or in the case of the genus Acantopsis, between the eye and the tip of the snout. Taxonomy Classification Cobitoidei is a suborder within the order Cypriniformes, one of the most diverse groups of vertebrates. The order is commonly known as "minnows, carps, loaches, and relatives," and has included the suckers (Catostomidae) and algae eaters (Gyrinocheilidae), these are now regarded as separate suborders, the Gyrinocheiloidei and the Catostomoidei. Members of the latter family, which contains only a single genus Gyrinocheilus, are sometimes referred to as sucking loaches. It is uncertain if Gyrinocheilidae, or a clade containing both Gyrinocheilidae and Catostomidae, is sister to Cobitoidei. Eschmeyer's Catalog of Fishes classifies the families in the suborder as follows: Suborder Cobitoidei Fitzinger, 1832 Family Botiidae Berg, 1940 (pointface loaches) Family Vaillantellidae Nalbant & Bănărescu, 1977 (longfin loaches) Family Cobitidae Swainson, 1838 (spined loaches) Family Barbuccidae Kottelat, 2012 (scooter loaches) Family Gastromyzontidae Fowler, 1905 (hillstream loaches) Family Serpenticobitidae Kottelat, 2012 (snake loaches) Family Balitoridae Swainson, 1839 (river loaches) Family Ellopostomatidae Bohlen & Šlechtová, 2009 (square-head loaches) Family Nemacheilidae Regan, 1911 (brook loaches) History of classification At the turn of the 20th century only two families of loaches had been described, and of these only Cobitidae was widely recognized by taxonomists. In the early 1900s, the American ichthyologist Fowler and the Indian ichthyologist Hora recognized what would come to be known as Balitoridae and Gastromyzontidae. Nemachelidae, and later Botiidae, were described as subfamilies of Cobitidae until their elevation to family status in 2002. Owing to shared morphological characteristics (see osteology, below) the relationship of the botiid and cobitid loaches was particularly difficult to resolve until the advent of molecular phylogenetics. 
Three of the nine families, containing only two or three species apiece, were recognized within the last ten years. Phylogeny Reproduction of molecular phylogeny of Cobitoidea from Bohlen & Šlechtová, 2009, with common names following Eschmeyer's Catalog of Fishes. Osteology Among loaches, the majority of known morphological synapomorphies (shared characters derived from a common ancestor) are osteological. In particular, modifications to the ethmoid and surrounding bones within the neurocranium unite Cobitoidei, in addition to certain lateral-line canal ossifications. An erectile suborbital spine, a modification of the lateral ethmoid, was formerly thought to represent a synapomorphy between Cobitidae and Botiidae. It is now considered a plesiomorphy of Cobitoidei, a character shared by the common ancestor but lost in most loach lineages. The suborbital spine is also retained in the serpent loaches, Serpenticobitidae. Habitat and distribution Loaches are found in a wide variety of habitats throughout Europe, northern Africa, and central and Southeast Asia. Most families occur predominantly in rocky mountain streams at high elevations, but almost all have lowland representatives as well. Many species of Cobitidae burrow in the sand and inhabit riverbeds in broad, flat terrain. At least three families contain blind, troglomorphic species adapted to life in caves. Relationship with humans Some loaches are important food fish, especially in East and Southeast Asia, where they are a common sight in markets. Loaches are popular in the aquarium trade, where they are fed sinking food discs designed for them. Some of the most well-known examples are the clown loach (Chromobotia macracanthus), the kuhli loach (Pangio kuhlii), and the dwarf chain loach (Ambastaia sidthimunki). Botiid and gastromyzontid loaches also occasionally make their way into the trade. Although loaches have a strictly Old World native distribution, the oriental weatherfish, Misgurnus anguillicaudatus (also known as the dojo loach), has been introduced in parts of the United States.
Biology and health sciences
Cypriniformes
Animals
24919513
https://en.wikipedia.org/wiki/Hectare
Hectare
The hectare (; SI symbol: ha) is a non-SI metric unit of area equal to a square with 100-metre sides (1 hm2), that is, 10,000 square metres (10,000 m2), and is primarily used in the measurement of land. There are 100 hectares in one square kilometre. An acre is about and one hectare contains about . In 1795, when the metric system was introduced, the are was defined as 100 square metres, or one square decametre, and the hectare ("hecto-" + "are") was thus 100 ares or  km2 (10,000 square metres). When the metric system was further rationalised in 1960, resulting in the International System of Units (), the are was not included as a recognised unit. The hectare, however, remains as a non-SI unit accepted for use with the SI and whose use is "expected to continue indefinitely". Though the dekare/decare daa (1,000 m2) and are (100 m2) are not officially "accepted for use", they are still used in some contexts. Description The hectare (), although not a unit of SI, is the only named unit of area that is accepted for use with SI units. The name was coined in French, from the Latin . In practice the hectare is fully derived from the SI, being equivalent to a square hectometre. It is widely used throughout the world for the measurement of large areas of land, and it is the legal unit of measure in domains concerned with land ownership, planning, and management, including law (land deeds), agriculture, forestry, and town planning throughout the European Union, New Zealand and Australia (since 1970). However, the United Kingdom, the United States, Myanmar (Burma), and to some extent Canada, use the acre instead of the hectare for measuring surface or land area. Some countries that underwent a general conversion from traditional measurements to metric measurements (e.g. Canada) required a resurvey when units of measure in legal descriptions relating to land were converted to metric units. Others, such as South Africa, published conversion factors which were to be used particularly "when preparing consolidation diagrams by compilation". In many countries, metrification redefined or clarified existing measures in terms of metric units. The following legacy units of area have been redefined as being equal to one hectare: Jerib () in Iran Djerib () in Turkey Gongqing () in China Manzana in Argentina Bunder in the Netherlands (until 1937) In Mexico, land area measurements are commonly given as combinations of hectares, ares, and centiares. These are commonly written separated by a dash; for example, 1-21-00.26 ha would mean 1 hectare, 21 ares, and 0.26 centiares (12,100.26 m2). History The metric system of measurement was first given a legal basis in 1795 by the French Revolutionary government. The law of 18 Germinal, Year III (7 April 1795) defined five units of measure: The :metre for length The are (100 m2) for area [of land] The stère (1 m3) for volume of stacked firewood The :litre (1 dm3) for volumes of liquid The gram for mass In 1960, when the metric system was updated as the International System of Units (SI), the are did not receive international recognition. The International Committee for Weights and Measures () makes no mention of the are in the 2019 edition of the SI brochure, but classifies the hectare as a "Non-SI unit accepted for use with the International System of Units". In 1972, the European Economic Community (EEC) passed directive 71/354/EEC, which catalogued the units of measure that might be used within the Community. 
The units that were catalogued replicated the recommendations of the CGPM, supplemented by a few other units including the are (and implicitly the hectare) whose use was limited to the measurement of land. Unit family The names centiare, deciare, decare and hectare are derived by adding the standard metric prefixes to the original base unit of area, the are. Decimilliare The decimilliare (dma, sometimes seen in cadastre area evaluation of real estate plots) is are or one square decimetre. Such usage of a double prefix is non-standard. The decimilliare is (100 mm)2 or roughly a four-inch-by-four-inch square. Centiare The centiare is one square metre. Deciare The deciare (rarely used) is ten square metres. Are The are ( or ) is a unit of area, equal to 100 square metres (), used for measuring land area. It was defined by older forms of the metric system, but is now outside the modern International System of Units (SI). It is still commonly used in speech to measure real estate, in particular in Indonesia, India, and in various European countries. In Russian and some other languages of the former Soviet Union, the are is called (: 'a hundred', i.e. 100 m2 or hectare). It is used to describe the size of suburban dacha or allotment garden plots or small city parks where the hectare would be too large. Many Russian dachas are 6 ares in size (in Russian, ). Decare The decare or dekare () is derived from deca and are, and is equal to 10 ares or 1000 square metres. It is used in Norway and in the former Ottoman areas of the Middle East and Bulgaria as a measure of land area. The names of the older land measures of similar size are usually used, redefined as exactly one decare: in Greece in the Balkans, Israel, Palestine, Jordan, Lebanon, Syria, and Turkey in Norway. Conversions The most commonly used units are in bold. One hectare is also equivalent to: 1 square hectometre 1.008 (Japan) 2.381 (Egypt) 6.25 (Thailand) 10 or (Middle East) 10 (Greece) 15 or 0.15 Unicode The Unicode character , in the CJK Compatibility block, is intended for compatibility with pre-existing East Asian character codes. It is not intended for use in alphabetic contexts. is a combination of (), the Japanese translation of "hectare".
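The relationships among these units can be illustrated with a short, hypothetical Python sketch. The metric factors follow directly from the definitions above (1 are = 100 m2, 1 decare = 1,000 m2, 1 hectare = 10,000 m2 = 1/100 km2); the acre factor used for comparison (1 acre = 4,046.8564224 m2) is the standard international value, supplied here rather than taken from the text, and the helper for the Mexican hectare-are-centiare notation is likewise only an illustration of the format described above.

# Area conversions among the units discussed in this article.
SQUARE_METRES = {
    "centiare": 1.0,            # 1 m^2
    "are": 100.0,               # 100 m^2
    "decare": 1_000.0,          # 10 ares
    "hectare": 10_000.0,        # 100 ares = 1 hm^2
    "square_kilometre": 1_000_000.0,
    "acre": 4_046.8564224,      # standard international value, not stated in the text above
}

def convert(value, from_unit, to_unit):
    """Convert an area between any two of the units listed above."""
    return value * SQUARE_METRES[from_unit] / SQUARE_METRES[to_unit]

def mexican_notation_to_m2(notation):
    """Parse the hectare-are-centiare notation used in Mexico,
    e.g. '1-21-00.26' -> 1 ha + 21 a + 0.26 ca = 12,100.26 m^2."""
    ha, a, ca = (float(part) for part in notation.split("-"))
    return ha * 10_000 + a * 100 + ca

print(convert(1, "hectare", "acre"))              # about 2.47 acres per hectare
print(convert(1, "square_kilometre", "hectare"))  # 100 hectares per square kilometre
print(mexican_notation_to_m2("1-21-00.26"))       # 12100.26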
Physical sciences
Area
null
37615379
https://en.wikipedia.org/wiki/Wild%20yak
Wild yak
The wild yak (Bos mutus) is a large, wild bovine native to the Himalayas. It is the ancestor of the domestic yak (Bos grunniens). Taxonomy The ancestor of the wild and domestic yak is thought to have diverged from Bos primigenius at a point between one and five million years ago. The wild yak is now normally treated as a separate species from the domestic yak (Bos grunniens). Based on genomic evidence, the closest relatives of yaks are considered to be bison, which have historically been considered members of their own titular genus, rendering the genus Bos paraphyletic. Relationships of members of the genus Bos based on nuclear genomes after Sinding, et al. 2021. Description The wild yak is among the largest extant bovid species. Adults stand about tall at the shoulder, and weigh . The head and body length is , not counting the tail of . The females are about one-third the weight and are about 30% smaller in their linear dimensions when compared to bull wild yaks. Domesticated yaks are somewhat smaller. They are heavily built animals with a bulky frame, sturdy legs, and rounded cloven hooves. To protect against the cold, the udder in females and the scrotum in males are small, and covered in a layer of hair. Females have four teats. Both sexes have long shaggy hair, with a dense woolly undercoat over the chest, flanks, and thighs for insulation against the cold. In males especially, this undercoat may form a long "skirt" that can reach the ground. The tail is long and horse-like, rather than tufted like the tails of cattle or bison. The coat is typically black or dark brown, covering most of the body, with a grey muzzle (although some wild golden-brown individuals have been reported). Wild yaks with gold coloured hair are known as the wild golden yak (). They are considered an endangered subspecies in China, with an estimated population of 170 left in the wild. Two morphological types have been identified, so-called Qilian and Kunlun. Distribution and habitat Wild yaks once ranged up to southern Siberia to the east of Lake Baikal, with fossil remains of them being recovered from Denisova Cave, but became extinct in Russia around the 17th century. Today, wild yaks are found primarily in northern Tibet and western Qinghai, with some populations extending into the southernmost parts of Xinjiang, and into Ladakh in India. Small, isolated populations of wild yak are also found farther afield, primarily in western Tibet and eastern Qinghai. In historic times, wild yaks were also found in Bhutan, but they are now considered extinct there. The primary habitat of wild yaks consists of treeless uplands between , dominated by mountains and plateaus. They are most commonly found in alpine tundra with a relatively thick carpet of grasses and sedges rather than the more barren steppe country. The wild yak was thought to be regionally extinct in Nepal in the 1970s, but was rediscovered in Humla in 2014. This discovery later made the species to be painted on Nepal's currency. Behaviour and ecology The diet of wild yaks consists largely of grasses and sedges, such as Carex, Stipa, and Kobresia. They also eat a smaller amount of herbs, winterfat shrubs, and mosses, and have even been reported to eat lichen. Historically, the main natural predator of the wild yak has been the Himalayan wolf, but Himalayan black bears, Himalayan brown bears and snow leopards have also been reported as predators in some areas, likely of young or infirm wild yaks. 
Thubten Jigme Norbu, the elder brother of the 14th Dalai Lama, reported on his journey from Kumbum in Amdo to Lhasa in 1950: Wild yaks are herd animals. Herds can contain several hundred individuals, although many are much smaller. Herds consist primarily of females and their young, with a smaller number of adult males. On average female yaks graze 100m higher than males. Females with young tend to choose grazing ground on high, steep slopes. The remaining males are either solitary, or found in much smaller groups, averaging around six individuals. Groups move into lower altitude ranges during the winter. Although wild yaks can become aggressive when defending young, or during the rut, they generally avoid humans, and may flee for great distances if approached. Reproduction Wild yaks mate in summer and give birth to a single calf the following spring. Females typically only give birth every other year. Conservation The wild yak is currently listed as Vulnerable on the IUCN Red List. It was previously classified as Endangered, but was downlisted in 1996 based on the estimated rate of population decline and current population sizes. The latest assessment in 2008 suggested a total population of no more than 10,000 mature individuals. The wild yak is experiencing threats applied by several sources. Poaching, including commercial poaching, has remained the most serious threat; males are particularly affected because of their more solitary habits. Disturbance by and interbreeding with livestock herds is also common. This may include the transmission of cattle-borne diseases, although no direct evidence of this has yet been found. Conflicts with herders themselves, as in preventive and retaliatory killings for abduction of domestic yaks by wild herds, also occur but appear to be relatively rare. Recent protection from poaching particularly appears to have stabilized or even increased population sizes in several areas, leading to the IUCN downlisting in 2008. In both China and India, the species is officially protected; in China it is present in a number of large nature reserves. Impact on humans The wild yak is a reservoir for zoonotic diseases of both bacterial and viral origins. Such bacterial diseases include anthrax, botulism, tetanus, and tuberculosis.
Biology and health sciences
Bovidae
Animals
36194985
https://en.wikipedia.org/wiki/Julida
Julida
Julida is an order of millipedes. Members are mostly small and cylindrical, typically ranging from in length. Eyes may be present or absent, and in mature males of many species, the first pair of legs is modified into hook-like structures. Additionally, both pairs of legs on the 7th body segment of males are modified into gonopods. Distribution Julida contains predominantly temperate species ranging from North America to Panama, Europe, Asia north of the Himalayas, Asir region, Saudi Arabia, and Southeast Asia. Classification The order Julida contains approximately 750 species, divided into the following superfamilies and families: Blaniuloidea C. L. Koch, 1847 Blaniulidae C. L. Koch, 1847 Galliobatidae Brolemann, 1921 Okeanobatidae Verhoeff, 1942 Zosteractinidae Loomis, 1943 Juloidea Leach, 1814 Julidae Leach, 1814 Rhopaloiulidae Attems, 1926 Trichoblaniulidae Verhoeff, 1911 Trichonemasomatidae Enghoff, 1991 Nemasomatoidea Bollman, 1893 Chelojulidae Enghoff, 1991 Nemasomatidae Bollman, 1893 Pseudonemasomatidae Enghoff, 1991 Telsonemasomatidae Enghoff, 1991 Paeromopodoidea Cook, 1895 Aprosphylosomatidae Hoffman, 1961 Paeromopodidae Cook, 1895 Parajuloidea Bollman, 1893 Mongoliulidae Pocock, 1903 Parajulidae Bollman, 1893
Biology and health sciences
Myriapoda
Animals
29589389
https://en.wikipedia.org/wiki/Tityus%20serrulatus
Tityus serrulatus
Tityus serrulatus, the Brazilian yellow scorpion, is a species of scorpion of the family Buthidae. It is native to Brazil, and its venom is extremely toxic. It is the most dangerous scorpion in South America and is responsible for the most fatal cases. Description Adult specimens typically measure between 5–7 cm (2–3 in) in length. As suggested by its common name, coloration consists of pale-yellow legs (8 in total) and pedipalps, with a darker shade of yellowish brown on the trunk, fingers, and tip of the tail. Like other members of the family Buthidae, T. serrulatus has a bulbous tail, often carried in a characteristic forward curve over the back, which is segmented, with prominent ridges and serrations. The tail is tipped with a venom-injecting barb capable of immobilizing prey or delivering defensive strikes. Geographic range The species is endemic to Brazil and widely found throughout the country, including the states of Alagoas, Bahia, Ceará, Espírito Santo, Goiás, Mato Grosso, Mato Grosso do Sul, Minas Gerais, Paraná, Pernambuco, Rio de Janeiro, Rio Grande do Norte, Rio Grande do Sul, Rondônia, Santa Catarina, São Paulo, Sergipe, and Distrito Federal. "Due to deforestation and growing urbanization, this species is becoming more and more present," according to Rogério Bertani in an interview with the British newspaper The Guardian. He is a scientist and scorpion specialist at the Butantan Institute in São Paulo. "I personally think that the problem will continue to grow." By 2018 there was a notable increase in the number of T. serrulatus scorpions living in the urban spaces of São Paulo, contributing to an increase in reported scorpion stings in Brazil from 12,000 in 2000 to 140,000 by 2018. An abundance of prey, notably cockroaches, and shelter along with a lack of predators is believed to be a cause of the increase in scorpion numbers in Brazilian cities. Feeding It has a diet of insects, such as cockroaches, and is suited to life in sewers and trash heaps in urban areas. Having a low metabolic rate, it can survive for months without eating. Reproduction The species is usually parthenogenetic. Venom Potency In Brazil, scorpions are credited with causing the highest incidence of human envenomations of all venomous animals. They cause more than all other venomous animals, including snakes and spiders, combined. With mortality rates ranging from 1.0 to 2.0% among children and elderly persons, T. serrulatus is responsible for more medically significant accidents than any other scorpion in the country. Most stings occur in urban areas, inside or near homes, with greater frequency in the south and southeast during the warm and rainy months, but with little or no seasonal variability in the north, northeast, and center-west. Effects In mild cases, localized pain is the primary symptom. Tityus serrulatus venom contains TsIV, which slows the inactivation of sodium channels in muscles and nerve cells. Tityus serrulatus has an excitatory neurotoxin that attacks the autonomic nervous system, causing the release of adrenaline, noradrenaline and acetylcholine, causing an immense variety of symptoms in the victims; clinical effects may include hyperglycemia, fever, priapism, agitation, hypersalivation, tachycardia, hypertension, mydriasis, sweating, hyperthermia, tremors, gastrointestinal complications (diarrhea, abdominal pain, nausea, vomiting) and pancreatitis. Convulsions and coma are relatively rare, but can occur. Death usually results from pulmonary edema and cardiorespiratory failure. 
Deaths can occur between 1–6 hours, or 12–14 hours, depending on the age group, the person's state of health and the quantity of injected venom. The venom of this species seems to have different lethalities according to its distribution, T. serrulatus from Distrito Federal has an LD50 of 51.6 μg/kg, compared to LD50 from T. serrulatus from Minas Gerais, 26 μg/kg. According to a nationwide epidemiological study of scorpion accidents that was conducted from 2000 to 2012, there were 482,616 accidents and 728 deaths reported in Brazil during that period. All of the fatal cases were attributed to the genus Tityus, and T. serrulatus, in particular, was believed to be responsible for the vast majority of scorpion-related deaths considered by the study.
Biology and health sciences
Scorpions
Animals
23453065
https://en.wikipedia.org/wiki/D%C3%BCrer%20graph
Dürer graph
In the mathematical field of graph theory, the Dürer graph is an undirected graph with 12 vertices and 18 edges. It is named after Albrecht Dürer, whose 1514 engraving Melencolia I includes a depiction of Dürer's solid, a convex polyhedron having the Dürer graph as its skeleton. Dürer's solid is one of only four well-covered simple convex polyhedra. Dürer's solid Dürer's solid is combinatorially equivalent to a cube with two opposite vertices truncated, although Dürer's depiction of it is not in this form but rather as a truncated rhombohedron or truncated triangular trapezohedron. The exact geometry of the solid depicted by Dürer is a subject of some academic debate, with different hypothetical values for its acute angles ranging from 72° to 82°. Graph-theoretic properties The Dürer graph is the graph formed by the vertices and edges of the Dürer solid. It is a cubic graph of girth 3 and diameter 4. As well as its construction as the skeleton of Dürer's solid, it can be obtained by applying a Y-Δ transform to the opposite vertices of a cube graph, or as the generalized Petersen graph G(6,2). As with any graph of a convex polyhedron, the Dürer graph is a 3-vertex-connected simple planar graph. The Dürer graph is a well-covered graph, meaning that all of its maximal independent sets have the same number of vertices, four. It is one of four well-covered cubic polyhedral graphs and one of seven well-covered cubic graphs. The only other three well-covered simple convex polyhedra are the tetrahedron, triangular prism, and pentagonal prism. The Dürer graph is Hamiltonian, with LCF notation [-4,5,2,-4,-2,5;-]. More precisely, it has exactly six Hamiltonian cycles, each pair of which may be mapped into each other by a symmetry of the graph. Symmetries The automorphism group both of the Dürer graph and of the Dürer solid (in either the truncated cube form or the form shown by Dürer) is isomorphic to the dihedral group of order 12, denoted D6. Gallery
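The construction described above can be illustrated with a short Python sketch, assuming the networkx library is available. It builds the Dürer graph directly from its definition as the generalized Petersen graph G(6,2) and checks several of the properties stated in this article: 12 vertices, 18 edges, 3-regularity, diameter 4, and the two triangles that give it girth 3. The function name and vertex labels are arbitrary choices for this illustration.

import networkx as nx

def generalized_petersen(n, k):
    """Generalized Petersen graph G(n, k): an outer n-cycle u_0..u_{n-1},
    inner vertices v_0..v_{n-1} joined as v_i -- v_{i+k}, and spokes u_i -- v_i."""
    G = nx.Graph()
    for i in range(n):
        G.add_edge(("u", i), ("u", (i + 1) % n))  # outer cycle
        G.add_edge(("v", i), ("v", (i + k) % n))  # inner edges (two triangles when n=6, k=2)
        G.add_edge(("u", i), ("v", i))            # spokes
    return G

durer = generalized_petersen(6, 2)
assert durer.number_of_nodes() == 12
assert durer.number_of_edges() == 18
assert all(degree == 3 for _, degree in durer.degree())   # cubic
assert nx.diameter(durer) == 4
assert sum(nx.triangles(durer).values()) // 3 == 2        # exactly two triangles, so girth 3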
Mathematics
Graph theory
null
23454306
https://en.wikipedia.org/wiki/Tropical%20horticulture
Tropical horticulture
Tropical horticulture is a branch of horticulture that studies and cultivates plants in the tropics, i.e., the equatorial regions of the world. The field is sometimes known by the portmanteau "TropHort". Overview Tropical horticulture includes plants such as perennial woody plants (arboriculture), ornamentals (floriculture), vegetables (olericulture), and fruits (pomology) including grapes (viticulture). The origin of many of these crops is not in the tropics but in temperate zones. Their adaptation to tropical climatic conditions is an objective of breeding. Many important crops, however, are indigenous to the tropics. The latter include perennial crops such as oil palm, vegetables including okra, field crops such as rice and sugarcane, and particularly fruits including pineapple, banana, papaya, and mango. Since the tropics represent 36 percent of the Earth's surface and 20 percent of its land surface, the potential of tropical horticulture is huge. In contrast to temperate regions, environmental conditions in the tropics are defined less by seasonal temperature fluctuations and more by the seasonality of precipitation. Thus the climate in the greater part of the tropics is characterized by distinct wet and dry seasons, although such variation is reduced in locations closer to the equator (±5° latitude). Temperature conditions in the tropics are affected by elevation, so that contrasting warmer and colder climate areas can be distinguished, and highland areas in the tropics can consequently be more favourable for the production of temperate plant species than lowland areas. Types of plants Both vascular and non-vascular plants grow in tropical environments. Plants indigenous to the tropics are usually cold sensitive and adapted to receiving high levels of solar radiation. They are sensitive to small variations in photoperiod ("short day" plants), and can be adapted to extended drought, high precipitation and/or distinct wet and dry seasons. High night temperatures are a major hindrance to adapting temperate crops (e.g., tomatoes) to the tropical lowlands. Furthermore, such conditions promote high respiration rates in plants, resulting in comparatively lower net photosynthesis rates.
Technology
Horticulture
null
33563967
https://en.wikipedia.org/wiki/Block%20%28periodic%20table%29
Block (periodic table)
A block of the periodic table is a set of elements unified by the atomic orbitals their valence electrons or vacancies lie in. The term seems to have been first used by Charles Janet. Each block is named after its characteristic orbital: s-block, p-block, d-block, f-block and g-block. The block names (s, p, d, and f) are derived from the spectroscopic notation for the value of an electron's azimuthal quantum number: sharp (0), principal (1), diffuse (2), and fundamental (3). Succeeding notations proceed in alphabetical order, as g, h, etc., though elements that would belong in such blocks have not yet been found. Characteristics There is an approximate correspondence between this nomenclature of blocks, based on electronic configuration, and sets of elements based on chemical properties. The s-block and p-block together are usually considered main-group elements, the d-block corresponds to the transition metals, and the f-block corresponds to the inner transition metals and encompasses nearly all of the lanthanides (like lanthanum, praseodymium and dysprosium) and the actinides (like actinium, uranium and einsteinium). The group 12 elements zinc, cadmium, and mercury are sometimes regarded as main group, rather than transition group, because they are chemically and physically more similar to the p-block elements than the other d-block elements. The group 3 elements are occasionally considered main group elements due to their similarities to the s-block elements. However, they remain d-block elements even when considered to be main group. Groups (columns) in the f-block (between groups 2 and 3) are not numbered. Helium is an s-block element, with its outer (and only) electrons in the 1s atomic orbital, although its chemical properties are more similar to the p-block noble gases in group 18 due to its full shell. s-block The s-block, with the s standing for "sharp" and azimuthal quantum number 0, is on the left side of the conventional periodic table and is composed of elements from the first two columns plus one element in the rightmost column, the nonmetals hydrogen and helium and the alkali metals (in group 1) and alkaline earth metals (group 2). Their general valence configuration is ns1–2. Helium is an s-element, but nearly always finds its place to the far right in group 18, above the p-element neon. Each row of the table has two s-elements. The metals of the s-block (from the second period onwards) are mostly soft and have generally low melting and boiling points. Most impart colour to a flame. Chemically, all s-elements except helium are highly reactive. Metals of the s-block are highly electropositive and often form essentially ionic compounds with nonmetals, especially with the highly electronegative halogen nonmetals. p-block The p-block, with the p standing for "principal" and azimuthal quantum number 1, is on the right side of the standard periodic table and encompasses elements in groups 13 to 18. Their general electronic configuration is ns2 np1–6. Helium, though being the first element in group 18, is not included in the p-block. Each row of the table has a place for six p-elements except for the first row (which has none). This block is the only one having all three types of elements: metals, nonmetals, and metalloids. 
The p-block elements can be described on a group-by-group basis as: group 13, the icosagens; 14, the crystallogens; 15, the pnictogens; 16, the chalcogens; 17, the halogens; and 18, the helium group, composed of the noble gases (excluding helium) and oganesson. Alternatively, the p-block can be described as containing post-transition metals; metalloids; reactive nonmetals including the halogens; and noble gases (excluding helium). The p-block elements are unified by the fact that their valence (outermost) electrons are in the p orbital. The p orbital consists of six lobed shapes coming from a central point at evenly spaced angles. The p orbital can hold a maximum of six electrons, hence there are six columns in the p-block. Elements in column 13, the first column of the p-block, have one p-orbital electron. Elements in column 14, the second column of the p-block, have two p-orbital electrons. The trend continues this way until column 18, which has six p-orbital electrons. The block is a stronghold of the octet rule in its first row, but elements in subsequent rows often display hypervalence. The p-block elements show variable oxidation states usually differing by multiples of two. The reactivity of elements in a group generally decreases downwards. (Helium breaks this trend in group 18 by being more reactive than neon, but since helium is actually an s-block element, the p-block portion of the trend remains intact.) The bonding between metals and nonmetals depends on the electronegativity difference. Ionicity is possible when the electronegativity difference is high enough (e.g. Li3N, NaCl, PbO). Metals in relatively high oxidation states tend to form covalent structures (e.g. WF6, OsO4, TiCl4, AlCl3), as do the more noble metals even in low oxidation states (e.g. AuCl, HgCl2). There are also some metal oxides displaying electrical (metallic) conductivity, like RuO2, ReO3, and IrO2. The metalloids tend to form either covalent compounds or alloys with metals, though even then ionicity is possible with the most electropositive metals (e.g. Mg2Si). d-block The d-block, with the d standing for "diffuse" and azimuthal quantum number 2, is in the middle of the periodic table and encompasses elements from groups 3 to 12; it starts in the 4th period. Periods from the fourth onwards have a space for ten d-block elements. Most or all of these elements are also known as transition metals because they occupy a transitional zone in properties, between the strongly electropositive metals of groups 1 and 2, and the weakly electropositive metals of groups 13 to 16. Group 3 or group 12, while still counted as d-block metals, are sometimes not counted as transition metals because they do not show the chemical properties characteristic of transition metals as much, for example, multiple oxidation states and coloured compounds. The d-block elements are all metals and most have one or more chemically active d-orbital electrons. Because there is a relatively small difference in the energy of the different d-orbital electrons, the number of electrons participating in chemical bonding can vary. The d-block elements have a tendency to exhibit two or more oxidation states, differing by multiples of one. The most common oxidation states are +2 and +3. Chromium, iron, molybdenum, ruthenium, tungsten, and osmium can have formal oxidation numbers as low as −4; iridium holds the singular distinction of being capable of achieving an oxidation state of +9, though only under far-from-standard conditions. 
The d-orbitals (four shaped as four-leaf clovers, and the fifth as a dumbbell with a ring around it) can contain up to five pairs of electrons. f-block The f-block, with the f standing for "fundamental" and azimuthal quantum number 3, appears as a footnote in a standard 18-column table but is located at the center-left of a 32-column full-width table, between groups 2 and 3. Periods from the sixth onwards have a place for fourteen f-block elements. These elements are generally not considered part of any group. They are sometimes called inner transition metals because they provide a transition between the s-block and d-block in the 6th and 7th row (period), in the same way that the d-block transition metals provide a transitional bridge between the s-block and p-block in the 4th and 5th rows. The f-block elements come in two series: lanthanum through ytterbium in period 6, and actinium through nobelium in period 7. All are metals. The f-orbital electrons are less active in the chemistry of the period 6 f-block elements, although they do make some contribution; these are rather similar to each other. They are more active in the early period 7 f-block elements, where the energies of the 5f, 7s, and 6d shells are quite similar; consequently these elements tend to show as much chemical variability as their transition metals analogues. The later period 7 f-block elements from about curium onwards behave more like their period 6 counterparts. The f-block elements are unified by mostly having one or more electrons in an inner f-orbital. Of the f-orbitals, six have six lobes each, and the seventh looks like a dumbbell with a donut with two rings. They can contain up to seven pairs of electrons; hence, the block occupies fourteen columns in the periodic table. They are not assigned group numbers, since vertical periodic trends cannot be discerned in a "group" of two elements. The two 14-member rows of the f-block elements are sometimes confused with the lanthanides and the actinides, which are names for sets of elements based on chemical properties more so than electron configurations. Those sets have 15 elements rather than 14, extending into the first members of the d-block in their periods, lutetium and lawrencium respectively. In many periodic tables, the f-block is shifted one element to the right, so that lanthanum and actinium become d-block elements, and Ce–Lu and Th–Lr form the f-block tearing the d-block into two very uneven portions. This is a holdover from early erroneous measurements of electron configurations, in which the 4f shell was thought to complete its filling only at lutetium. In fact ytterbium completes the 4f shell, and on this basis Lev Landau and Evgeny Lifshitz considered in 1948 that lutetium cannot correctly be considered an f-block element. Since then, physical, chemical, and electronic evidence has overwhelmingly supported that the f-block contains the elements La–Yb and Ac–No, as shown here and as supported by International Union of Pure and Applied Chemistry reports dating from 1988 and 2021. g-block A g-block, with azimuthal quantum number 4, is predicted to begin in the vicinity of element 121. Though g-orbitals are not expected to start filling in the ground state until around element 124–126 (see extended periodic table), they are likely already low enough in energy to start participating chemically in element 121, similar to the situation of the 4f and 5f orbitals. If the trend of the previous rows continued, then the g-block would have eighteen elements. 
However, calculations predict a very strong blurring of periodicity in the eighth period, to the point that individual blocks become hard to delineate. It is likely that the eighth period will not quite follow the trend of previous rows.
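The block widths described above all follow from the capacity of the corresponding subshell, 2(2l + 1) electrons for azimuthal quantum number l. The short Kotlin sketch below is purely illustrative (it is not part of the source article) and simply restates that arithmetic for the s-, p-, d-, f-, and predicted g-blocks:

// Illustrative only: the column count of each block equals the subshell capacity 2(2l + 1).
fun blockWidth(l: Int): Int = 2 * (2 * l + 1)

fun main() {
    val blocks = listOf("s" to 0, "p" to 1, "d" to 2, "f" to 3, "g" to 4)
    for ((name, l) in blocks) {
        // Prints 2, 6, 10, 14, and 18 columns for the s-, p-, d-, f-, and g-blocks respectively.
        println("$name-block: ${blockWidth(l)} columns")
    }
}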
Physical sciences
Periodic table
Chemistry
33565234
https://en.wikipedia.org/wiki/Mediterranean%20forests%2C%20woodlands%2C%20and%20scrub
Mediterranean forests, woodlands, and scrub
Mediterranean forests, woodlands and scrub is a biome defined by the World Wide Fund for Nature. The biome is generally characterized by dry summers and rainy winters, although in some areas rainfall may be uniform. Summers are typically hot in low-lying inland locations but can be cool near colder seas. Winters are typically mild to cool in low-lying locations but can be cold in inland and higher locations. All these ecoregions are highly distinctive, collectively harboring 10% of the Earth's plant species. Distribution The Mediterranean forests, woodlands, and scrub biome mostly occurs in, but is not limited to, the Mediterranean climate zones, in the mid-latitudes: the Mediterranean Basin; the Chilean Matorral; the California chaparral and woodlands ecoregion of California and the Baja California Peninsula; the Western Cape of South Africa; and southwestern and southern Australia. The biome is not limited to the Mediterranean climate zone. It can also be present in other climate zones (which typically border the Mediterranean climate zone), such as the drier regions of the oceanic and humid subtropical climates, as well as the lusher areas of the semi-arid climate zone. Non-Mediterranean climate regions that would feature Mediterranean vegetation include the Nile River Valley in Egypt (extending upstream along the riverbanks), parts of the Eastern Cape in South Africa, southeastern Australia, southeastern Azerbaijan, southeastern Turkey, far northern Iraq, the Mazandaran Province in Iran, Central Italy, parts of the Balkans (including Northern Greece), as well as Northern and Western Jordan. Vegetation Vegetation types range from forests to woodlands, savannas, shrublands, and grasslands; "mosaic habitat" landscapes are common, where differing vegetation types are interleaved with one another in complex patterns created by variations in soil, topography, exposure to wind and sun, and fire history. Much of the woody vegetation in Mediterranean-climate regions is sclerophyll, which means 'hard-leaved' in Greek. Sclerophyllous vegetation generally has small, dark leaves covered with a waxy outer layer to retain moisture in the dry summer months. Phytogeographers consider the fynbos (South Africa) a separate floral kingdom because 68% of the 8,600 vascular plant species crowded into it are endemic and highly distinctive at several taxonomic levels. This is equivalent to about 40% of the plant species of the United States and Canada combined, found within an area the size of the state of Maine. The fynbos and Southwest Australia shrublands have flora that are significantly more diverse than the other ecoregions, although any Mediterranean shrubland is still rich in species and endemics relative to other non-forest ecoregions. Biome plant groups Major plant communities in this biome include: Forest: Mediterranean forests are generally composed of broadleaf trees, such as the oak and mixed sclerophyll forests of California and the Mediterranean region, the Eucalyptus forests of Southwest Australia, and the Nothofagus forests of central Chile. Forests are often found in riparian areas, where they receive more summer water. Coniferous forests also occur, especially around the Mediterranean. Pine and deciduous oak forests are widespread across California. Pinus halepensis, lentisk (Pistacia lentiscus), kermes oak (Quercus coccifera) and Chamaerops are found across Spanish Mediterranean forests. Woodland: Oak woodlands are characteristic of both the Mediterranean Basin and California.
Pine woodlands are also present in the Mediterranean Basin. California additionally has walnut woodlands. Savanna and grassland: The California Central Valley grasslands are the largest Mediterranean grassland eco-region, although these grasslands have mostly been converted to agriculture. The remaining woodlands feature mainly oak, walnut and pine. The cork oak savanna in Portugal, known as montado, is a good example of a Mediterranean savanna. Shrubland: Shrublands are dense thickets of evergreen sclerophyll shrubs and small trees. They are most common near the seacoast, and are often adapted to wind and salt air from the ocean. They are called chaparral in California and southern Portugal, matorral in Chile and southern Spain, garrigue or maquis in France, macchia or gariga in Italy, phrygana in Greece, tomillares in Spain, fynbos, renosterveld, Succulent Karoo, and strandveld in South Africa, kwongan in Southwest Australia, and batha in Israel. Northern coastal scrub and coastal sage scrub, also known as soft chaparral, occur near the California coast. In some places shrublands are the mature vegetation type, and in other places they are the result of degradation of former forest or woodland by logging or overgrazing, or disturbance by major fires. Fire as a medium of change Fire, both natural and human-caused, has played a large role in shaping the ecology of Mediterranean ecoregions. The hot, dry summers make much of the region prone to fires, and lightning-caused fires occur with some frequency. Many of the plants are pyrophytes, or fire-loving, adapted to, or even dependent on, fire for reproduction, recycling of nutrients, and the removal of dead or senescent vegetation. In both the Australian and Californian Mediterranean-climate eco-regions, native peoples used fire extensively to clear brush and trees, making way for the grasses and herbaceous vegetation that supported game animals and useful plants. The plant communities in these areas adapted to the frequent human-caused fires, and pyrophyte species grew more common and more fire-loving, while plants that were poorly adapted to fire retreated. After European colonization of these regions, fires were suppressed, which has caused some unintended consequences in these ecoregions; fuel builds up, so that when fires do come they are much more devastating, and some species dependent on fire for their reproduction are now threatened. The European shrublands have also been shaped by anthropogenic fire, historically associated with transhumance herding of sheep and goats. Though adapted to infrequent fires, chaparral plant communities can be eliminated by frequent fires. A high fire frequency (a return interval of less than ten years) will result in the loss of obligate seeding shrub species such as Manzanita spp. Such frequent fires prevent seeder plants from reaching reproductive size before the next fire, and the community shifts to sprouter dominance. If high-frequency fires continue over time, obligate resprouting shrub species can also be eliminated by exhausting their energy reserves below-ground. Today, frequent accidental ignitions can convert chaparral from a native shrubland to non-native annual grassland and drastically reduce species diversity, especially under drought brought about by climate change.
On 25 July 2023, devastating wildfires were burning in at least nine countries across the Mediterranean, including Croatia, Italy, and Portugal, with thousands of firefighters in Europe and North Africa working to contain flames stoked by high temperatures, dry conditions, and strong winds. The wildfires led to casualties, evacuations of thousands of people, and widespread destruction of homes and forests. Degradation Mediterranean ecoregions are some of the most endangered and vulnerable on the planet. Many have suffered tremendous degradation and habitat loss through logging, overgrazing, conversion to agriculture, urbanization, fire suppression, and introduction of exotic and invasive species. The ecoregions around the Mediterranean basin and in California have been particularly affected by degradation due to human activity, suffering extensive loss of forests and soil erosion, and many native plants and animals have become extinct or endangered.
Physical sciences
Biomes: General
Earth science
46438704
https://en.wikipedia.org/wiki/Inoculation
Inoculation
Inoculation is the act of implanting a pathogen or other microbe or virus into a person or other organism. It is a method of artificially inducing immunity against various infectious diseases. The term "inoculation" is also used more generally to refer to intentionally depositing microbes into any growth medium, as into a Petri dish used to culture the microbe, or into food ingredients for making cultured foods such as yoghurt and fermented beverages such as beer and wine. This article is primarily about the use of inoculation for producing immunity against infection. Inoculation has been used to eradicate smallpox and to markedly reduce other infectious diseases such as polio. Although the terms "inoculation", "vaccination", and "immunization" are often used interchangeably, there are important differences. Inoculation is the act of implanting a pathogen or microbe into a person or other recipient; vaccination is the act of implanting or giving someone a vaccine specifically; and immunization is the development of disease resistance that results from the immune system's response to a vaccine or natural infection. Terminology Until the early 1800s inoculation referred only to variolation (from the Latin word variola = smallpox), the predecessor to the smallpox vaccine. The smallpox vaccine, introduced by Edward Jenner in 1796, was called cowpox inoculation or vaccine inoculation (from Latin vacca = cow). Smallpox inoculation continued to be called variolation, whereas cowpox inoculation was called vaccination (from Jenner's term variolae vaccinae = smallpox of the cow). Louis Pasteur proposed in 1861 to extend the terms vaccine and vaccination to include the new protective procedures being developed. Immunization refers to the use of vaccines as well as the use of antitoxin, which contains pre-formed antibodies such as to diphtheria or tetanus exotoxins. In nontechnical usage inoculation is now more or less synonymous with protective injections and other methods of immunization. Inoculation also has a specific meaning for procedures done in vitro (in glass, i.e. not in a living body). These include the transfer of microorganisms into and from laboratory apparatus such as test tubes and petri dishes in research and diagnostic laboratories, and also in commercial applications such as brewing, baking, oenology (wine making), and the production of antibiotics. For example, blue cheese is made by inoculating the cheese with Penicillium roqueforti mold, and often certain bacteria. Etymology The term inoculate entered medical English through horticultural usage meaning to graft a bud from one plant into another. It derives from Latin in- 'in' + oculus 'eye' (and by metaphor, 'bud'). (The term innocuous is unrelated, as it derives from Latin in- 'not' + nocuus 'harmful'.) Origins Inoculation originated as a method for the prevention of smallpox by deliberate introduction of material from smallpox pustules from one person into the skin of another. The usual route of transmission of smallpox was through the air, invading the mucous membranes of the mouth, nose, or respiratory tract, before migrating throughout the body via the lymphatic system, resulting in an often severe disease. In contrast, infection of the skin usually led to a milder, localized infection but, crucially, still induced immunity to the virus. This first method for smallpox prevention, smallpox inoculation, is now also known as variolation. Inoculation has ancient origins, and the technique was known in India, Africa, and China.
China The earliest hints of the practice of inoculation for smallpox in China come during the 10th century. A Song dynasty (960–1279) chancellor of China, Wang Dan (957–1017), lost his eldest son to smallpox and sought a means to spare the rest of his family from the disease, so he summoned physicians, wise men, and magicians from all across the empire to convene at the capital in Kaifeng and share ideas on how to cure patients of it until an allegedly divine man from Mount Emei carried out inoculation. However, the sinologist Joseph Needham states that this information comes from the Zhongdou xinfa (種痘心法) written in 1808 by Zhu Yiliang, centuries after the alleged events. The first clear and credible reference to smallpox inoculation in China comes from Wan Quan's (1499–1582) Douzhen Xinfa (痘疹心法) of 1549, which states that some women unexpectedly menstruate during the procedure, yet his text did not give details on techniques of inoculation. Inoculation was first vividly described by Yu Chang in his book Yuyi cao (寓意草), or
Biology and health sciences
Concepts
Health
28203358
https://en.wikipedia.org/wiki/Martian%20polar%20ice%20caps
Martian polar ice caps
The planet Mars has two permanent polar ice caps of water ice and some dry ice (frozen carbon dioxide, CO2). Above kilometer-thick layers of water-ice permafrost, slabs of dry ice are deposited during a pole's winter, while it lies in continuous darkness, causing 25–30% of the atmosphere to be deposited annually at whichever pole is in winter. When the poles are again exposed to sunlight, the frozen CO2 sublimes. These seasonal actions transport large amounts of dust and water vapor, giving rise to Earth-like frost and large cirrus clouds. The caps at both poles consist primarily of water ice. Frozen carbon dioxide accumulates as a comparatively thin layer about one metre thick on the north cap in the northern winter, while the south cap has a permanent dry ice cover about 8 m thick. The northern polar cap has a diameter of about 1000 km during the northern Mars summer, and contains about 1.6 million cubic km of ice, which if spread evenly on the cap would be 2 km thick. (This compares to a volume of 2.85 million cubic km (km3) for the Greenland ice sheet.) The southern polar cap has a diameter of 350 km and a thickness of 3 km. The total volume of ice in the south polar cap plus the adjacent layered deposits has also been estimated at 1.6 million cubic km. Both polar caps show spiral troughs, which analysis of SHARAD ice penetrating radar has shown are a result of roughly perpendicular katabatic winds that spiral due to the Coriolis effect. The seasonal frosting of some areas near the southern ice cap results in the formation of transparent 1 m thick slabs of dry ice above the ground. With the arrival of spring, sunlight warms the subsurface and pressure from subliming CO2 builds up under a slab, elevating and ultimately rupturing it. This leads to geyser-like eruptions of CO2 gas mixed with dark basaltic sand or dust. This process is rapid, observed happening in the space of a few days, weeks or months, a rate of change rather unusual in geology—especially for Mars. The gas rushing underneath a slab to the site of a geyser carves a spider-like pattern of radial channels under the ice. In 2018, Italian scientists reported that measurements of radar reflections may show a subglacial lake on Mars, below the surface of the southern polar layered deposits (not under the visible permanent ice cap), and about across; if confirmed, this would be the first known stable body of water on the planet. However, the radar reflections may show solid minerals or saline ice instead of liquid water. Shared features Freezing of atmosphere Research based on slight changes in the orbits of spacecraft around Mars over 16 years found that each winter, approximately 3 trillion to 4 trillion tons of carbon dioxide freezes out of the atmosphere onto the winter hemisphere polar cap. This represents 12 to 16 percent of the mass of the entire Martian atmosphere. These observations support predictions from the Mars Global Reference Atmospheric Model—2010. Layers Both polar caps show layered features, called polar-layered deposits, that result from seasonal ablation and accumulation of ice together with dust from Martian dust storms. Information about the past climate of Mars may be eventually revealed in these layers, just as tree ring patterns and ice cores reveal past climate on Earth. Both polar caps also display grooved features, probably caused by wind flow patterns. The grooves are also influenced by the amount of dust. The more dust, the darker the surface. The darker the surface, the more melting. Dark surfaces absorb more light energy.
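As a rough cross-check of two of the figures quoted above (an illustration, not a calculation from the cited research), the seasonal CO2 mass can be divided by the quoted 12–16% fraction to recover the implied total atmospheric mass, and the northern cap volume can be divided by the area of a disc 1000 km across to recover the quoted mean thickness of about 2 km. The Kotlin sketch below only restates this arithmetic; treating the cap as a flat disc is an assumption of the sketch, not of the source.

import kotlin.math.PI

fun main() {
    // Seasonal CO2 deposition: 3–4 trillion metric tons is stated to be 12–16% of the atmosphere.
    val impliedAtmosphereLow = 3.0e12 / 0.16   // ≈ 1.9e13 t
    val impliedAtmosphereHigh = 4.0e12 / 0.12  // ≈ 3.3e13 t
    println("Implied atmospheric mass: $impliedAtmosphereLow to $impliedAtmosphereHigh t")

    // Northern cap: 1.6 million km^3 of ice spread over a disc 1000 km in diameter (approximation).
    val capAreaKm2 = PI * 500.0 * 500.0        // ≈ 7.9e5 km^2
    val meanThicknessKm = 1.6e6 / capAreaKm2   // ≈ 2.0 km, matching the "2 km thick" figure
    println("Implied mean cap thickness: $meanThicknessKm km")
}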
There are other theories that attempt to explain the large grooves. China's Zhurong rover that has studied the Utopia Planitia region of Mars has found dunes that lie in different directions. The bright barchans and dark longitudinal dunes is evidence that the predominant wind field underwent a roughly 70° change. The researchers believe the dunes were formed when the tilt changed and caused a shift in the winds. At about the same time, there are changes in the layers in the Martian northern ice caps. Deuterium enrichment Deuterium is a heavier isotope of hydrogen compared to the element's most common isotope, protium. This makes any celestial body's deuterium statistically much less prone to being carried into space by stellar wind compared to its protium. Evidence that Mars once had enough water to create a global ocean at least 137 m deep has been obtained from measurement of the HDO to H2O ratio over the north polar cap. In March 2015, a team of scientists published results showing that the polar cap ice is about eight times as enriched with deuterium as water in Earth's oceans. This means that Mars has lost a volume of water 6.5 times as large as that stored in today's polar caps. The water for a time may have formed an ocean in the low-lying Vastitas Borealis and adjacent lowlands (Acidalia, Arcadia and Utopia planitiae). Had the water ever all been liquid and on the surface, it would have covered 20% of the planet and in places would have been almost a mile deep. This international team used ESO's Very Large Telescope, along with instruments at the W. M. Keck Observatory and the NASA Infrared Telescope Facility, to map out different isotopic forms of water in Mars's atmosphere over a six-year period. North polar cap The bulk of the northern ice cap consists of water ice; it also has a thin seasonal veneer of dry ice, solid carbon dioxide. Each winter the ice cap grows by adding 1.5 to 2 m of dry ice. In summer, the dry ice sublimates (goes directly from a solid to a gas) into the atmosphere. Mars has seasons that are similar to Earth's, because its rotational axis has a tilt close to our own Earth's (25.19° for Mars, 23.44° for Earth). During each year on Mars as much as a third of Mars' thin carbon dioxide (CO2) atmosphere "freezes out" during the winter in the northern and southern hemispheres. Scientists have even measured tiny changes in the gravity field of Mars due to the movement of carbon dioxide. The ice cap in the north is of a lower altitude (base at −5000 m, top at −2000 m) than the one in the south (base at 1000 m, top at 3500 m). It is also warmer, so all the frozen carbon dioxide disappears each summer. The part of the cap that survives the summer is called the north residual cap and is made of water ice. This water ice is believed to be as much as three kilometers thick. The much thinner seasonal cap starts to form in the late summer to early fall when a variety of clouds form. Called the polar hood, the clouds drop precipitation which thickens the cap. The north polar cap is symmetrical around the pole and covers the surface down to about 60 degrees latitude. High resolution images taken with NASA's Mars Global Surveyor show that the northern polar cap is covered mainly by pits, cracks, small bumps and knobs that give it a cottage cheese look. The pits are spaced close together relative to the very different depressions in the south polar cap. 
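The "lost 6.5 times the present polar volume" figure above rests on an isotopic mass balance. The sketch below is a deliberately oversimplified illustration of the idea, assuming deuterium is fully retained while ordinary hydrogen escapes and taking Earth's ocean ratio as the starting value; it is not the published calculation, which treats escape fractionation and the assumed primordial D/H ratio more carefully and arrives at the 6.5 figure.

fun main() {
    // If deuterium were fully retained while ordinary hydrogen escaped, conserving D would give
    //   originalVolume * originalRatio ≈ currentVolume * currentRatio,
    // so the original reservoir scales with the measured enrichment.
    val enrichment = 8.0                    // polar ice D/H relative to Earth's oceans (from the article)
    val originalOverCurrent = enrichment    // crude estimate of originalVolume / currentVolume
    val lostOverCurrent = originalOverCurrent - 1.0
    println("Crude estimate: Mars lost about $lostOverCurrent times its present polar water")
    // Prints 7.0; the published estimate, with a more careful treatment, is about 6.5.
}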
Both polar caps show layered features that result from seasonal melting and deposition of ice together with dust from Martian dust storms. These polar layered deposits lie under the permanent polar caps. Information about the past climate of Mars may be eventually revealed in these layers, just as tree ring patterns and ice core data do on Earth. Both polar caps also display grooved features, probably caused by wind flow patterns and sun angles, although there are several theories that have been advanced. The grooves are also influenced by the amount of dust. The more dust, the darker the surface. The darker the surface, the more melting. Dark surfaces absorb more light energy. One large valley, Chasma Boreale runs halfway across the cap. It is about 100 km wide and up to 2 km deep—that's deeper than Earth's Grand Canyon. When the tilt or obliquity changes the size of the polar caps change. When the tilt is at its highest, the poles receive far more sunlight and for more hours each day. The extra sunlight causes the ice to melt, so much so that it could cover parts of the surface in 10 m of ice. Much evidence has been found for glaciers that probably formed when this tilt-induced climate change occurred. Research reported in 2009 shows that the ice rich layers of the ice cap match models for Martian climate swings. NASA's Mars Reconnaissance Orbiter's radar instrument can measure the contrast in electrical properties between layers. The pattern of reflectivity reveals the pattern of material variations within the layers. Radar produced a cross-sectional view of the north-polar layered deposits of Mars. High-reflectivity zones, with multiple contrasting layers, alternate with zones of lower reflectivity. Patterns of how these two types of zones alternate can be correlated to models of changes in the tilt of Mars. Since the top zone of the north-polar layered deposits—the most recently deposited portion—is strongly radar-reflective, the researchers propose that such sections of high-contrast layering correspond to periods of relatively small swings in the planet's tilt because the Martian axis has not varied much recently. Dustier layers appear to be deposited during periods when the atmosphere is dustier. Research, published in January 2010 using HiRISE images, says that understanding the layers is more complicated than was formerly believed. The brightness of the layers does not just depend on the amount of dust. The angle of the sun together with the angle of the spacecraft greatly affect the brightness seen by the camera. This angle depends on factors such as the shape of the trough wall and its orientation. Furthermore, the roughness of the surface can greatly change the albedo (amount of reflected light). In addition, many times what one is seeing is not a real layer, but a fresh covering of frost. All of these factors are influenced by the wind which can erode surfaces. The HiRISE camera did not reveal layers that were thinner than those seen by the Mars Global Surveyor. However, it did see more detail within layers. Radar measurements of the north polar ice cap found the volume of water ice in the layered deposits of the cap was , which is equal to 30% of the Earth's Greenland ice sheet. (The layered deposits overlie an additional basal deposit of ice.) The radar is on board the Mars Reconnaissance Orbiter. SHARAD radar data when combined to form a 3D model reveal buried craters. These may be used to date certain layers. In February 2017, ESA released a new view of Mars's North Pole. 
It was a mosaic made from 32 individual orbits of the Mars Express. In a paper published in Nature in 2023, researchers found an abrupt brightness increase in the northern ice cap layers that happened roughly 0.4 million years ago. This change may have caused changes in wind direction that are seen in regions explored by the Zhurong rover. South polar cap The south polar permanent cap is much smaller than the one in the north. It is 400 km in diameter, as compared to the 1100 km diameter of the northern cap. Each southern winter, the ice cap covers the surface to a latitude of 50°. Part of the ice cap consists of dry ice, solid carbon dioxide. Each winter the ice cap grows by adding 1.5 to 2 meters of dry ice from precipitation from a polar-hood of clouds. In summer, the dry ice sublimates (goes directly from a solid to a gas) into the atmosphere. During each year on Mars as much as a third of Mars' thin carbon dioxide (CO2) atmosphere "freezes out" during the winter in the northern and southern hemispheres. Scientists have even measured tiny changes in the gravity field of Mars due to the movement of carbon dioxide. In other words, the winter buildup of ice changes the gravity of the planet. Mars has seasons that are similar to Earth's because its rotational axis has a tilt close to our own Earth's (25.19° for Mars, 23.45° for Earth). The south polar cap is higher in altitude and colder than the one in the north. The residual southern ice cap is displaced; that is, it is not centered on the south pole. However, the south seasonal cap is centered near the geographic pole. Studies have shown that the off-center cap is caused by much more snow falling on one side than the other. On the western hemisphere side of the south pole a low pressure system forms because the winds are changed by the Hellas Basin. This system produces more snow. On the other side, there is less snow and more frost. Snow tends to reflect more sunlight in the summer, so not much melts or sublimates (Mars' climate causes snow to go directly from a solid to a gas). Frost, on the other hand, has a rougher surface and tends to trap more sunlight, resulting in more sublimation. In other words, areas with more of the rougher frost are warmer. Research published in April 2011 described a large deposit of frozen carbon dioxide near the south pole. Most of this deposit probably enters Mars' atmosphere when the planet's tilt increases. When this occurs, the atmosphere thickens, winds get stronger, and larger areas on the surface can support liquid water. Analysis of data showed that if these deposits were all changed into gas, the atmospheric pressure on Mars would double. There are three layers of these deposits; each is capped with a 30-meter layer of water ice that prevents the CO2 from sublimating into the atmosphere. In sublimation a solid material goes directly into a gas phase. These three layers are linked to periods when the atmosphere collapsed when the climate changed. A large field of eskers, called the Dorsa Argentea Formation, exists around the south pole; it is believed to be the remains of a giant ice sheet. This large polar ice sheet is believed to have covered about 1.5 million square kilometers. That area is twice the area of the state of Texas. In July 2018, ESA discovered indications of liquid salt water buried under layers of ice and dust by analyzing the reflection of radar pulses generated by Mars Express.
Swiss cheese appearance While the north polar cap of Mars has a flat, pitted surface resembling cottage cheese, the south polar cap has larger pits, troughs and flat mesas that give it a Swiss cheese appearance. The upper layer of the Martian south polar residual cap has been eroded into flat-topped mesas with circular depressions. Observations made by the Mars Orbiter Camera in 2001 showed that the scarps and pit walls of the south polar cap had retreated at an average rate of about since 1999. In other words, they were retreating 3 meters per Mars year. In some places on the cap, the scarps retreat less than 3 meters a Mars year, and in others they can retreat as much as per Martian year. Over time, south polar pits merge to become plains, mesas turn into buttes, and buttes vanish forever. The round shape is probably aided in its formation by the angle of the sun. In the summer, the sun moves around the sky, sometimes for 24 hours each day, just above the horizon. As a result, the walls of a round depression will receive more intense sunlight than the floor; the wall will melt far more than the floor. The walls melt and recede, while the floor remains the same. Later research with the powerful HiRISE showed that the pits are in a 1–10 meter thick layer of dry ice that is sitting on a much larger water ice cap. Pits have been observed to begin with small areas along faint fractures. The circular pits have steep walls that work to focus sunlight, thereby increasing erosion. For a pit to develop, a steep wall of about 10 cm and a length of over 5 meters is necessary. The pictures below show why it is said the surface resembles Swiss cheese; one can also observe the differences over a two-year period. Starburst channels or spiders Starburst channels are patterns of channels that radiate out into feathery extensions. They are caused by gas which escapes along with dust. The gas builds up beneath translucent ice as the temperature warms in the spring. Typically 500 meters wide and 1 meter deep, the spiders may undergo observable changes in just a few days. One model for understanding the formation of the spiders says that sunlight heats dust grains in the ice. The warm dust grains settle by melting through the ice while the holes are annealed behind them. As a result, the ice becomes fairly clear. Sunlight then reaches the dark bottom of the slab of ice and changes the solid carbon dioxide ice into a gas which flows toward higher regions that open to the surface. The gas rushes out carrying dark dust with it. Winds at the surface will blow the escaping gas and dust into dark fans that we observe with orbiting spacecraft. The physics of this model is similar to ideas put forth to explain dark plumes erupting from the surface of Triton. Research published in January 2010 using HiRISE images found that some of the channels in spiders grow larger as they go uphill since gas is doing the erosion. The researchers also found that the gas flows to a crack that has occurred at a weak point in the ice. As soon as the sun rises above the horizon, gas from the spiders blows out dust which is blown by wind to form a dark fan shape. Some of the dust gets trapped in the channels. Eventually frost covers all the fans and channels until the next spring when the cycle repeats. Layers Chasma Australe, a major valley, cuts across the layered deposits in the south polar cap. On the 90 E side, the deposits rest on a major basin, called Prometheus.
Some of the layers in the south pole also show polygonal fracturing in the form of rectangles. It is thought that the fractures were caused by the expansion and contraction of water ice below the surface. Gallery
Physical sciences
Solar System
Astronomy
28208195
https://en.wikipedia.org/wiki/Extended%20breastfeeding
Extended breastfeeding
In Western countries, extended breastfeeding usually means breastfeeding after the age of 12 to 24 months, depending on the culture. Breast milk is known to contain lactoferrin, which protects the infant from infection caused by a wide range of pathogens. The amount of lactoferrin in breast milk increases significantly between 12 and 24 months and remains elevated for as long as the infant continues to nurse. Research shows breastfed toddlers aged over 12 months have fewer illnesses and lower mortality rates. La Leche League writes that extended nursing provides comfort, security, and a way to calm down for the toddler, while the mother enjoys a feeling of closeness with her child. In most Western countries, extended breastfeeding is not a cultural norm and a person may face judgement, with some critics saying that extended nursing is harmful. However, the American Academy of Family Physicians states there is no evidence that extended breastfeeding is harmful to the parent or child. The American Academy of Pediatrics makes a similar claim, saying it finds "no evidence of psychologic or developmental harm from breastfeeding into the third year of life or longer." Recommendations The World Health Organization and UNICEF recommend that babies be breastfed for at least two years. The American Academy of Family Physicians (AAFP) states that "[h]ealth outcomes for mothers and babies are best when breastfeeding continues for at least two years and continues as long as mutually desired by the parent and child." The American Academy of Pediatrics (AAP) recommends "continuation of breastfeeding for 1 year or longer as mutually desired by mother and infant". The CDC reports that about 36% of babies are still nursing at 12 months, while about 15% are still doing so by 18 months. Most toddlers naturally wean sometime between the ages of 2 and 4. Health benefits Longitudinal research shows breastfed toddlers aged over 12 months have fewer illnesses and lower mortality rates. Breast milk is known to contain lactoferrin (Lf), which protects the infant from infection caused by a wide range of pathogens. The amount of Lf in breast milk is lactation-stage related. One study looked at Lf concentration in prolonged lactation from the first to the 48th month postpartum. It was found to be at the highest level in colostrum, dropped to the lowest level during 1–12 months of lactation, and then increased significantly during 13–24 months of lactation, close to the Lf concentration in colostrum. At over 24 months the level dropped, though not significantly. These elevated levels have been shown to support the antibodies of the child's immune system. Psychological effects In A Time to Wean, Katherine Dettwyler states that "Western, industrialized societies can compensate for some (but not all) of the immunological benefits of breastfeeding with antibiotics, vaccines and improved sanitation. But the physical, cognitive, and emotional needs of the young child persist." Many children who are breast-fed into their toddler years use the milk as a comforting, bonding moment with their mothers. The La Leche League writes: Toddlers breastfeed for many of the same reasons babies breastfeed: for nutrition, comfort, security, for a way to calm down and for reassurance. Mothers breastfeed their toddlers for many of the same reasons they breastfeed their babies: they recognize their children’s needs, they enjoy the closeness, they want to offer comfort, and they understand the health benefits.
While the personalized nutrients of a mother's breastmilk are beneficial to the child no matter how the milk is delivered (bottle or breast), being fed breastmilk through a bottle takes away some of the benefits of traditional breastfeeding. The physical contact that comes with traditional breastfeeding increases the release of oxytocin in both the mother's and the child's bloodstream. This hormone is frequently referred to as the "love hormone" and plays an important role in the development of trust and bonding within a relationship. On top of the emotional bonding that comes with breastfeeding, it has been found that children who are breastfed develop language, intellectual, and motor skills more quickly and easily than those who are not, and are less likely to contract a variety of viruses and diseases. Social acceptance In most Western countries, extended breastfeeding is not a cultural norm and a person may face judgement and shaming. The American Academy of Family Physicians states, "There is no evidence that extended breastfeeding is harmful to parent or child." The American Academy of Pediatrics makes a similar claim, saying it finds "no evidence of psychologic or developmental harm from breastfeeding into the third year of life or longer." Practice by country or region North America Elizabeth Baldwin says in Extended Breastfeeding and the Law, "Because our culture tends to view the breast as sexual, it can be hard for people to realize that breastfeeding is the natural way to nurture children." In Western countries such as the United States, Canada and the United Kingdom, extended breastfeeding is a taboo act. It is difficult to obtain accurate information and statistics about extended breastfeeding in these countries because of the mother's embarrassment. Mothers who nurse longer than the social norm sometimes hide their practices from all but very close family members and friends. This is called "closet nursing". In the United States, breastfeeding beyond 1 year is considered extended breastfeeding, and in contrast to WHO recommendations which recommend exclusive breastfeeding until six months, and "continued breastfeeding up to 2 years of age or beyond" [with the addition of complementary foods], the American Academy of Pediatrics stated in 1997 that "Breastfeeding should be continued for at least the first year of life and beyond for as long as mutually desired by mother and child". In the United States overall, according to a 2010 CDC "report card", 43% of babies are breastfed until 6 months and 22.4% are breastfed until 12 months, though breastfeeding rates varied among the states. Breastfeeding rates in the U.S. at 6 months rose from 34.2% in 2000 to 43.5% in 2006 and the rates at 12 months rose from 15.7% in 2000 to 22.7% in 2006. The U.S. Healthy People 2010 goals were to have at least 60% of babies exclusively breastfed at 3 months and 25% of babies exclusively breastfed at 6 months, so this goal has yet to be met. There have been several cases in the United States where children have been taken away from their mother's care because the courts or government agencies found the mother's extended breastfeeding to be inappropriate. In 1992, a New York mother lost custody of her child for a year. She was still breastfeeding the child at age 3 and had reported experiences of sexual arousal while breastfeeding the child. The authorities took the child from the home in the fear that the mother might sexually abuse the child.
Later, the social service agency that took over the case said that there was more to the case than could be released to the press due to confidentiality laws. In 2000, an Illinois child was removed from the mother's care after a judge ruled that the child might suffer emotional damage as a result of not being weaned. The child was later returned to the mother and the judge vacated the finding of neglect. A social service agency in Colorado removed a 5-year-old child from the mother because she was still breastfeeding, but the court ordered the child returned to its family immediately. Africa Guinea-Bissau In Guinea-Bissau, the average length of breastfeeding is 22.6 months. Asia and Oceania India In India, mothers commonly breastfed their children until 2 to 3 years of age. Cows milk is given in combination with breast milk though use of formula has been on the rise. As of November 2012, the Ministry of Women and Child Development, with UNICEF as a technical partner, have kicked off a nationwide campaign to promote exclusive breastfeeding to infants up to the age of six months - one among a series of advisories it is issuing - as part of an awareness program targeted at eradicating malnutrition in the country. Indian actor Aamir Khan serves as the brand ambassador, and has acted in numerous televised public service announcements. Philippines In the Philippines, the Implementing Rules and Regulations of the Milk Code require that breastfeeding be encouraged for babies up to the age of 2 years old or beyond. Under the same code, it is illegal to advertise infant formula or breastmilk substitutes intended for children 24 months old and below. However, a 2008 WHO survey found that on average, mothers in the Philippines breastfed their babies until 14 months of age, with breastfeeding lasting up to 17 months on average in rural areas. Almost 58% of mothers surveyed around the nation were still breastfeeding their babies when the babies were a year old, and 34.2% of mothers were still breastfeeding when their babies were 2 years old. In 2012, it was reported that legislation had been introduced which would narrow down the application of the Milk Code (reducing the period recommending against artificial baby foods for babies from 0 to 36 months to 0 to six months only), would lift the restriction on donations of artificial milk products in emergency situations (encouraging mothers with disabilities to shift to milk substitutes instead of encouraging them to continue breastfeeding assisted by support persons), would change the legally mandated lactation break period for breastfeeding mothers from paid to unpaid status, and would remove the ban on milk companies giving away free samples of artificial milk products in the health care system. In religion Islam The central scripture of Islam, al-Quran, instructs that children be breastfed for two years from birth. Islam relies on the Islamic calendar, in which "year" refers to a lunar year of 12 lunar cycles, totaling 354 days in length, potentially with the addition of 1 day for a leap year.
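Because the article notes that the Islamic calendar's year is a 354-day lunar year, the Quranic "two years" of breastfeeding is slightly shorter than two Gregorian years. The short Kotlin sketch below is illustrative only and simply makes that arithmetic explicit using the figures stated above:

fun main() {
    val lunarYearDays = 354                 // 12 lunar cycles, as stated in the article
    val gregorianYearDays = 365.25
    val twoLunarYearsDays = 2 * lunarYearDays                              // 708 days
    val inGregorianMonths = twoLunarYearsDays / (gregorianYearDays / 12.0) // ≈ 23.3 months
    println("Two lunar years ≈ $twoLunarYearsDays days ≈ $inGregorianMonths Gregorian months")
}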
Biology and health sciences
Health and fitness: General
Health
26354203
https://en.wikipedia.org/wiki/Batomorphi
Batomorphi
Batomorphi is a clade of cartilaginous fishes, commonly known as rays, this taxon is also known as the superorder Batoidea, but the 5th edition of Fishes of the World classifies it as the division Batomorphi. They and their close relatives, the sharks, compose the subclass Elasmobranchii. Rays are the largest group of cartilaginous fishes, with well over 600 species in 26 families. Rays are distinguished by their flattened bodies, enlarged pectoral fins that are fused to the head, and gill slits that are placed on their ventral surfaces. Anatomy Batoids are flat-bodied, and, like sharks, are cartilaginous fish, meaning they have a boneless skeleton made of a tough, elastic cartilage. Most batoids have five ventral slot-like body openings called gill slits that lead from the gills, but the Hexatrygonidae have six. Batoid gill slits lie under the pectoral fins on the underside, whereas a shark's are on the sides of the head. Most batoids have a flat, disk-like body, with the exception of the guitarfishes and sawfishes, while most sharks have a spindle-shaped body. Many species of batoid have developed their pectoral fins into broad flat wing-like appendages. The anal fin is absent. The eyes and spiracles are located on top of the head. Batoids have a ventrally located mouth and can considerably protrude their upper jaw (palatoquadrate cartilage) away from the cranium to capture prey. The jaws have euhyostylic type suspension, which relies completely on the hyomandibular cartilages for support. Bottom-dwelling batoids breathe by taking water in through the spiracles, rather than through the mouth as most fish do, and passing it outward through the gills. Reproduction Batoids reproduce in a number of ways. As is characteristic of elasmobranchs, batoids undergo internal fertilization. Internal fertilization is advantageous to batoids as it conserves sperm, does not expose eggs to consumption by predators, and ensures that all the energy involved in reproduction is retained and not lost to the environment. All skates and some rays are oviparous (egg laying) while other rays are ovoviviparous, meaning that they give birth to young which develop in a womb but without involvement of a placenta. The eggs of oviparous skates are laid in leathery egg cases that are commonly known as mermaid's purses and which often wash up empty on beaches in areas where skates are common. Capture-induced premature birth and abortion (collectively called capture-induced parturition) occurs frequently in sharks and rays when fished. Capture-induced parturition is rarely considered in fisheries management despite being shown to occur in at least 12% of live bearing sharks and rays (88 species to date). Habitat Most species live on the sea floor, in a variety of geographical regions – mainly in coastal waters, although some live in deep waters to at least . Most batoids have a cosmopolitan distribution, preferring tropical and subtropical marine environments, although there are temperate and cold-water species. Only a few species, like manta rays, live in the open sea, and only a few live in freshwater, while some batoids can live in brackish bays and estuaries. Feeding Most batoids have developed heavy, rounded teeth for crushing the shells of bottom-dwelling species such as snails, clams, oysters, crustaceans, and some fish, depending on the species. Manta rays feed on plankton. Evolution Batoids belong to the ancient lineage of cartilaginous fishes. 
Fossil denticles (tooth-like scales in the skin) resembling those of today's chondrichthyans date at least as far back as the Ordovician, with the oldest unambiguous fossils of cartilaginous fish dating from the middle Devonian. A clade within this diverse family, the Neoselachii, emerged by the Triassic, with the best-understood neoselachian fossils dating from the Jurassic. The oldest confirmed ray is Antiquaobatis, from the Pliensbachian of Germany. The clade is represented today by sharks, sawfish, rays and skates. Classification Molecular evidence refutes the hypothesis that skates and rays are derived sharks. The monophyly of the skates, the stingrays, and the electric rays has long been generally accepted. Along with Rhinopristiformes, these comprise the four traditionally accepted major batoid lineages, as in Nelson's 2006 Fishes of the World. However, the exact phylogeny of the major batoid lineages, internally and with respect to one another, has been subject to diverse treatments. The following cladogram is based on a comprehensive morphological assessment of batoid phylogeny published in 2004: However, a 2011 study significantly reevaluated the phylogeny of batoids, using nuclear and mitochondrial DNA from 37 taxa, representing almost all recognized families and all of the traditional four major lineages. This is a far more numerous and diverse set of sample taxa than in any previous study, producing findings reflected in the cladogram below. This study strongly confirmed the traditionally accepted internal monophyly of skates, stingrays, and electric rays. It also recovered panrays as sister to the stingrays, as older morphological analyses had suggested. However, it found the Rhinopristiformes, including the sawfishes and various "guitarfishes", to be paraphyletic, comprising two distinct clades. Referred to as "Guitarfishes 1" and "Guitarfishes 2", the former contains only the Trygonorrhinidae, while the latter contains the remainder of Rhinopristiformes (the families Glaucostegidae, Pristidae, Rhinidae, and Rhinobatidae). In addition, while traditional phylogenies often find electric rays to be the basalmost batoids, followed by the Rhinopristiformes, this analysis finds a polytomy between skates, electric rays, and thornbacks at the base of Batoidea, with weak support for skates being the actual most basal lineage, followed by a clade uniting the electric rays and thornbacks. The Mesozoic Sclerorhynchoidea are basal or incertae sedis; they show features of the Rajiformes but have snouts resembling those of sawfishes. However, evidence indicates they are probably the sister group to sawfishes. Eschmeyer's Catalog of Fishes classifies the rays as follows: Order Torpediniformes Family Platyrhinidae D. S. Jordan, 1923 (thornbacks or fanrays) Family Narkidae Fowler, 1934 (sleeper rays) Family Narcinidae Gill, 1862 (electric rays) Family Hypnidae Gill, 1862 (coffin rays) Family Torpedinidae Henle, 1834 (torpedo electric rays or torpedo rays) Order Rhinopristiformes Family Trygonorrhinidae Last, Séret & Naylor, 2016 (fiddler rays or banjo rays) Family Rhinobatidae Bonaparte, 1835 (guitarfishes) Family Rhinidae J. P.
Müller & Henle, 1841 (bowmouth guitarfishes or wedgefishes) Family Glaucostegidae Last, Séret & Naylor, 2016 (giant guitarfishes) Family Pristidae Bonaparte, 1835 (sawfishes) Order Rajiformes Family Rajidae Blainville, 1816 (hardnose skates) Family Arhynchobatidae Fowler, 1934 (softnose skates or longtail skates) Family Gurgesiellidae de Buen, 1959 (pygmy skates) Family Anacanthobatidae von Bonde & Swart, 1923 (legskates or smooth skates) Order Myliobatiformes Family Zanobatidae Fowler. 1934 (panrays) Family Hexatrygonidae Heemstra & M. M. Smith, 1980 (sixgill stingrays) Family Dasyatidae D. S. Jordan & Gilbert, 1879 (whiptail stingrays) Subfamily Dasyatinae D. S. Jordan & Gilbert, 1879 (stingrays) Subfamily Neotrygoninae Castelnau, 1873 (shortsnout stingrays) Subfamily Urogymninae Gray, 1851 (whiprays) Subfamily Hypolophinae Stromer, 1910 (cowtail stingrays) Family Potamotrygonidae Garman, 1877 (neotropical stingrays) Subfamily Styracurinae Carvalho, Loboda & da Silva 2016 (whiptail stingrays) Subfamily Potamotrygoninae Garman 1877 (river stingrays) Family Urotrygonidae McEachran, Dunn & Miyake, 1996 (American round stingrays) Family Gymnuridae Fowler, 1934 (butterfly rays) Family Plesiobatidae K. Nishida, 1990 (deepwater stingrays or giant stingarees) Family Urolophidae J. P. Müller & Henle 1841 (round stingrays or stingarees) Family Aetobatidae Agassiz, 1858 (pelagic eagle rays) Family Myliobatidae Bonaparte, 1835 (eagle rays) Family Rhinopteridae D. S, Jordan & Evermann, 1896 (cownose rays) Family Mobulidae Gill, 1893 (mantas or devil rays) Conservation According to a 2021 study in Nature, the number of oceanic sharks and rays has declined globally by 71% over the preceding 50 years, jeopardising "the health of entire ocean ecosystems as well as food security for some of the world's poorest countries". Overfishing has increased the global extinction risk of these species to the point where three-quarters are now threatened with extinction. This is notably the case in the Mediterranean Sea - most impacted by unregulated fishing - where a recent international survey of the Mediterranean Science Commission concluded that only 38 species of rays and skates still subsisted. Differences between sharks and rays All sharks and rays are cartilaginous fish, contrasting with bony fishes. Many rays are adapted for feeding on the bottom. Guitarfishes are somewhat between sharks and rays, displaying characteristics of both (though they are classified as rays).
Biology and health sciences
Batoidea
null
41819039
https://en.wikipedia.org/wiki/Kotlin%20%28programming%20language%29
Kotlin (programming language)
Kotlin () is a cross-platform, statically typed, general-purpose high-level programming language with type inference. Kotlin is designed to interoperate fully with Java, and the JVM version of Kotlin's standard library depends on the Java Class Library, but type inference allows its syntax to be more concise. Kotlin mainly targets the JVM, but also compiles to JavaScript (e.g., for frontend web applications using React) or native code via LLVM (e.g., for native iOS apps sharing business logic with Android apps). Language development costs are borne by JetBrains, while the Kotlin Foundation protects the Kotlin trademark. On 7 May 2019, Google announced that the Kotlin programming language had become its preferred language for Android app developers. Since the release of Android Studio 3.0 in October 2017, Kotlin has been included as an alternative to the standard Java compiler. The Android Kotlin compiler emits Java 8 bytecode by default (which runs in any later JVM), but can also target Java 9 through 20 bytecode for optimization or access to newer features; bidirectional interoperability with JVM record classes, introduced in Java 16, has been considered stable as of Kotlin 1.5. Kotlin has support for the web with Kotlin/JS, through an intermediate representation-based backend which has been declared stable since version 1.8, released December 2022. Kotlin/Native (e.g. for Apple silicon support) has been declared stable since version 1.9.20, released November 2023. History Name The name is derived from Kotlin Island, a Russian island in the Gulf of Finland, near Saint Petersburg. Andrey Breslav, Kotlin's former lead designer, mentioned that the team decided to name it after an island, in imitation of the Java programming language which shares a name with the Indonesian island of Java. Development In July 2011, JetBrains unveiled Project Kotlin, a new language for the JVM, which had been under development for a year. JetBrains lead Dmitry Jemerov said that most languages did not have the features they were looking for, with the exception of Scala. However, he cited the slow compilation time of Scala as a deficiency. One of the stated goals of Kotlin is to compile as quickly as Java. In February 2012, JetBrains open sourced the project under the Apache 2 license. JetBrains hoped that the new language would drive IntelliJ IDEA sales. The first commit to the Kotlin Git repository was on November 8, 2010. Kotlin 1.0 was released on February 15, 2016. This is considered to be the first officially stable release and JetBrains has committed to long-term backwards compatibility starting with this version. At Google I/O 2017, Google announced first-class support for Kotlin on Android. Kotlin 1.2 was released on November 28, 2017. A feature for sharing code between the JVM and JavaScript platforms was newly added in this release (multiplatform programming has since been upgraded from "experimental" to a beta feature). A full-stack demo has been made with the new Kotlin/JS Gradle Plugin. Kotlin 1.3 was released on 29 October 2018, adding support for coroutines for use with asynchronous programming. On 7 May 2019, Google announced that the Kotlin programming language is now its preferred language for Android app developers. Kotlin 1.4 was released in August 2020, with e.g. some slight changes to the support for Apple's platforms, i.e. to the Objective-C/Swift interop. Kotlin 1.5 was released in May 2021. Kotlin 1.6 was released in November 2021.
Kotlin 1.7 was released in June 2022, including the alpha version of the new Kotlin K2 compiler. Kotlin 1.8 was released in December 2022; 1.8.0 was released on January 11, 2023. Kotlin 1.9 was released in July 2023; 1.9.0 was released on July 6, 2023. Kotlin 2.0 was released in May 2024; 2.0.0 was released on May 21, 2024. Design Development lead Andrey Breslav has said that Kotlin is designed to be an industrial-strength object-oriented language, and a "better language" than Java, but still be fully interoperable with Java code, allowing companies to make a gradual migration from Java to Kotlin. Borrowing from Scala, semicolons are optional as a statement terminator; in most cases a newline is sufficient for the compiler to deduce that the statement has ended. Also as in Scala, Kotlin variable declarations and parameter lists have the data type come after the variable name (and with a colon separator), similar to Ada, BASIC, Pascal, TypeScript and Rust. This, according to an article from Roman Elizarov, current project lead, results in alignment of variable names and is more pleasing to the eye, especially when there are a few variable declarations in succession, and one or more of the types is too complex for type inference, or needs to be declared explicitly for human readers to understand. Again borrowing from Scala, variables in Kotlin can be read-only, declared with the val keyword, or mutable, declared with the var keyword. Likewise following Scala, class members are public by default, and classes themselves are final by default, meaning that creating a derived class is disabled unless the base class is declared with the open keyword. In addition to the classes and member functions (which are equivalent to methods) of object-oriented programming, Kotlin also supports procedural programming with the use of functions. Kotlin functions and constructors support default arguments, variable-length argument lists, named arguments, and overloading by unique signature. Class member functions are virtual, i.e. dispatched based on the runtime type of the object they are called on. Kotlin 1.3 added support for contracts, which are stable for the standard library declarations, but still experimental for user-defined declarations. Contracts are inspired by Eiffel's design by contract programming paradigm. Following ScalaJS, Kotlin code may be transpiled to JavaScript, allowing for interoperability between code written in the two languages. This can be used either to write full web applications in Kotlin, or to share code between a Kotlin backend and a JavaScript frontend. Syntax Procedural programming style Kotlin relaxes Java's restriction of allowing static methods and variables to exist only within a class body. Static objects and functions can be defined at the top level of the package without needing a redundant class level. For compatibility with Java, Kotlin provides a JvmName annotation which specifies a class name used when the package is viewed from a Java project. For example, @file:JvmName("JavaClassName"). Main entry point As in C, C++, C#, Java, and Go, the entry point to a Kotlin program is a function named "main", which may be passed an array containing any command-line arguments. Declaring this parameter has been optional since Kotlin 1.3. Perl, PHP, and Unix shell–style string interpolation is supported. Type inference is also supported. // Hello, World!
Main entry point As in C, C++, C#, Java, and Go, the entry point to a Kotlin program is a function named "main", which may be passed an array containing any command-line arguments. This parameter has been optional since Kotlin 1.3. Perl, PHP, and Unix shell–style string interpolation is supported. Type inference is also supported.

// Hello, World! example
fun main() {
    val scope = "World"
    println("Hello, $scope!")
}

fun main(args: Array<String>) {
    for (arg in args)
        println(arg)
}

Extension functions Similar to C#, Kotlin allows adding an extension function to any class without the formalities of creating a derived class with new functions. An extension function has access to all the public interface of a class, which it can use to create a new function interface to a target class. An extension function will appear exactly like a function of the class and will be shown in code completion inspection of class functions. For example:

package com.example.myStringExtensions

fun String.lastChar(): Char = get(length - 1)

>>> println("Kotlin".lastChar())

By placing the preceding code in the top level of a package, the String class is extended to include a lastChar() function that was not included in the original definition of the String class.

// Overloading the '+' operator using an extension function
// (assumes a data class Point(val x: Int, val y: Int), consistent with the output shown below)
operator fun Point.plus(other: Point): Point {
    return Point(x + other.x, y + other.y)
}

>>> val p1 = Point(10, 20)
>>> val p2 = Point(30, 40)
>>> println(p1 + p2)
Point(x=40, y=60)

Scope functions Kotlin has five scope functions, which allow the changing of scope within the context of an object. The scope functions are let, run, with, apply, and also; a short sketch of two of them appears at the end of this subsection, after the example of final and open classes. Unpack arguments with spread operator Similar to Python, the spread operator asterisk (*) unpacks an array's contents as individual arguments to a function, e.g.:

fun main(args: Array<String>) {
    val list = listOf("args: ", *args)
    println(list)
}

Destructuring declarations Destructuring declarations decompose an object into multiple variables at once, e.g. a 2D coordinate object might be destructured into two integers, x and y. For example, a map entry supports destructuring to simplify access to its key and value fields:

for ((key, value) in map)
    println("$key: $value")

Nested functions Kotlin allows local functions to be declared inside of other functions or methods.

class User(val id: Int, val name: String, val address: String)

fun saveUserToDb(user: User) {
    fun validate(user: User, value: String, fieldName: String) {
        require(value.isNotEmpty()) { "Can't save user ${user.id}: empty $fieldName" }
    }

    validate(user, user.name, "Name")
    validate(user, user.address, "Address")
    // Save user to the database ...
}

Classes are final by default In Kotlin, to derive a new class from a base class type, the base class needs to be explicitly marked as "open". This is in contrast to most object-oriented languages such as Java, where classes are open by default. Example of a base class that is open to deriving a new subclass from it:

// open on the class means this class will allow derived classes
open class MegaButton {
    // a function without open cannot be overridden in a derived class
    fun disable() { ... }

    // open on a function means that the function may be overridden in a derived class,
    // allowing polymorphic behavior
    open fun animate() { ... }
}

class GigaButton: MegaButton() {
    // Explicit use of the override keyword is required to override a function in a derived class
    override fun animate() {
        println("Giga Click!")
    }
}
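As noted under "Scope functions" above, here is a minimal sketch of two of the five scope functions; the Person class and the values are invented for this example and are not taken from the article:

data class Person(var name: String, var age: Int)

fun main() {
    // apply runs the block on the object and returns the object itself,
    // which makes it convenient for configuring a newly created instance
    val person = Person("Ada", 36).apply { age += 1 }

    // let runs the block with the object available as 'it' and returns the block's result
    val description = person.let { "${it.name} is ${it.age}" }

    println(description) // Ada is 37
}

The remaining scope functions (run, with, and also) differ mainly in how the context object is referenced inside the block (this versus it) and in whether the object itself or the block's result is returned.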
Abstract classes are open by default Abstract classes define abstract or "pure virtual" placeholder functions that will be defined in a derived class. Abstract classes are open by default.

// No need for the open keyword here, it’s already open by default
abstract class Animated {
    // This virtual function is already open by default as well
    abstract fun animate()

    open fun stopAnimating() { }

    fun animateTwice() { }
}

Classes are public by default Kotlin provides the following keywords to restrict visibility for top-level declarations, such as classes, and for class members: public, internal, protected, and private. The modifiers have different meanings depending on whether they are applied to a class member or to a top-level declaration. Example:

// Class is visible only to the current module
internal open class TalkativeButton {
    // method is visible only to the current class
    private fun yell() = println("Hey!")

    // method is visible to the current class and derived classes
    protected fun whisper() = println("Let's talk!")
}

internal class MyTalkativeButton: TalkativeButton() {
    fun utter() = super.whisper()
}

MyTalkativeButton().utter()

Primary constructor vs. secondary constructors Kotlin supports the specification of a "primary constructor" as part of the class definition itself, consisting of an argument list following the class name. This argument list supports an expanded syntax on Kotlin's standard function argument lists that enables declaration of class properties in the primary constructor, including visibility, extensibility, and mutability attributes. Additionally, when defining a subclass, properties in super-interfaces and super-classes can be overridden in the primary constructor.

// Example of a class using primary constructor syntax
// (only one constructor is required for this class)
open class BaseUser(open var isSubscribed: Boolean)

open class PowerUser(protected val nickname: String, final override var isSubscribed: Boolean = true): BaseUser(isSubscribed) { }

However, in cases where more than one constructor is needed for a class, a more general constructor can be defined using secondary constructor syntax, which closely resembles the constructor syntax used in most object-oriented languages like C++, C#, and Java.

// Example of a class using secondary constructor syntax
// (more than one constructor is required for this class)
class Context
class AttributeSet

open class View(ctx: Context) {
    constructor(ctx: Context, attr: AttributeSet): this(ctx)
}

class MyButton : View {
    // Constructor #1
    constructor(ctx: Context) : super(ctx) { }

    // Constructor #2
    constructor(ctx: Context, attr: AttributeSet) : super(ctx, attr) {
        // ...
    }
}

Sealed classes Sealed classes and interfaces restrict subclass hierarchies, giving the author more control over the inheritance hierarchy. Declaration of a sealed interface and a sealed class:

sealed interface Expr
sealed class Job

All the subclasses of a sealed class are defined at compile time. No new subclasses can be added to it after the compilation of the module containing the sealed class. For example, a sealed class in a compiled jar file cannot be subclassed.

sealed class Vehicle

data class Car(val brandName: String, val owner: String, val color: String): Vehicle()
class Bike(val brandName: String, val owner: String, val color: String): Vehicle()
class Tractor(val brandName: String, val owner: String, val color: String): Vehicle()

val kiaCar = Car("KIA", "John", "Blue")
val hyundaiCar = Car("Hyundai", "Britto", "Green")

Data classes Kotlin's data class construct defines classes whose primary purpose is storing data, similar to Java's record types. Like Java's record types, the construct is similar to normal classes except that the key methods equals, hashCode and toString are automatically generated from the class properties. Unlike Java's record types, however, a Kotlin data class can itself extend another class, as the Car data class above extends Vehicle.
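The members generated for a data class can be seen in a short sketch like the following; the class and values are invented for illustration and are not taken from the article:

data class Coordinates(val x: Int, val y: Int)

fun main() {
    val a = Coordinates(1, 2)
    val b = Coordinates(1, 2)

    println(a)             // Coordinates(x=1, y=2) -- generated toString
    println(a == b)        // true -- generated equals compares the properties
    val (x, y) = a         // generated componentN functions enable destructuring
    println(x + y)         // 3
    println(a.copy(y = 5)) // Coordinates(x=1, y=5) -- generated copy
}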
Kotlin interactive shell

$ kotlinc-jvm
type :help for help; :quit for quit
>>> 2 + 2
4
>>> println("Hello, World!")
Hello, World!

Kotlin as a scripting language Kotlin can also be used as a scripting language. A script is a Kotlin source file using the .kts filename extension, with executable source code at the top-level scope:

// list_folders.kts
import java.io.File

val folders = File(args[0]).listFiles { file -> file.isDirectory() }
folders?.forEach(::println)

Scripts can be run by passing the -script option and the corresponding script file to the compiler.

$ kotlinc -script list_folders.kts "path_to_folder_to_inspect"

Null safety Kotlin makes a distinction between nullable and non-nullable data types. All nullable objects must be declared with a "?" postfix after the type name. Operations on nullable objects need special care from developers: a null-check must be performed before using the value, either explicitly or with the aid of Kotlin's null-safe operators. The ?. operator (the safe navigation operator) can be used to safely access a method or property of a possibly null object; if the object is null, the method will not be called and the expression evaluates to null. The ?: operator (the null coalescing operator) is a binary operator that returns the first operand if it is non-null, and the second operand otherwise. It is often referred to as the Elvis operator, due to its resemblance to an emoticon representation of Elvis Presley. An illustrative sketch of both operators is given below, after the "Hello world" example. Lambdas Kotlin provides support for higher-order functions and anonymous functions, or lambdas.

// the following function takes a lambda, f, and executes f passing it the string "lambda"
// note that (String) -> Unit indicates a lambda with a String parameter and Unit return type
fun executeLambda(f: (String) -> Unit) {
    f("lambda")
}

Lambdas are declared using braces, { }. If a lambda takes parameters, they are declared within the braces and followed by the -> operator.

// the following statement defines a lambda that takes a single parameter and passes it to the println function
val l = { c: Any? -> println(c) }

// lambdas with no parameters may simply be defined using { }
val l2 = { print("no parameters") }

"Hello world" example (taken from and explained at https://kotlinlang.org/docs/kotlin-tour-hello-world.html):

fun main() {
    println("Hello, world!")
    // Hello, world!
}
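As referenced in the null safety section above, here is a minimal sketch of the safe navigation and Elvis operators; the variable names are invented for this example and are not taken from the article:

fun main() {
    val name: String? = null   // a nullable type, declared with the "?" postfix

    // Safe navigation: evaluates to null instead of throwing when name is null
    println(name?.length)      // prints "null"

    // Elvis operator: use the left operand if it is non-null, otherwise the right one
    val length = name?.length ?: 0
    println(length)            // prints "0"
}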
Tools Android Studio (based on IntelliJ IDEA) has official support for Kotlin, starting from Android Studio 3. Integration with common Java build tools is supported, including Apache Maven, Apache Ant, and Gradle. Emacs has a Kotlin Mode in its MELPA package repository. JetBrains also provides a plugin for Eclipse. IntelliJ IDEA has plug-in support for Kotlin; IntelliJ IDEA 15 was the first version to bundle the Kotlin plugin in the IntelliJ Installer, and to provide Kotlin support out of the box. Kotlin also has seamless integration with Gradle, a popular build automation tool, which can be used to build, test, and manage the lifecycle of Kotlin projects. Applications When Kotlin was announced as an official Android development language at Google I/O in May 2017, it became the third language fully supported for Android, after Java and C++. Kotlin is the most widely used language on Android, with Google estimating that 70% of the top 1,000 apps on the Play Store are written in Kotlin. Google itself has 60 apps written in Kotlin, including Maps and Drive. Many Android apps, such as Google Home, are in the process of being migrated to Kotlin, and therefore use both Kotlin and Java. Kotlin on Android is seen as beneficial for its null-pointer safety, as well as for its features that make for shorter, more readable code. In addition to its prominent use on Android, Kotlin is gaining traction in server-side development. The Spring Framework officially added Kotlin support with version 5, on 4 January 2017. To further support Kotlin, Spring has translated all its documentation to Kotlin, and added built-in support for many Kotlin-specific features such as coroutines. In addition to Spring, JetBrains has produced a Kotlin-first framework called Ktor for building web applications. In 2020, JetBrains found in a survey of developers who use Kotlin that 56% were using Kotlin for mobile apps, while 47% were using it for a web back-end. Just over a third of all Kotlin developers said that they were migrating to Kotlin from another language. Most Kotlin users were targeting Android (or otherwise on the JVM), with only 6% using Kotlin Native. Adoption In 2018, Kotlin was the fastest growing language on GitHub, with 2.6 times more developers compared to 2017. It is the fourth most loved programming language according to the 2020 Stack Overflow Developer Survey. Kotlin was also awarded the O'Reilly Open Source Software Conference Breakout Award for 2019. Many companies and organizations have used Kotlin for backend development, including Allegro, Amazon, Atlassian, Cash App, Flux, Google, Gradle, JetBrains, Meshcloud, the Norwegian Tax Administration, OLX, Pivotal, Rocket Travel, Shazam, and Zalando. Some companies and organizations have used Kotlin for web development, including Barclays Bank, Data2viz, Fritz2, and JetBrains. A number of companies have publicly stated they were using Kotlin, among them Basecamp, Coursera, DripStat, Duolingo, Netflix, Pinterest, Trello, and Uber; Corda, a distributed ledger developed by a consortium of well-known banks (such as Goldman Sachs, Wells Fargo, J.P. Morgan, Deutsche Bank, UBS, HSBC, BNP Paribas, and Société Générale), has over 90% Kotlin code in its codebase.
Technology
Programming languages
null
24951890
https://en.wikipedia.org/wiki/Oceanic%20zone
Oceanic zone
The oceanic zone is typically defined as the area of the ocean lying beyond the continental shelf (e.g. the neritic zone), but operationally is often referred to as beginning where the water depths drop to below , seaward from the coast into the open ocean with its pelagic zone. It is the region of open sea beyond the edge of the continental shelf and includes 65% of the ocean's completely open water. The oceanic zone has a wide array of undersea terrain, including trenches that are often deeper than Mount Everest is tall, as well as deep-sea volcanoes and basins. While it is often difficult for life to sustain itself in this type of environment, many species have adapted and do thrive in the oceanic zone. The open ocean is vertically divided into four zones: the sunlight zone, twilight zone, midnight zone, and abyssal zone. Sub zones The Mesopelagic (disphotic) zone, which is where only small amounts of light penetrate, lies below the Epipelagic zone. This zone is often referred to as the "Twilight Zone" due to its scarce amount of light. Temperatures in the Mesopelagic zone range from . The pressure is higher here; it can be up to and increases with depth. 54% of the ocean lies in the Bathypelagic (aphotic) zone, into which no light penetrates. This is also called the midnight zone and the deep ocean. Due to the complete lack of sunlight, photosynthesis cannot occur and the only light source is bioluminescence. Water pressure is very intense and the temperatures are near freezing (range ). Marine life Oceanographers have divided the ocean into zones based on how far light reaches. All of the light zones can be found in the oceanic zone. The epipelagic zone is the one closest to the surface and is the best lit. It extends to 100 meters and contains both phytoplankton and zooplankton that can support larger organisms like marine mammals and some types of fish. Past 100 meters, not enough light penetrates the water to support photosynthesis, so no plant life exists. There are creatures, however, which thrive around hydrothermal vents, or geysers located on the ocean floor that expel superheated water that is rich in minerals. These organisms feed off of chemosynthetic bacteria, which use the superheated water and chemicals from the hydrothermal vents to create energy in place of photosynthesis. The existence of these bacteria allows creatures like squids, hatchet fish, octopuses, tube worms, giant clams, spider crabs and other organisms to survive. Due to the total darkness in the zones past the epipelagic zone, many organisms that survive in the deep oceans do not have eyes, and other organisms make their own light with bioluminescence. Two chemicals, luciferin and luciferase, react with one another to create a soft glow. Often the light is blue-green in color, because many marine organisms are sensitive to blue light. The process by which bioluminescence is created is very similar to what happens when a glow stick is broken. Deep-sea organisms use bioluminescence for everything from luring prey to navigation. Animals such as fish, whales, and sharks are found in the oceanic zone.
Physical sciences
Oceanography
Earth science
40408388
https://en.wikipedia.org/wiki/Fast%20radio%20burst
Fast radio burst
In radio astronomy, a fast radio burst (FRB) is a transient radio pulse of length ranging from a fraction of a millisecond, for an ultra-fast radio burst, to 3 seconds, caused by some high-energy astrophysical process not yet understood. Astronomers estimate the average FRB releases as much energy in a millisecond as the Sun puts out in three days. While extremely energetic at their source, the strength of the signal reaching Earth has been described as 1,000 times less than from a mobile phone on the Moon. The first FRB was discovered by Duncan Lorimer and his student David Narkevic in 2007 when they were looking through archival pulsar survey data, and it is therefore commonly referred to as the Lorimer Burst. Many FRBs have since been recorded, including several that have been detected to repeat in seemingly irregular ways. Only one FRB has been detected to repeat in a regular way: FRB 180916 seems to pulse every 16.35 days. Most FRBs are extragalactic, but the first Milky Way FRB was detected by the CHIME radio telescope in April 2020. In June 2021, astronomers reported over 500 FRBs from outer space detected in one year. When the FRBs are polarized, it indicates that they are emitted from a source contained within an extremely powerful magnetic field. The exact origin and cause of the FRBs is still the subject of investigation; proposals for their origin range from a rapidly rotating neutron star and a black hole, to extraterrestrial intelligence. In 2020, astronomers reported narrowing down a source of fast radio bursts, which may now plausibly include "compact-object mergers and magnetars arising from normal core collapse supernovae". A neutron star has been proposed as the origin of an unusual FRB with periodic peaks lasting over 3 seconds reported in 2022. The discovery in 2012 of the first repeating source, FRB 121102, and its localization and characterization in 2017, has improved the understanding of the source class. FRB 121102 is identified with a galaxy at a distance of approximately three billion light-years and is embedded in an extreme environment. The first host galaxy identified for a non-repeating burst, FRB 180924, was identified in 2019 and is a much larger and more ordinary galaxy, nearly the size of the Milky Way. In August 2019, astronomers reported the detection of eight more repeating FRB signals. In January 2020, astronomers reported the precise location of a second repeating burst, FRB 180916. One FRB seems to have been in the same location as a known gamma-ray burst. On 28 April 2020, a pair of millisecond-timescale bursts (FRB 200428) consistent with observed fast radio bursts, with a fluence of >1.5 million Jy ms, was detected from the same area of sky as the magnetar SGR 1935+2154. Although it was thousands of times less intrinsically bright than previously observed fast radio bursts, its comparative proximity rendered it the most powerful fast radio burst yet observed, reaching a peak flux of either a few thousand or several hundred thousand janskys, comparable to the brightness of the radio sources Cassiopeia A and Cygnus A at the same frequencies. This established magnetars as, at least, one ultimate source of fast radio bursts, although the exact cause remains unknown. Further studies support the notion that magnetars may be closely associated with FRBs. On 13 October 2021, astronomers reported the detection of hundreds of FRBs from a single system. 
In 2024, an international team led by astrophysicists of INAF, using detections from VLA, NOEMA interferometer, and Gran Telescopio Canarias has conducted a research campaign about FRB20201124A, one of the two known persistent FRB, located about 1.3 billion light-years away. Based on the outcomes of the study, authors deem to confirm the origin of FRBs in a binary system at high accretion rate, that would blow a plasma bubble, responsible for the persistent radio emission. The emission object, i.e. the "bubble", would be immersed in a star-forming region. Detection The first fast radio burst to be described, the Lorimer Burst FRB 010724, was found in 2007 in archived data recorded by the Parkes Observatory on 24 July 2001. Since then, many FRBs have been found in previously recorded data. On 19 January 2015, astronomers at Australia's national science agency (CSIRO) reported that a fast radio burst had been observed for the first time live, by the Parkes Observatory. Many FRBs have been detected in real time by the CHIME radio telescope since it became operational in 2018, including the first FRB detected from within the Milky Way in April 2020. Features Fast radio bursts are bright, unresolved (pointsource-like), broadband (spanning a large range of radio frequencies), millisecond flashes found in parts of the sky. Unlike many radio sources, the signal from a burst is detected in a short period of time with enough strength to stand out from the noise floor. The burst usually appears as a single spike of energy without any change in its strength over time. The bursts last for several milliseconds (thousandths of a second). The bursts come from all over the sky, and are not concentrated on the plane of the Milky Way. Known FRB locations are biased by the parts of the sky that the observatories can image. Many have radio frequencies detected around 1400 MHz; a few have been detected at lower frequencies in the range of 400–800 MHz. The component frequencies of each burst are delayed by different amounts of time depending on the wavelength. This delay is described by a value referred to as a dispersion measure (DM). This results in a received signal that sweeps rapidly down in frequency, as longer wavelengths are delayed more. Extragalactic origin The interferometer UTMOST has put a lower limit of 10,000 kilometers for the distance to the FRBs it has detected, supporting the case for an astronomical, rather than terrestrial, origin (because signal sources on Earth are ruled out as being closer than this limit). This limit can be determined from the fact that closer sources would have a curved wave front that could be detected by the multiple antennas of the interferometer. Fast radio bursts have pulse dispersion measurements , much larger than expected for a source inside the Milky Way galaxy and consistent with propagation through an ionized plasma. Furthermore, their distribution is isotropic (not especially coming from the galactic plane); consequently they are conjectured to be of extragalactic origin. Origin hypotheses Because of the isolated nature of the observed phenomenon, the nature of the source remains speculative. , there is no generally accepted single explanation, although a magnetar has been identified as a possible source. The sources are thought to be a few hundred kilometers or less in size, as the bursts last for only a few milliseconds. 
Causation is limited by the speed of light, about 300 km per millisecond, so if the sources were larger than about 1000 km, a complex synchronization mechanism would be required for the bursts to be so short. If the bursts come from cosmological distances, their sources must be very energetic. One possible explanation would be a collision between very dense objects like merging black holes or neutron stars. It has been suggested that there is a connection to gamma-ray bursts. Some have speculated that these signals might be artificial in origin, that they may be signs of extraterrestrial intelligence, demonstrating veritable technosignatures. Analogously, when the first pulsar was discovered, it was thought that the fast, regular pulses could possibly originate from a distant civilization, and the source nicknamed "LGM-1" (for "little green men"). In 2007, just after the publication of the e-print with the first discovery, it was proposed that fast radio bursts could be related to hyperflares of magnetars. In 2015 three studies supported the magnetar hypothesis. The identification of first FRB from the Milky Way, which originated from the magnetar SGR 1935+2154, indicates that magnetars may be one source of FRB. Especially energetic supernovae could be the source of these bursts. Blitzars were proposed in 2013 as an explanation. In 2014 it was suggested that following dark matter-induced collapse of pulsars, the resulting expulsion of the pulsar magnetospheres could be the source of fast radio bursts. In 2015 it was suggested that FRBs are caused by explosive decays of axion miniclusters. Another exotic possible source are cosmic strings that produced these bursts as they interacted with the plasma that permeated the early Universe. In 2016 the collapse of the magnetospheres of Kerr–Newman black holes were proposed to explain the origin of the FRBs' "afterglow" and the weak gamma-ray transient 0.4 s after GW 150914. It has also been proposed that if fast radio bursts originate in black hole explosions, FRBs would be the first detection of quantum gravity effects. In early 2017, it was proposed that the strong magnetic field near a supermassive black hole could destabilize the current sheets within a pulsar's magnetosphere, releasing trapped energy to power the FRBs. Hypotheses for repeating FRBs Repeated bursts of FRB 121102 have initiated multiple origin hypotheses. A coherent emission phenomenon known as superradiance, which involves large-scale entangled quantum mechanical states possibly arising in environments such as active galactic nuclei, has been proposed to explain these and other associated observations with FRBs (e.g. high event rate, repeatability, variable intensity profiles). In July 2019, astronomers reported that non-repeating Fast Radio Bursts may not be one-off events, but actually FRB repeaters with repeat events that have gone undetected and, further, that FRBs may be formed by events that have not yet been seen or considered. Additional possibilities include that FRBs may originate from nearby stellar flares. A FRB with multiple periodic component peaks lasting over 3 seconds was reported in 2022. A neutron star has been proposed as the origin of this FRB. Bursts observed Naming Fast radio bursts are named by the date the signal was recorded, as "FRB YYMMDD", with a letter appended to distinguish multiple sources first recorded on the same date. 
The name is of the presumed source rather than the burst of radio waves, so repeated or subsequent bursts from the same apparent location (eg, FRB 121102) do not get new date names. 2007 (Lorimer Burst) The first FRB detected, the Lorimer Burst FRB 010724, was discovered in 2007 when Duncan Lorimer of West Virginia University assigned his student David Narkevic to look through archival data taken in 2001 by the Parkes radio dish in Australia. Analysis of the survey data found a 30-jansky dispersed burst which occurred on 24 July 2001, less than 5 milliseconds in duration, located 3° from the Small Magellanic Cloud. The reported burst properties argue against a physical association with the Milky Way galaxy or the Small Magellanic Cloud. The burst became known as the Lorimer Burst. The discoverers argue that current models for the free electron content in the Universe imply that the burst is less than 1 gigaparsec distant. The fact that no further bursts were seen in 90 hours of additional observations implies that it was a singular event such as a supernova or merger of relativistic objects. It is suggested that hundreds of similar events could occur every day and if detected could serve as cosmological probes. 2010 In 2010 there was a report of 16 similar pulses, clearly of terrestrial origin, detected by the Parkes radio telescope and given the name perytons. In 2015 perytons were shown to be generated when microwave oven doors were opened during a heating cycle, with detected emission being generated by the microwave oven's magnetron tube as it was being powered off. 2011 In 2015, FRB 110523 was discovered in archival data collected in 2011 from the Green Bank Telescope. It was the first FRB for which linear polarization was detected (allowing a measurement of Faraday rotation). Measurement of the signal's dispersion delay suggested that this burst was of extragalactic origin, possibly up to 6 billion light-years away. 2012 Victoria Kaspi of McGill University estimated that as many as 10,000 fast radio bursts may occur per day over the entire sky. FRB 121102 An observation in 2012 of a fast radio burst (FRB 121102) in the direction of Auriga in the northern hemisphere using the Arecibo radio telescope confirmed the extragalactic origin of fast radio pulses by an effect known as plasma dispersion. In November 2015, astronomer Paul Scholz at McGill University in Canada, found ten non-periodically repeated fast radio pulses in archival data gathered in May and June 2015 by the Arecibo radio telescope. The ten bursts have dispersion measures and sky positions consistent with the original burst FRB 121102, detected in 2012. Like the 2012 burst, the 10 bursts have a plasma dispersion measure that is three times larger than possible for a source in the Milky Way Galaxy. The team thinks that this finding rules out self-destructive, cataclysmic events that could occur only once, such as the collision between two neutron stars. According to the scientists, the data support an origin in a young rotating neutron star (pulsar), or in a highly magnetized neutron star (magnetar), or from highly magnetized pulsars travelling through asteroid belts, or from an intermittent Roche lobe overflow in a neutron star-white dwarf binary. On 16 December 2016 six new FRBs were reported in the same direction (one having been received on 13 November 2015, four on 19 November 2015, and one on 8 December 2015). 
This is one of only two instances in which these signals have been found twice in the same location in space. FRB 121102 is located at least 1150 AU from Earth, excluding the possibility of a human-made source, and is almost certainly extragalactic in nature. As of April 2018, FRB 121102 is thought to be co-located in a dwarf galaxy about three billion light-years from Earth with a low-luminosity active galactic nucleus, or a previously unknown type of extragalactic source, or a young neutron star energising a supernova remnant. On 26 August 2017, astronomers using data from the Green Bank Telescope detected 15 additional bursts from the repeating source FRB 121102 at 5 to 8 GHz. The researchers also noted that FRB 121102 was at the time in a "heightened activity state, and follow-on observations are encouraged, particularly at higher radio frequencies". The waves are highly polarized and undergo Faraday rotation, a "twisting" of the transverse waves that could have been produced only by passage through hot plasma with an extremely strong magnetic field. This rotation of polarized light is quantified by the rotation measure (RM). FRB 121102's radio bursts have an RM about 500 times higher than those from any other FRB to date. Because it is a repeating FRB source, it is unlikely to originate from a one-time cataclysmic event; one hypothesis, first advanced in January 2018, proposes that these particular repeating bursts may come from a dense stellar core called a neutron star near an extremely powerful magnetic field, such as one near a massive black hole, or one embedded in a nebula. In April 2018, it was reported that FRB 121102 had produced 21 bursts spanning one hour. In September 2018, an additional 72 bursts spanning five hours had been detected using a convolutional neural network. In September 2019, more repeating signals, 20 pulses on 3 September 2019, were reported to have been detected from FRB 121102 by the Five-hundred-meter Aperture Spherical Telescope (FAST). In June 2020, astronomers from Jodrell Bank Observatory reported that FRB 121102 exhibits the same radio-burst behavior ("radio bursts observed in a window lasting approximately 90 days followed by a silent period of 67 days") every 157 days, suggesting that the bursts may be associated with "the orbital motion of a massive star, a neutron star or a black hole". Subsequent studies by FAST of further activity, consisting of 12 bursts within two hours observed on 17 August 2020, support an updated refined periodicity of 156.1 days between active periods. Related studies were reported in October 2021. Further bursts, at least 300, were detected by FAST in August and September 2022. Further related studies were reported in April 2023. In July 2023, 19 new bursts were reported from existing Green Bank Telescope observations of FRB 121102A; eight of these were extremely short, independent bursts lasting between 5 and 15 microseconds, the shortest so far detected. 2013 In 2013, four bursts were identified that supported the likelihood of extragalactic sources. 2014 In 2014, FRB 140514 was caught 'live' and was found to be 21% (±7%) circularly polarised. 2015 FRB 150418 On 18 April 2015, FRB 150418 was detected by the Parkes observatory and within hours, several telescopes including the Australia Telescope Compact Array caught an apparent radio "afterglow" of the flash, which took six days to fade.
The Subaru Telescope was used to find what was thought to be the host galaxy and determine its redshift and the implied distance to the burst. However, the association of the burst with the afterglow was soon disputed, and by April 2016 it was established that the "afterglow" originated from an active galactic nucleus (AGN) that is powered by a supermassive black hole with dual jets blasting outward from the black hole. It was also noted that what was thought to be an afterglow did not fade away as would be expected, supporting the interpretation that it originated in the variable AGN and was not associated with the fast radio burst. 2017 The upgraded Molonglo Observatory Synthesis Telescope (UTMOST), near Canberra (Australia), reported finding three more FRBs. A 180-day three-part survey in 2015 and 2016 found three FRBs at 843 MHz. Each FRB located with a narrow elliptical 'beam'; the relatively narrow band 828–858 MHz gives a less precise dispersion measure (DM). A short survey using part of Australian Square Kilometre Array Pathfinder (ASKAP) found one FRB in 3.4 days. FRB170107 was bright with a fluence of 58±6 Jy ms. According to Anastasia Fialkov and Abraham Loeb, FRB's could be occurring as often as once per second. Earlier research could not identify the occurrence of FRB's to this degree. 2018 Three FRBs were reported in March 2018 by Parkes Observatory in Australia. One (FRB 180309) had the highest signal-to-noise ratio yet seen of 411. The unusual CHIME (Canadian Hydrogen Intensity Mapping Experiment) radio telescope, operational from September 2018, can be used to detect "hundreds" of fast radio bursts as a secondary objective to its cosmological observations. FRB 180725A was reported by CHIME as the first detection of a FRB under 700 MHz – as low as 580 MHz. In October 2018, astronomers reported 19 more new non-repeating FRB bursts detected by the Australian Square Kilometre Array Pathfinder (ASKAP). These included three with dispersion measure (DM) smaller than seen before : FRB 171020 (DM=114.1), FRB 171213 (DM=158.6), FRB 180212 (DM=167.5). FRB 180814 On 9 January 2019, astronomers announced the discovery of a second repeating FRB source, named FRB 180814, by CHIME. Six bursts were detected between August and October 2018, "consistent with originating from a single position on the sky". The detection was made during CHIME's pre-commissioning phase, during which it operated intermittently, suggesting a "substantial population of repeating FRBs", and that the new telescope would make more detections. Some news media reporting of the discovery speculated that the repeating FRB could be evidence of extraterrestrial intelligence, a possibility explored in relation to previous FRBs by some scientists, but not raised by the discoverers of FRB 180814. FRB 180916 FRB 180916, more formally FRB 180916.J0158+65, is a repeating FRB discovered by CHIME, that later studies found to have originated from a medium-sized spiral galaxy (SDSS J015800.28+654253.0) about 500 million light-years away – the closest FRB discovered to date. It is also the first FRB observed to have a regular periodicity. Bursts are clustered into a period of about four days, followed by a dormant period of about 12 days, for a total cycle length of days. 
Additional followup studies of the repeating FRB by the Swift XRT and UVOT instruments were reported on 4 February 2020; by the Sardinia Radio Telescope (SRT) and Medicina Northern Cross Radio Telescope (MNC), on 17 February 2020; and, by the Galileo telescope in Asiago, also on 17 February 2020. Further observations were made by the Chandra X-ray Observatory on 3 and 18 December 2019, with no significant x-ray emissions detected at the FRB 180916 location, or from the host galaxy SDSS J015800.28+654253.0. On 6 April 2020, followup studies by the Global MASTER-Net were reported on The Astronomer's Telegram. On 25 August 2021, further observations were reported. FRB 181112 FRB 181112 was mysteriously unaffected after believed to have passed through the halo of an intervening galaxy. 2019 FRB 180924 FRB 180924 is the first non-repeating FRB to be traced to its source. The source is a galaxy 3.6 billion light-years away. The galaxy is nearly as large as the Milky Way and about 1000 times larger than the source galaxy of FRB 121102. While the latter is an active site of star formation and a likely place for magnetars, the source of FRB 180924 is an older and less active galaxy. Because the FRB was nonrepeating, the astronomers had to scan large areas with the 36 telescopes of ASKAP. Once a signal was found, they used the Very Large Telescope, the Gemini Observatory in Chile, and the W. M. Keck Observatory in Hawaii to identify its host galaxy and determine its distance. Knowledge of the distance and source galaxy properties enables a study of the composition of the intergalactic medium. June 2019 On 28 June 2019, Russian astronomers reported the discovery of nine FRB events (FRB 121029, FRB 131030, FRB 140212, FRB 141216, FRB 151125.1, FRB 151125.2, FRB 160206, FRB 161202, FRB 180321), which include FRB 151125, the third repeating one ever detected, from the direction of the M 31 (Andromeda Galaxy) and M 33 (Triangulum Galaxy) galaxies during the analysis of archive data (July 2012 to December 2018) produced by the BSA/LPI large phased array radio telescope at the Pushchino Radio Astronomy Observatory. FRB 190520 FRB 190520 was observed by the FAST telescope and was localized using the realfast system at the Karl G. Jansky Very Large Array (VLA). Optical observations using the Palomar 200-inch Hale Telescope revealed a host dwarf galaxy at redshift z=0.241. This is the second FRB observed to have an associated Persistent Radio Source (PRS). The dispersion measure(DM) and rotation measure measurements reveals a very dense, magnetized and turbulent environment local to the source. In June 2022, astronomers reported that FRB 20190520B was found to be another repeating FRB. On 12 May 2023, FRB 20190520B was reported to show multiple bursts indicating magnetic field reversal. FRB 190523 On 2 July 2019, astronomers reported that FRB 190523, a non-repeating FRB, has been discovered and, notably, localized to a few-arcsecond region containing a single massive galaxy at a redshift of 0.66, nearly 8 billion light-years away from Earth. August 2019 In August 2019, the CHIME Fast Radio Burst Collaboration reported the detection of eight more repeating FRB signals. FRB 191223 On 29 December 2019, Australian astronomers from the Molonglo Observatory Synthesis Telescope (MOST), using the UTMOST fast radio burst equipment, reported the detection of FRB 191223 in the Octans constellation (RA = 20:34:14.14, DEC = -75:08:54.19). 
FRB 191228 On 31 December 2019, Australian astronomers, using the Australian Square Kilometre Array Pathfinder (ASKAP), reported the detection of FRB 191228 in the Piscis Austrinus constellation (RA = 22:57(2), DEC = -29:46(40)). 2020 FRB 200120E In February & March 2022, astronomers reported that a globular cluster of M81, a grand design spiral galaxy about 12 million light-years away, may be the source of FRB 20200120E, a repeating fast radio burst. FRB 200317 Astronomers reported the discovery of FRB 20200317A (RA 16h22m45s, DEC p+56d44m50s) with FAST (Five-hundred-meter Aperture Spherical radio Telescope) in archival data on 22 September 2023. The detected FRB is "one of the faintest FRB sources detected so far", according to the report. FRB 200428 On 28 April 2020, astronomers at the Canadian Hydrogen Intensity Mapping Experiment (CHIME), reported the detection of a bright radio burst from the direction of the Galactic magnetar SGR 1935+2154 about 30,000 light years away in the Vulpecula constellation. The burst had a DM of 332.8 pc/cc. The STARE2 team independently detected the burst and reported that the burst had a fluence of >1.5 MJy ms, establishing the connection between this burst and FRBs at extragalactic distances. The burst was then referred to as FRB 200428. The detection is notable, as the STARE2 team claim it is the first ever FRB detected inside the Milky Way, and the first ever to be linked to a known source. That link strongly supports the idea that fast radio bursts emanate from magnetars. FRB 200610 On 10 January 2024, astronomers reported that the source of FRB 20200610A was a "rare 'blob-like' group of galaxies". FRBs 200914 and 200919 On 24 September 2020, astronomers reported the detection of two new FRBs, FRB200914 and FRB200919, by the Parkes Radio Telescope. Upper limits on low-frequency emission from FRB 200914 were later reported by the Square Kilometre Array radio telescope project. FRB 201124 On 31 March 2021, the CHIME/FRB Collaboration reported the detection of FRB 20201124A and related multiple bursts within the week of 23 March 2021 — designated as 20210323A, 20210326A, 20210327A, 20210327B, 20210327C, and 20210328A — and later, likely 20210401A and 20210402A. Further related observations were reported by other astronomers on 6 April 2021, 7 April 2021, and many more as well, including an "extremely bright" pulse on 15 April 2021. Source localization improvements were reported on 3 May 2021. Even more observations were reported in May 2021, including "two bright bursts". On 3 June 2021, the SETI Institute announced detecting "a bright double-peaked radio burst" from FRB 201124A on 18 May 2021. Further observations were made by the Neil Gehrels Swift Observatory on 28 July 2021 and 7 August 7, 2021 without detecting a source on either date. On 23 September 2021, 9 new bursts from FRB 20201124A were reported to have been observed with the Effelsberg 100-m Radio Telescope, followed by one CHIME observation, all after four months of no detections. In January and February 2022, further observations of new bursts from FRB 20201124A with the Westerbork-RT1 25-m telescope were also reported. In mid-March 2022, further observations of FRB 20201124 were reported. In September 2022, astronomers suggested that the repeating FRB 20201124A may originate from a magnetar/Be star binary. 
2021 FRB 210401 On 2 and 3 April 2021, astronomers at the Australian Square Kilometre Array Pathfinder (ASKAP) reported the detection of FRB 20210401A and 20210402A which were understood likely to be repetitions of FRB 20201124A, a repeating FRB with recent very high burst activity, that was reported earlier by the CHIME/FRB collaboration. FRB 210630 On 30 June 2021, astronomers at the Molonglo Observatory Synthesis Telescope (UTMOST) detected FRB 210630A at the "likely" position of "RA = 17:23:07.4, DEC =+07:51:42, J2000". FRB 211211 On 15 December 2021, astronomers at the Neil Gehrels Swift Observatory reported further observations of the "bright CHIME FRB 20211122A (event #202020046 T0: 2021-12-11T16:58:05.183768)". 2022 FRB 220414 On 14 April 2022, astronomers at Tianlai Cylinder Pathfinder Array (a radio interferometer located in Xinjiang, China, operated by the National Astronomical Observatory, Chinese Academy of Sciences (NAOC)) detected FRB 220414 (?) ("A bright burst was detected with a S/N~15 for ~2.2 ms duration at UT 17:26:40.368, April 14, 2022 (MJD 59684.06018945136)") located at "RA = 13h04m21s(\pm 2m12s), DEC = +48\deg18'05"(\pm 10'19")". FRB 220610 On 19 October 2023, astronomers reported that FRB 20220610A traveled for 8 billion years to reach Earth equivalent at a redshift of making it the oldest FRB known and also calculated to be the most energetic one with a spectral energy density of ~erg/Hz and a maximum burst energy of ~erg higher than the previous predicted maximum energy for FRBs. In January 2024, further detailed observations and studies were reported. FRB 220912 On 15 October 2022, astronomers at CHIME/FRB reported the detection of nine bursts in three days of FRB 20220912A. Since later bursts observed between 15 October 2022 and 29 October 2022 by the CHIME/FRB collaboration, astronomers, afterwards, at the Allen Telescope Array (ATA), on 1 November 2022, reported eight more bursts from FRB 20220912A. ATA coordinates were first set to the original settings (23h09m05.49s + 48d42m25.6s) and then later to the newly updated ones (23h09m04.9s +48d42m25.4s). On 13 November 2022, further burst activity of FRB 20220912A was reported by the Tianlai Dish Pathfinder Array in Xinjiang, China and, on 5 December 2022, from several other observatories. On 13 December 2022, over a hundred bursts from FRB 220912A were reported by the Upgraded Giant Metrewave Radio Telescope (uGMRT), operated by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research in India. On 21 December 2022, several more bright bursts of FRB 220912A using the Westerbork-RT1 were reported. Four more bursts were reported on 13 July 2023 by the Medicina Radio Observatory (specifically by the Medicina Northern Cross (MNC) radio telescope) in Bologna, Italy. Based on four bursts, burst rate constraints of FRB 20220912A at various frequencies using the Green Bank 20-meter telescope were reported on 18 August 2023. Swift X-ray observations were reported on 1 September 2023. FRB 191221 On 13 July 2022, the discovery of an unusual FRB 20191221A detected by CHIME was reported. It is a multicomponent pulse (nine or more components) with peaks separated by 216.8ms and lasting an unusually long duration of three seconds. This is the first time such a periodic pulse was detected. FRB 221128 On 1 December 2022, astronomers reported the discovery of FRB 20221128A, using the UTMOST-NS radio telescope located in New South Wales, Australia. 
According to the astronomers, "The most likely position [of FRB 20221128A] is RA = 07:30(10), DEC = -41:32(1), J2000 which corresponds to Galactic coordinates: Gl = 177.1 deg, Gb = 24.45 deg". Later, on 19 January 2023, a corrected position [of FRB 20221128A] was reported as follows: "The revised FRB position is RA = 07:30(10), DEC = -42:30(1) in equatorial (J2000) coordinates, which corresponds to Galactic coordinates: Gl = 255.1 deg, Gb = -11.4 deg (we additionally note that the Galactic coordinates in ATel #15783 were in error)". FRB 221206 On 6 December 2022, detection of a possible magnetar gamma-ray burst at or near the same time and location as a fast radio burst was reported. 2023 FRB 230814 Discovery of FRB 20230814A by the Deep Synoptic Array (DSA-110) was reported on 16 August 2023, and it was determined to be localized (preliminarily) at 22h23m53.9s +73d01m33.3s (J2000). FRB 230905 Observations of FRB 20230905 in the X-ray and UV range by the Neil Gehrels Swift Observatory were reported as bright and non-repeating on 7 September 2023. 2024 FRB 240114 Discovery of a new repeating FRB 20240114A by the CHIME/FRB Collaboration (at position RA (J2000): 321.9162 +- 0.0087 deg, Dec (J2000): 4.3501 +- 0.0124 degrees) was reported on 26 January 2024. The three bursts from the FRB were detected at "2024-01-14 21:50:39, 2024-01-21 21:30:40, and 2024-01-24 21:20:11 UTC", and associated with a galaxy cluster at 425 Mpc. On 5 February 2024, observations of five repeated bursts of FRB 20240114A on 2 February 2024 were reported using the Parkes/Murriyang Ultra Wideband Low (UWL) receiver system. Also on 5 February 2024, an FRB detection was reported by the Westerbork RT1 25-m telescope. On 8 February 2024, related observations of FRB 20240114A were reported by FAST (38 bursts from 28 January to 4 February) and the Northern Cross Radio Telescope (1 burst on 1 February). Detection and localization studies of FRB 20240114A by MeerKAT in South Africa were reported on 14 February 2024. On 15 February 2024, 10 bursts were reported to have been detected on 1 February 2024 by the Giant Metrewave Radio Telescope (GMRT) in India. On 29 February 2024, 51 bursts (including micro-structure) observed on 25 February 2024 using the uGMRT were reported. On 5 March 2024, a "burst storm" was reported from FRB 20240114A by the FAST radio telescope. On 20 March 2024, the European VLBI Network (EVN) reported several detailed studies of FRB 20240114A, which included observations on 15 February 2024 (7 bursts) and 20 February 2024 (13 bursts). On 21 March 2024, the Northern Cross Radio Telescope in Italy reported a bright radio burst of FRB 20240114A, at updated coordinates of R.A.: 21:27:39.84, Dec: +04:19:46.34 (J2000), on 17 March 2024. On 2 April 2024, astronomers reported over 100 detections of FRB 20240114A using five small European radio telescopes. On 18 April 2024, a coincident gamma-ray emission was observed possibly associated with FRB 20240114A. On 23 April 2024, five repeat bursts from FRB 20240114A were reported to have been detected by the Nancay Radio Telescope at 2.5 GHz ("highest frequency to date") on 18 April 2024. On 25 April 2024, eight repeat bursts from FRB 20240114A were reported to have been detected by the Allen Telescope Array (ATA) at frequencies above 2.0 GHz. On 26 April 2024, no counterpart candidates (i.e., "no significance gamma-ray emission") from FRB 20240114A were reported to have been observed by Fermi-LAT.
On 4 May 2024, astronomers reported a redshift (ie, "a common redshift of z=0.1300+/-0002") for the FRB host galaxy, possibly a dwarf star-forming galaxy. Astronomers, on 15 May 2024, reported multiple burst detections of FRB 20240114A up to 6 GHz using the Effelsberg 100-m Radio Telescope. A gamma-ray flare associated with FRB 20240114A was reported on 25 May 2024. FRB 240216 Announcement of five bursts from FRB 20240216A, a new repeating fast radio burst source, detected by Australian SKA Pathfinder (ASKAP) at position (J2000) of RA: 10:12:19.9 DEC: +14:02:26, was reported on 22 February 2024. FAST, on February 24, 2024, reported no detection, with several explanations, of FRB 20240216A. List of notable bursts All FRBs are cataloged at TNS.
Physical sciences
Stellar astronomy
Astronomy
47959320
https://en.wikipedia.org/wiki/Hyaenodonta
Hyaenodonta
Hyaenodonta ("hyena teeth") is an extinct order of hypercarnivorous placental mammals of clade Pan-Carnivora from mirorder Ferae. Hyaenodonts were important mammalian predators that arose during the early Paleocene in Europe and persisted well into the late Miocene. Characteristics Hyaenodonts are characterized by long, often disproportionately large skulls, slender jaws, and slim bodies. They generally ranged in size from 30 to 140 cm at the shoulder. While Simbakubwa kutokaafrika may have been up to (surpassing the modern polar bear in size), this estimate is suspect due to being based on skull-body size ratios derived from felids, which have much smaller skulls for their body size. Other large hyaenodonts include two close and later-surviving relatives of Simbakubwa, Hyainailouros and Megistotherium (the latter likely being the largest in the group), and the much earlier-living Hyaenodon gigas (the largest species from genus Hyaenodon), which may have been as large as 1.4 m high at the shoulder, 3.0 m long and weighed about 330 kg. Most hyaenodonts, however, were in the 5–15 kg range, equivalent to a mid-sized dog. The anatomy of their skulls show that they had a particularly acute sense of smell, while their teeth were adapted for shearing, rather than crushing. Hyaenodonts were ancestrally plantigrade, but the later, larger forms were generally digitigrade or semidigitigrade. Because of their size range, it is probable that different species hunted in different ways, which allowed them to fill many different predatory niches, with small or medium-sized forms filling roles similar to mustelids or smaller felids of today while the larger forms functioned as apex predators focusing on larger prey, wielding their mighty jaws as their principal weapon as they lacked grasping forelimbs. The carnassials in a hyaenodonts are generally the second upper and third lower molars. However, some hyaenodonts possessed as many as three sequential pairs of carnassials or carnassial-like molar teeth in their jaws. Hyaenodonts, like all creodonts, lacked post-carnassial crushing molar teeth, such as those found in many carnivoran families, especially the Canidae and Ursidae, and thus lacked dental versatility for processing any foods other than meat. Hyaenodonts differed from Carnivora in that they replaced their deciduous dentition slower in development than carnivorans. Studies on Hyaenodon show that juveniles took 3 to 4 years in the last stage of tooth eruption, implying a very long adolescent phase. In North American forms, the first upper premolar erupts before the first upper molar, while European forms show an earlier eruption of the first upper molar. At least one hyaenodont lineage, subfamily Apterodontinae, was specialised for aquatic, otter-like habits. Range Having evolved in Europe during the Paleocene, hyaenodonts soon after spread into Africa and India, implying close biogeographical connections between these areas. Afterwards, they dispersed into Asia from either Europe or India, and finally, North America from either Europe or Asia. They were important hypercarnivores in Eurasia, Africa, and North America during the Oligocene, but declined towards the end of the epoch, with almost the entire order becoming extinct by the close of the Oligocene. 
Several representatives of this order, including hyainailourids Megistotherium, Simbakubwa, Hyainailouros, Sectisodon, Exiguodon, Sivapterodon, Metapterodon, and Isohyaenodon, the prionogalid Prionogale, the teratodontid Dissopsalis and the youngest species of genus Hyaenodon, H. weilini, survived into or evolved during the Miocene, of which, only Dissopsalis survived long enough to go extinct at the close of the Miocene. Traditionally, this has been attributed to competition with carnivorans, but no formal examination of the correlation between the decline of hyaenodonts and the expansion of carnivorans has been recorded, and the latter may simply have moved into vacant niches after the extinction of hyaenodont species. Classification and phylogeny Relations Hyaenodonts were considerably more widespread and successful than the oxyaenids, the other clade of mammals originally classified along with the hyaenodonts as part of Creodonta. In 2015 phylogenetic analysis of Paleogene mammals, by Halliday et al., monophyly of Creodonta was supported and was placed in the clade Ferae, closer to Pholidota than to Carnivora. However, order Creodonta is now considered to be a polyphyletic wastebasket taxon containing two unrelated clades assumed to be closely related (or ancestral) to Carnivora. Taxonomy ichnotaxa of Hyaenodonta: Ichnogenus: †Creodontipus Ichnogenus: †Dischidodacylus Ichnogenus: †Sarcotherichnus Ichnofamily: †Sarjeantipodidae Ichnogenus: †Hyaenodontipus Ichnogenus: †Quiritipes Ichnogenus: †Sarjeantipes |}
Biology and health sciences
Mammals: General
Animals
47962742
https://en.wikipedia.org/wiki/Tally%20marks
Tally marks
Tally marks, also called hash marks, are a form of numeral used for counting. They can be thought of as a unary numeral system. They are most useful in counting or tallying ongoing results, such as the score in a game or sport, as no intermediate results need to be erased or discarded. However, because of the length of large numbers, tallies are not commonly used for static text. Notched sticks, known as tally sticks, were also historically used for this purpose. Early history Counting aids other than body parts appear in the Upper Paleolithic. The oldest tally sticks date to between 35,000 and 25,000 years ago, in the form of notched bones found in the context of the European Aurignacian to Gravettian and in Africa's Late Stone Age. The so-called Wolf bone is a prehistoric artifact discovered in 1937 in Czechoslovakia during excavations at Dolní Věstonice, Moravia, led by Karl Absolon. Dated to the Aurignacian, approximately 30,000 years ago, the bone is marked with 55 marks which may be tally marks. The head of an ivory Venus figurine was excavated close to the bone. The Ishango bone, found in the Ishango region of the present-day Democratic Republic of Congo, is dated to over 20,000 years old. Upon discovery, it was thought to portray a series of prime numbers. In the book How Mathematics Happened: The First 50,000 Years, Peter Rudman argues that the development of the concept of prime numbers could only have come about after the concept of division, which he dates to after 10,000 BC, with prime numbers probably not being understood until about 500 BC. He also writes that "no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10." Alexander Marshack examined the Ishango bone microscopically, and concluded that it may represent a six-month lunar calendar. Clustering Tally marks are typically clustered in groups of five for legibility. The cluster size 5 has the advantages of (a) easy conversion into decimal for higher arithmetic operations and (b) avoiding error, as humans can far more easily correctly identify a cluster of 5 than one of 10. Writing systems Roman numerals, the Brahmi and Chinese numerals for one through three (一 二 三), and rod numerals were derived from tally marks, as possibly was the ogham script. Base 1 arithmetic notation system is a unary positional system similar to tally marks. It is rarely used as a practical base for counting due to its difficult readability. The numbers 1, 2, 3, 4, 5, 6 ... would be represented in this system as 1, 11, 111, 1111, 11111, 111111 ... Base 1 notation is widely used in type numbers of flour; the higher number represents a higher grind. Unicode In 2015, Ken Lunde and Daisuke Miura submitted a proposal to encode various systems of tally marks in the Unicode Standard. However, the box tally and dot-and-dash tally characters were not accepted for encoding, and only the five ideographic tally marks (正 scheme) and two Western tally digits were added to the Unicode Standard in the Counting Rod Numerals block in Unicode version 11.0 (June 2018). Only the tally marks for the numbers 1 and 5 are encoded, and tally marks for the numbers 2, 3 and 4 are intended to be composed from sequences of tally mark 1 at the font level.
Mathematics
Basics
null
31012716
https://en.wikipedia.org/wiki/Spongia%20officinalis
Spongia officinalis
Spongia officinalis, better known as a variety of bath sponge, is a commercially used sea sponge. Individuals grow in large lobes with small openings and are formed by a mesh of primary and secondary fibers. It is light grey to black in color. It is found throughout the Mediterranean Sea up to 100 meters deep on rocky or sandy surfaces. Spongia officinalis can reproduce both asexually, through budding or fragmentation, or sexually. Individuals can be dioecious or sequential hermaphrodites. The free-swimming larvae are lecithotrophic and grow slowly after attaching to a benthic surface. Humans use and interact with S. officinalis in a variety of ways. Harvested sponges have been used throughout history for many purposes, including washing and painting. Over-harvesting and sponge disease have led to a decrease in population. Sponge fishing practices have slowly changed over time as new technology has developed and sponge farming is now in use to decrease stress on wild S. officinalis populations. Sponge farming is also recommended as a solution to reducing marine organic pollution, especially from fish farms. Anatomy and morphology Spongia officinalis grows in massive, globular lobes with fine openings which are slightly elevated and have cone-shaped voids (conules). Oscula can either be scattered or at the tip of the lobes. Spongia officinalis have an ectosomal skeleton composed of primary and secondary fibers. Together, they form the conulose openings. The sponge also contains a choanosomal skeleton, which consists of a dense, irregular mesh of polygons formed by secondary fibers and primary fibers rise from it. The primary fibers are 50 to 100 nanometers in diameter and are composed of spongin and inclusions such as sand grains and spicules. The secondary fibers are 20 to 35 nanometers in diameter and are composed of only spongin without inclusions. Spongia officinalis is light grey to black in color. Distribution and habitat Spongia officinalis can be found in the Mediterranean Sea along the coasts of Croatia, Greece, the Aegean islands, Turkey, Cyprus, Syria, Egypt, Libya, Tunisia, Italy, France and Spain. They are distributed in shallow water (1 to 10 meters below the surface) down to 100 meters deep. They will grow on littoral rocky surfaces, sandy bottoms, and vertical walls in well-oxygenated water. Reproduction Spongia officinalis can reproduce asexually via budding or fragmentation. Sexual reproduction is also common in S. officinalis. Individuals can be dioecious, either male or female, or sequential hermaphrodites, meaning they can alternate between male and female. Successive hermaphroditism can take place within one reproductive season. Sperm is formed in spermatic cysts and is free spawned into the surrounding water. Sperm is captured by females and is transported to oocytes within the sponge where fertilization takes place. The occurrence of sexual reproduction peaks from October to November. There is no relationship between age and reproductive ability in S. officinalis. Life cycle After fertilization, S. officinalis embryos develop in choanosomal tissue of the female sponge. Cleavage of cells begins after fertilization, around November, and is total and equal. By May, a stereoblastula, or a blastula without a clear central cavity, forms. From May to July, parenchymella larva, or larva which is a mass of cells enveloped in flagellated cells, develop. These larvae are released from the adult from June to July. Like all sponges, S. 
officinalis larvae are lecithotrophic, meaning they cannot feed as larva and instead rely on energy reserves provided by the mother. Therefore, they only remain as a free-floating larva for a short period before settling on a benthic surface where they grow into an adult sponge. Taxonomy Spongia officinalis was first described by Carl Linnaeus in 1759. The common names "bath sponge," "Fina Dalmata," and "Matapas" are usually used to refer to this species. Human uses and interactions Uses The use of bath sponges for bathing and other purposes originated in Greece and spread all around Europe during the Middle Ages. From there, the use of sponges spread further, with Mediterranean bath sponges currently being shipped globally. S. officinalis was used by humans in many ways in the past. Aside from using the sponge for washing, some of these uses included padding in Roman soldiers helmets, as absorbent material during surgeries, as medicine to help digestive issues, and as a primitive "contraceptive sponge". Today, sponges are still used for washing and are also used for recreational purposes, like sponge painting. Fishing practices Sponge fishing in the Mediterranean has been in practice since ancient times. Aristotle even wrote of it around 350 BC. Traditionally, sponge fishing was practiced by Greeks who dove underwater to collect specimens. The practice remained this way until the late 19th century. There was a small increase in sponge fishing at the end of the 19th century due to the invention of a new diving suit, but the suit was not very safe so sponge fishing did not grow much in popularity. Around 1910 to 1930, an underwater breathing device was created and, since then, this method of sponge fishing has continually grown in popularity. Sponges can also be collected after they wash up on beaches or they can be fished from a boat. Farming As S. officinalis populations declined due to over-harvest, as discussed below, interest in cultivation increased. Towards the end of the 19th century, the first sponge farming attempts were made in the Mediterranean Sea by fixing sponge fragments onto wooden boxes and setting them into suitable habitats. Although the efforts were successful, sponge farming activity did not increase significantly until the end of the 20th century and currently, it is performed worldwide. Sponge farming not only decreases stress on S. officinalis populations, it also can be used as a sustainable method to reduce marine organic pollution because, sponges being filter feeders, they efficiently remove organic suspended particles from water. For this reason, sponge cultivation in combination with fish farming has been recommended as a method to reduce organic pollution from fish farms. Conservation status Over-harvesting and sponge disease have led to a decrease in Mediterranean S. officinalis populations. People have harvested sponges in the Mediterranean since ancient times. Growing demand has led to overexploitation of these sponges. Beginning in the 1980s, populations of S. officinalis in the Mediterranean have significantly declined. In addition to this, a sponge disease caused by pathogenic bacteria and fungi has further reduced populations. The bacteria and fungi destroy tissues and fibers of the sponges, making them weak. Due to the regenerative abilities of these sponges, they are able to set aside infected tissue and recover. 
However, when the effects of the disease are compounded by the effects of over-harvesting, populations struggle to recover, and local extinctions have occurred.
Biology and health sciences
Porifera
Animals
24956783
https://en.wikipedia.org/wiki/Matching%20polynomial
Matching polynomial
In the mathematical fields of graph theory and combinatorics, a matching polynomial (sometimes called an acyclic polynomial) is a generating function of the numbers of matchings of various sizes in a graph. It is one of several graph polynomials studied in algebraic graph theory. Definition Several different types of matching polynomials have been defined. Let G be a graph with n vertices and let m_k be the number of k-edge matchings. One matching polynomial of G is m_G(x) = Σ_{k≥0} m_k x^k. Another definition gives the matching polynomial as M_G(x) = Σ_{k≥0} (−1)^k m_k x^{n−2k}. A third definition is the polynomial μ_G(x, y) = Σ_{k≥0} m_k x^k y^{n−2k}. Each type has its uses, and all are equivalent by simple transformations. For instance, M_G(x) = x^n m_G(−x^{−2}) and μ_G(x, y) = y^n m_G(x y^{−2}). Connections to other polynomials The first type of matching polynomial is a direct generalization of the rook polynomial. The second type of matching polynomial has remarkable connections with orthogonal polynomials. For instance, if G = K_{m,n}, the complete bipartite graph (with m ≥ n), then the second type of matching polynomial is related to the generalized Laguerre polynomial L_n^α(x) by the identity M_{K_{m,n}}(x) = (−1)^n n! x^{m−n} L_n^{(m−n)}(x^2). If G is the complete graph K_n, then M_G(x) is an Hermite polynomial: M_{K_n}(x) = H_n(x), where H_n(x) is the "probabilist's Hermite polynomial" (definition (1) in the article on Hermite polynomials). These facts were observed by . If G is a forest, then its matching polynomial is equal to the characteristic polynomial of its adjacency matrix. If G is a path or a cycle, then M_G(x) is a Chebyshev polynomial. In this case μ_G(1, x) is a Fibonacci polynomial or Lucas polynomial respectively. Complementation The matching polynomial of a graph G with n vertices is related to that of its complement by a pair of (equivalent) formulas. One of them is a simple combinatorial identity due to . The other is an integral identity due to . There is a similar relation for a subgraph G of K_{m,n} and its complement in K_{m,n}. This relation, due to Riordan (1958), was known in the context of non-attacking rook placements and rook polynomials. Applications in chemical informatics The Hosoya index of a graph G, its number of matchings, is used in chemoinformatics as a structural descriptor of a molecular graph. It may be evaluated as m_G(1). The third type of matching polynomial was introduced by as a version of the "acyclic polynomial" used in chemistry. Computational complexity On arbitrary graphs, or even planar graphs, computing the matching polynomial is #P-complete. However, it can be computed more efficiently when additional structure about the graph is known. In particular, computing the matching polynomial on n-vertex graphs of treewidth k is fixed-parameter tractable: there exists an algorithm whose running time, for any fixed constant k, is a polynomial in n with an exponent that does not depend on k. The matching polynomial of a graph with n vertices and clique-width k may be computed in time n^{O(k)}.
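As a concrete check on the definitions above, the following Python sketch (illustrative only; the function name is invented for this example, and the brute-force enumeration is practical only for small graphs) counts the k-edge matchings of a graph and returns the coefficients m_k, from which any of the three polynomial forms can be assembled.

from itertools import combinations

def matching_counts(edges):
    """Return [m_0, m_1, ...], where m_k is the number of k-edge matchings."""
    counts = []
    k = 0
    while True:
        m_k = sum(
            1
            for subset in combinations(edges, k)
            # an edge subset is a matching when no vertex appears twice
            if len({v for e in subset for v in e}) == 2 * k
        )
        if k > 0 and m_k == 0:
            break
        counts.append(m_k)
        k += 1
    return counts

# For the 4-cycle C4 this prints [1, 4, 2], so the second form is
# M_G(x) = x^4 - 4x^2 + 2 = 2*T_4(x/2), consistent with the Chebyshev
# connection for cycles noted above.
print(matching_counts([(1, 2), (2, 3), (3, 4), (4, 1)]))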
Mathematics
Graph theory
null
23473595
https://en.wikipedia.org/wiki/Light-year
Light-year
A light-year, alternatively spelled light year (ly or lyr), is a unit of length used to express astronomical distances and is equal to exactly 9,460,730,472,580,800 m, which is approximately 5.88 trillion mi. As defined by the International Astronomical Union (IAU), a light-year is the distance that light travels in vacuum in one Julian year (365.25 days). Despite its inclusion of the word "year", the term should not be misinterpreted as a unit of time. The light-year is most often used when expressing distances to stars and other distances on a galactic scale, especially in non-specialist contexts and popular science publications. The unit most commonly used in professional astronomy is the parsec (symbol: pc, about 3.26 light-years). Definitions As defined by the International Astronomical Union (IAU), the light-year is the product of the Julian year (365.25 days, as opposed to the 365.2425-day Gregorian year or the 365.24219-day tropical year that both approximate) and the speed of light (299,792,458 m/s). Both of these values are included in the IAU (1976) System of Astronomical Constants, used since 1984. From this, the following conversions can be derived: 1 light-year = 9,460,730,472,580,800 metres (exactly) ≈ 9.461 petametres ≈ 9.461 trillion (short scale) kilometres (≈ 5.879 trillion miles) ≈ 63,241.077 astronomical units ≈ 0.306601 parsec. The abbreviation used by the IAU for light-year is "ly", although international standards like ISO 80000:2006 (now superseded) have used "l.y.", and localized abbreviations are frequent, such as "al" in French, Spanish, and Italian (from année-lumière, año luz and anno luce, respectively), "Lj" in German (from Lichtjahr), etc. Before 1984, the tropical year (not the Julian year) and a measured (not defined) speed of light were included in the IAU (1964) System of Astronomical Constants, used from 1968 to 1983. The product of Simon Newcomb's J1900.0 mean tropical year of 31,556,925.9747 ephemeris seconds and a speed of light of 299,792.5 km/s produced a light-year of 9.460530 × 10^15 m (rounded to the seven significant digits in the speed of light); this value, found in several modern sources, was probably derived from an old source such as C. W. Allen's 1973 Astrophysical Quantities reference work, which was updated in 2000, including the IAU (1976) value cited above (truncated to 10 significant digits). Other high-precision values are not derived from a coherent IAU system. A value of 9.460536207 × 10^15 m found in some modern sources is the product of a mean Gregorian year (365.2425 days or 31,556,952 s) and the defined speed of light (299,792,458 m/s). Another value, 9.460528405 × 10^15 m, is the product of the J1900.0 mean tropical year and the defined speed of light. Abbreviations used for light-years and multiples of light-years are: "ly" for one light-year; "kly" or "klyr" for a kilolight-year (1,000 light-years); "Mly" for a megalight-year (1,000,000 light-years); "Gly" or "Glyr" for a gigalight-year (1,000,000,000 light-years). History The light-year unit appeared a few years after the first successful measurement of the distance to a star other than the Sun, by Friedrich Bessel in 1838. The star was 61 Cygni, and he used a heliometer designed by Joseph von Fraunhofer. The largest unit for expressing distances across space at that time was the astronomical unit, equal to the radius of the Earth's orbit, about 150 million kilometres (93 million miles). In those terms, trigonometric calculations based on 61 Cygni's parallax of 0.314 arcseconds showed the distance to the star to be about 660,000 astronomical units. Bessel added that light takes 10.3 years to traverse this distance.
He recognized that his readers would enjoy the mental picture of the approximate transit time for light, but he refrained from using the light-year as a unit. He may have resisted expressing distances in light-years because it would reduce the accuracy of his parallax data due to multiplying with the uncertain parameter of the speed of light. The speed of light was not yet precisely known in 1838; the estimate of its value changed in 1849 (Fizeau) and 1862 (Foucault). It was not yet considered to be a fundamental constant of nature, and the propagation of light through the aether or space was still enigmatic. The light-year unit appeared in 1851 in a German popular astronomical article by Otto Ule. Ule explained the oddity of a distance unit name ending in "year" by comparing it to a walking hour (Wegstunde). A contemporary German popular astronomical book also noticed that light-year is an odd name. In 1868 an English journal labelled the light-year as a unit used by the Germans. Eddington called the light-year an inconvenient and irrelevant unit, which had sometimes crept from popular use into technical investigations. Although modern astronomers often prefer to use the parsec, light-years are also popularly used to gauge the expanses of interstellar and intergalactic space. Usage of term Distances expressed in light-years include those between stars in the same general area, such as those belonging to the same spiral arm or globular cluster. Galaxies themselves span from a few thousand to a few hundred thousand light-years in diameter, and are separated from neighbouring galaxies and galaxy clusters by millions of light-years. Distances to objects such as quasars and the Sloan Great Wall run up into the billions of light-years. Related units Distances between objects within a star system tend to be small fractions of a light-year, and are usually expressed in astronomical units. However, smaller units of length can similarly be formed usefully by multiplying units of time by the speed of light. For example, the light-second, useful in astronomy, telecommunications and relativistic physics, is exactly 299,792,458 metres, or 1/31,557,600 of a light-year. Units such as the light-minute, light-hour and light-day are sometimes used in popular science publications. The light-month, roughly one-twelfth of a light-year, is also used occasionally for approximate measures. The Hayden Planetarium specifies the light month more precisely as 30 days of light travel time. Light travels approximately one foot in a nanosecond; the term "light-foot" is sometimes used as an informal measure of time.
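The IAU definition above (Julian year times the defined speed of light) is simple enough to check numerically. The Python sketch below is a minimal illustration, not from any astronomy library; the constant names are my own, and only the defined values themselves come from the definitions discussed above.

SPEED_OF_LIGHT_M_PER_S = 299_792_458        # exact, by definition of the metre
JULIAN_YEAR_S = 31_557_600                  # 365.25 days of 86,400 s each
ASTRONOMICAL_UNIT_M = 149_597_870_700       # exact IAU 2012 definition of the au

# Integer arithmetic keeps the product exact: 9,460,730,472,580,800 m.
light_year_m = SPEED_OF_LIGHT_M_PER_S * JULIAN_YEAR_S

print(f"1 light-year   = {light_year_m:,} m")
print(f"               ≈ {light_year_m / 1e15:.3f} trillion km")
print(f"               ≈ {light_year_m / ASTRONOMICAL_UNIT_M:,.0f} astronomical units")
print(f"1 light-second = {SPEED_OF_LIGHT_M_PER_S:,} m (1/{JULIAN_YEAR_S:,} of a light-year)")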
Physical sciences
Length and distance
null
23476429
https://en.wikipedia.org/wiki/Intersection%20%28set%20theory%29
Intersection (set theory)
In set theory, the intersection of two sets A and B, denoted by A ∩ B, is the set containing all elements of A that also belong to B or, equivalently, all elements of B that also belong to A. Notation and terminology Intersection is written using the symbol "∩" between the terms; that is, A ∩ B, in infix notation. For example: {1, 2, 3} ∩ {2, 3, 4} = {2, 3}. The intersection of more than two sets (generalized intersection) can be written as ⋂_{i=1}^{n} A_i, which is similar to capital-sigma notation. For an explanation of the symbols used in this article, refer to the table of mathematical symbols. Definition The intersection of two sets A and B, denoted by A ∩ B, is the set of all objects that are members of both the sets A and B. In symbols: A ∩ B = {x : x ∈ A and x ∈ B}. That is, x is an element of the intersection A ∩ B if and only if x is both an element of A and an element of B. For example: The intersection of the sets {1, 2, 3} and {2, 3, 4} is {2, 3}. The number 9 is not in the intersection of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of odd numbers {1, 3, 5, 7, 9, 11, ...}, because 9 is not prime. Intersecting and disjoint sets We say that A intersects B if there exists some x that is an element of both A and B, in which case we also say that A intersects B at x. Equivalently, A intersects B if their intersection is an inhabited set, meaning that there exists some x such that x ∈ A ∩ B. We say that A and B are disjoint if A does not intersect B. In plain language, they have no elements in common. A and B are disjoint if their intersection is empty, denoted A ∩ B = ∅. For example, the sets {1, 2} and {3, 4} are disjoint, while the set of even numbers intersects the set of multiples of 3 at the multiples of 6. Algebraic properties Binary intersection is an associative operation; that is, for any sets A, B, and C, one has A ∩ (B ∩ C) = (A ∩ B) ∩ C. Thus the parentheses may be omitted without ambiguity: either of the above can be written as A ∩ B ∩ C. Intersection is also commutative. That is, for any A and B, one has A ∩ B = B ∩ A. The intersection of any set with the empty set results in the empty set; that is, for any set A, A ∩ ∅ = ∅. Also, the intersection operation is idempotent; that is, any set A satisfies A ∩ A = A. All these properties follow from analogous facts about logical conjunction. Intersection distributes over union and union distributes over intersection. That is, for any sets A, B, and C, one has A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) and A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C). Inside a universe U, one may define the complement Aᶜ of A to be the set of all elements of U not in A. Furthermore, the intersection of A and B may be written as the complement of the union of their complements, derived easily from De Morgan's laws: A ∩ B = (Aᶜ ∪ Bᶜ)ᶜ. Arbitrary intersections The most general notion is the intersection of an arbitrary collection of sets. If M is a nonempty set whose elements are themselves sets, then x is an element of the intersection of M if and only if, for every element A of M, x is an element of A. In symbols: (x ∈ ⋂M) ⇔ (∀ A ∈ M, x ∈ A). The notation for this last concept can vary considerably. Set theorists will sometimes write "⋂M", while others will instead write "⋂_{A∈M} A". The latter notation can be generalized to "⋂_{i∈I} A_i", which refers to the intersection of the collection {A_i : i ∈ I}. Here I is a nonempty set, and A_i is a set for every i in I. In the case that the index set I is the set of natural numbers, notation analogous to that of an infinite product may be seen: ⋂_{i=1}^{∞} A_i. When formatting is difficult, this can also be written "A_1 ∩ A_2 ∩ A_3 ∩ ...". This last example, an intersection of countably many sets, is actually very common; for an example, see the article on σ-algebras. Nullary intersection In the previous section, we excluded the case where M was the empty set (∅). The reason is as follows: The intersection of the collection M is defined as the set (see set-builder notation) ⋂M = {x : x ∈ A for all A ∈ M}. If M is empty, there are no sets A in M, so the question becomes "which x's satisfy the stated condition?" The answer seems to be every possible x.
When M is empty, the condition given above is an example of a vacuous truth. So the intersection of the empty family should be the universal set (the identity element for the operation of intersection), but in standard (ZF) set theory, the universal set does not exist. However, when restricted to the context of subsets of a given fixed set U, the notion of the intersection of an empty collection of subsets of U is well-defined. In that case, if M is empty, its intersection is ⋂M = ⋂∅ = {x ∈ U : x ∈ A for all A ∈ ∅}. Since all x in U vacuously satisfy the required condition, the intersection of the empty collection of subsets of U is all of U. In formulas, ⋂∅ = U. This matches the intuition that as collections of subsets become smaller, their respective intersections become larger; in the extreme case, the empty collection has an intersection equal to the whole underlying set. Also, in type theory x is of a prescribed type τ, so the intersection is understood to be of type set τ (the type of sets whose elements are in τ), and we can define ⋂_{A∈∅} A to be the universal set of set τ (the set whose elements are exactly all terms of type τ).
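Python's built-in set type implements these operations directly. The sketch below is plain standard-library code (the helper name and the example sets are invented for illustration); it shows binary intersection, an arbitrary intersection over a nonempty family, and the nullary case handled relative to an explicit universe, as discussed above.

from functools import reduce

A = {1, 2, 3}
B = {2, 3, 4}
print(A & B)                               # {2, 3} -- the binary intersection A ∩ B

# Arbitrary intersection of a nonempty family of sets.
family = [{1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 5, 6}]
print(reduce(set.intersection, family))    # {3, 4}

def intersection(family, universe):
    """Intersection of a family of subsets of `universe`.

    For an empty family the result is the whole universe, matching the
    nullary-intersection convention described above (vacuous truth).
    """
    result = set(universe)
    for s in family:
        result &= s
    return result

print(intersection([], {1, 2, 3, 4, 5}))       # {1, 2, 3, 4, 5}
print(intersection([A, B], {1, 2, 3, 4, 5}))   # {2, 3}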
Mathematics
Discrete mathematics
null
23476797
https://en.wikipedia.org/wiki/Social%20anxiety%20disorder
Social anxiety disorder
Social anxiety disorder (SAD), also known as social phobia, is an anxiety disorder characterized by sentiments of fear and anxiety in social situations, causing considerable distress and impairing ability to function in at least some aspects of daily life. These fears can be triggered by perceived or actual scrutiny from others. Individuals with social anxiety disorder fear negative evaluations from other people. Physical symptoms often include excessive blushing, excessive sweating, trembling, palpitations, rapid heartbeat, muscle tension, shortness of breath, and nausea. Stammering may be present, along with rapid speech. Panic attacks can also occur under intense fear and discomfort. Some affected individuals may use alcohol or other drugs to reduce fears and inhibitions at social events. It is common for those with social phobia to self-medicate in this fashion, especially if they are undiagnosed, untreated, or both; this can lead to alcohol use disorder, eating disorders or other kinds of substance use disorders. SAD is sometimes referred to as an illness of lost opportunities where "individuals make major life choices to accommodate their illness". According to ICD-10 guidelines, the main diagnostic criteria of social phobia are fear of being the focus of attention, or fear of behaving in a way that will be embarrassing or humiliating, avoidance and anxiety symptoms. Standardized rating scales can be used to screen for social anxiety disorder and measure the severity of anxiety. The first line of treatment for social anxiety disorder is cognitive behavioral therapy (CBT). Medications such as SSRIs are effective for social phobia, especially paroxetine. CBT is effective in treating this disorder, whether delivered individually or in a group setting. The cognitive and behavioral components seek to change thought patterns and physical reactions to anxiety-inducing situations. The attention given to social anxiety disorder has significantly increased since 1999 with the approval and marketing of drugs for its treatment. Prescribed medications include several classes of antidepressants: selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), and monoamine oxidase inhibitors (MAOIs). Other commonly used medications include beta blockers and benzodiazepines. History Literary descriptions of shyness can be traced back to the days of Hippocrates around 400 B.C. Hippocrates described someone who "through bashfulness, suspicion, and timorousness, will not be seen abroad; loves darkness as life and cannot endure the light or to sit in lightsome places; his hat still in his eyes, he will neither see, nor be seen by his good will. He dare not come in company for fear he should be misused, disgraced, overshoot himself in gesture or speeches, or be sick; he thinks every man observes him." The first mention of the psychiatric term "social phobia" (phobie des situations sociales) was made in the early 1900s. Psychologists used the term "social neurosis" to describe extremely shy patients in the 1930s. After extensive work by Joseph Wolpe on systematic desensitization, research on phobias and their treatment grew. The idea that social phobia was a separate entity from other phobias came from the British psychiatrist Isaac Marks in the 1960s. This was accepted by the American Psychiatric Association and was first officially included in the third edition of the Diagnostic and Statistical Manual of Mental Disorders. 
The definition of the phobia was revised in 1989 to allow comorbidity with avoidant personality disorder and introduced generalized social phobia. Social phobia had been largely ignored prior to 1985. After a call to action by psychiatrist Michael Liebowitz and clinical psychologist Richard Heimberg, there was an increase in attention to and research on the disorder. The DSM-IV gave social phobia the alternative name "social anxiety disorder". Research on the psychology and sociology of everyday social anxiety continued. Cognitive behavioural models and therapies were developed for social anxiety disorder. In the 1990s, paroxetine became the first prescription drug in the US approved to treat social anxiety disorder, with others following. Signs and symptoms The 10th version of the International Classification of Diseases (ICD-10) classifies social anxiety as a mental and behavioral disorder. Cognitive aspects In cognitive models of social anxiety disorder, those with social phobias experience dread over how they will present to others. They may feel overly self-conscious, pay high self-attention after the activity, or have high performance standards for themselves. According to the social psychology theory of self-presentation, an affected person attempts to create a well-mannered impression towards others but believes they are unable to do so. Many times, before the potentially anxiety-provoking social situation, they may deliberately review what could go wrong and how to deal with each unexpected case. After the event, they may have the perception that they performed unsatisfactorily. Consequently, they will perceive anything that may have possibly been abnormal as embarrassing. These thoughts may extend for weeks or longer. Cognitive distortions are a hallmark and are learned about in CBT (cognitive-behavioral therapy). Thoughts are often self-defeating and inaccurate. Those with social phobia tend to interpret neutral or ambiguous conversations with a negative outlook and many studies suggest that socially anxious individuals remember more negative memories than those less distressed. Behavioural aspects Social anxiety disorder is a persistent fear of one or more situations in which the person is exposed to possible scrutiny by others and fears that they may do something or act in a way that will be humiliating or embarrassing. It exceeds normal "shyness" as it leads to excessive social avoidance and substantial social or occupational impairment. Feared activities may include almost any type of social interaction, especially small groups, dating, parties, talking to strangers, restaurants, interviews, etc. Those who have social anxiety disorder fear being judged by others in society. In particular, individuals with social anxiety are nervous in the presence of people with authority and feel uncomfortable during physical examinations. People who have this disorder may behave a certain way or say something and then feel embarrassed or humiliated after. As a result, they often choose to isolate themselves from society to avoid such situations. They may also feel uncomfortable meeting people they do not know and act distant when they are with large groups of people. In some cases, they may show evidence of this disorder by avoiding eye contact, or blushing when someone is talking to them. According to psychologist B. F. Skinner, phobias are controlled by escape and avoidance behaviors. 
Major avoidance behaviors could include an almost pathological or compulsive lying behavior to preserve self-image and avoid judgment in front of others. Minor avoidance behaviors are exposed when a person avoids eye contact and crosses his or her arms to conceal recognizable shaking. A fight-or-flight response is then triggered in such events. Physiological aspects Physiological effects, similar to those in other anxiety disorders, are present in social phobias. In adults, it may cause tears as well as excessive sweating, nausea, difficulty breathing, shaking, and palpitations as a result of the fight-or-flight response. The walk disturbance (where a person is so worried about how they walk that they may lose balance) may appear, especially when passing a group of people. Blushing is commonly exhibited by individuals with social phobia. These visible symptoms further reinforce the anxiety in the presence of others. A 2006 study found that the area of the brain called the amygdala, part of the limbic system, is hyperactive when patients are shown threatening faces or confronted with frightening situations. They found that patients with more severe social phobia showed a correlation with increased response in their amygdalae. People with SAD may avoid looking at other people, and even their surroundings, to a greater extent than their peers, possibly to decrease the risk of eye contact, which can be interpreted as a nonverbal signal of openness to social interaction. Social aspects People with SAD avoid situations that most people consider normal. They may have a hard time understanding how others can handle these situations so easily. People with SAD avoid all or most social situations and hide from others, which can affect their personal relationships. Social phobia can completely remove people from social situations due to the irrational fear of these situations. People with SAD may be addicted to social media networks, have sleep deprivation, and feel good when they avoid human interactions. SAD can also lead to low self-esteem, negative thoughts, major depressive disorder, sensitivity to criticism, and poor social skills that do not improve. People with SAD experience anxiety in a variety of social situations, from important, meaningful encounters, to everyday trivial ones. These people may feel more nervous in job interviews, dates, interactions with authority, or at work. Problematic digital media use Comorbidity SAD shows a high degree of co-occurrence with other psychiatric disorders. In fact, a population-based study found that 66% of those with SAD had one or more additional mental health disorders. SAD often occurs alongside low self-esteem and most commonly clinical depression, perhaps due to a lack of personal relationships and long periods of isolation related to social avoidance. Clinical depression is 1.49 to 3.5 times more likely to occur in those with SAD. Research also indicates that the presence of certain social fears (e.g., avoidance of participating in small groups, avoidance of going to a party) are more likely to trigger comorbid depressive symptoms than other social fears, and thus deserve a very careful audit during clinical assessment among patients with SAD. Anxiety disorders other than SAD are also very common in patients with SAD, in particular generalized anxiety disorder. Avoidant personality disorder is likewise highly correlated with SAD, with comorbidity rates ranging from 25% to 89%. 
To try to reduce their anxiety and alleviate depression, people with social phobia may use alcohol or other drugs, which can lead to substance use disorders. It is estimated that one-fifth of patients with social anxiety disorder also have alcohol use disorder. However, some research suggests SAD is unrelated to, or even protective against alcohol-related problems. Those who have both alcohol use disorder and social anxiety disorder are more likely to avoid group-based treatments and to relapse compared to people who do not have this combination. Causes Research into the causes of social anxiety and social phobia is wide-ranging, encompassing multiple perspectives from neuroscience to sociology. Scientists have yet to pinpoint the exact causes. Studies suggest that genetics can play a part in combination with environmental factors. Social phobia is not caused by other mental disorders or substance use. Generally, social anxiety begins at a specific point in an individual's life. This will develop over time as the person struggles to recover. Eventually, mild social awkwardness can develop into symptoms of social anxiety or phobia. Passive social media usage may cause social anxiety in some people. Genetics It has been shown that there is a two to a threefold greater risk of having social phobia if a first-degree relative also has the disorder. This could be due to genetics and/or due to children acquiring social fears and avoidance through processes of observational learning or parental psychosocial education. Studies of identical twins brought up (via adoption) in different families have indicated that, if one twin developed social anxiety disorder, then the other was between 30 percent and 50 percent more likely than average to also develop the disorder. To some extent, this "heritability" may not be specific – for example, studies have found that if a parent has any kind of anxiety disorder or clinical depression, then a child is somewhat more likely to develop an anxiety disorder or social phobia. Studies suggest that parents of those with social anxiety disorder tend to be more socially isolated themselves, and shyness in adoptive parents is significantly correlated with shyness in adopted children. Growing up with overprotective and hypercritical parents has also been associated with social anxiety disorder. Adolescents who were rated as having an insecure (anxious-ambivalent) attachment with their mother as infants were twice as likely to develop anxiety disorders by late adolescence, including social phobia. A related line of research has investigated 'behavioural inhibition' in infants – early signs of an inhibited and introspective or fearful nature. Studies have shown that around 10–15 percent of individuals show this early temperament, which appears to be partly due to genetics. Some continue to show this trait into adolescence and adulthood and appear to be more likely to develop a social anxiety disorder. Social experiences A previous negative social experience can be a trigger to social phobia, perhaps particularly for individuals high in "interpersonal sensitivity". For around half of those diagnosed with social anxiety disorder, a specific traumatic or humiliating social event appears to be associated with the onset or worsening of the disorder; this kind of event appears to be particularly related to specific social phobia, for example, regarding public speaking. As well as direct experiences, observing or hearing about the socially negative experiences of others (e.g. 
a faux pas committed by someone), or verbal warnings of social problems and dangers, may also make the development of a social anxiety disorder more likely. Social anxiety disorder may be caused by the longer-term effects of not fitting in, or being bullied, rejected, or ignored. Shy adolescents or avoidant adults have emphasized unpleasant experiences with peers or childhood bullying or harassment. In one study, popularity was found to be negatively correlated with social anxiety, and children who were neglected by their peers reported higher social anxiety and fear of negative evaluation than other categories of children. Socially phobic children appear less likely to receive positive reactions from peers, and anxious or inhibited children may isolate themselves. Cultural influences Cultural factors that have been related to social anxiety disorder include a society's attitude towards shyness and avoidance, affecting the ability to form relationships or access employment or education, and shame. One study found that the effects of parenting are different depending on the culture: American children appear more likely to develop social anxiety disorder if their parents emphasize the importance of others' opinions and use shame as a disciplinary strategy, but this association was not found for Chinese/Chinese-American children. In China, research has indicated that shy-inhibited children are more accepted than their peers and more likely to be considered for leadership and considered competent, in contrast to the findings in Western countries. Purely demographic variables may also play a role. Problems in developing social skills, or 'social fluency', may be a cause of some social anxiety disorder, through either inability or lack of confidence to interact socially and gain positive reactions and acceptance from others. The studies have been mixed, however, with some studies not finding significant problems in social skills while others have. What does seem clear is that the socially anxious perceive their own social skills to be low. It may be that the increasing need for sophisticated social skills in forming relationships or careers, and an emphasis on assertiveness and competitiveness, is making social anxiety problems more common, at least among the 'middle classes'. An interpersonal or media emphasis on 'normal' or 'attractive' personal characteristics has also been argued to fuel perfectionism and feelings of inferiority or insecurity regarding negative evaluation from others. The need for social acceptance or social standing has been elaborated in other lines of research relating to social anxiety. Substance-induced While alcohol initially relieves social phobia, excessive alcohol misuse can worsen social phobia symptoms and cause panic disorder to develop or worsen during alcohol intoxication and especially during alcohol withdrawal syndrome. This effect is not unique to alcohol but can also occur with long-term use of drugs that have a similar mechanism of action to alcohol such as the benzodiazepines which are sometimes prescribed as tranquillisers. Benzodiazepines possess anti-anxiety properties and can be useful for the short-term treatment of severe anxiety. Like the anticonvulsants, they tend to be mild and well-tolerated, although there is a risk of habit-forming. Benzodiazepines are usually administered orally for the treatment of anxiety; however, occasionally lorazepam or diazepam may be given intravenously for the treatment of panic attacks. 
The World Council of Anxiety does not recommend benzodiazepines for the long-term treatment of anxiety due to a range of problems associated with long-term use including tolerance, psychomotor impairment, cognitive and memory impairments, physical dependence and a benzodiazepine withdrawal syndrome upon discontinuation of benzodiazepines. Despite increasing focus on the use of antidepressants and other agents for the treatment of anxiety, benzodiazepines have remained a mainstay of anxiolytic pharmacotherapy due to their robust efficacy, rapid onset of therapeutic effect, and generally favorable side effect profile. Treatment patterns for psychotropic drugs appear to have remained stable over the past decade, with benzodiazepines being the most commonly used medication for panic disorder. Many people who are addicted to alcohol or prescribed benzodiazepines when it is explained to them they have a choice between ongoing ill mental health or quitting and recovering from their symptoms decide on quitting alcohol or their benzodiazepines. Symptoms may temporarily worsen however, during alcohol withdrawal or benzodiazepine withdrawal. Psychological factors Research has indicated the role of 'core' or 'unconditional' negative beliefs (e.g. "I am inept") and 'conditional' beliefs nearer to the surface (e.g. "If I show myself, I will be rejected"). They are thought to develop based on personality and adverse experiences and to be activated when the person feels under threat. Recent research has also highlighted that conditional beliefs may also be at play (e.g., "If people see I'm anxious, they'll think that I'm weak"). A secondary factor is self-concealment which involves concealing the expression of one's anxiety or its underlying beliefs. One line of work has focused more specifically on the key role of self-presentational concerns. The resulting anxiety states are seen as interfering with social performance and the ability to concentrate on interaction, which in turn creates more social problems, which strengthens the negative schema. Also highlighted has been a high focus on and worry about anxiety symptoms themselves and how they might appear to others. A similar model emphasizes the development of a distorted mental representation of the self and overestimates of the likelihood and consequences of negative evaluation, and of the performance standards that others have. Such cognitive-behavioral models consider the role of negatively biased memories of the past and the processes of rumination after an event, and fearful anticipation before it. Studies have also highlighted the role of subtle avoidance and defensive factors, and shown how attempts to avoid feared negative evaluations or use of "safety behaviors" can make social interaction more difficult and the anxiety worse in the long run. This work has been influential in the development of cognitive behavioral therapy for social anxiety disorder, which has been shown to have efficacy. Mechanisms There are many studies investigating neural bases of social anxiety disorder. Although the exact neural mechanisms have not been found yet, there is evidence relating social anxiety disorder to imbalance in some neurochemicals and hyperactivity in some brain areas. Neurotransmitters Sociability is closely tied to dopaminergic neurotransmission. In a 2011 study, a direct relation between social status of volunteers and binding affinity of dopamine D2/3 receptors in the striatum was found. 
Other research shows that the binding affinity of dopamine D2 receptors in the striatum of people with social anxiety is lower than in controls. Some other research shows an abnormality in dopamine transporter density in the striatum of those with social anxiety. However, some researchers have been unable to replicate previous findings of evidence of dopamine abnormality in social anxiety disorder. Studies have shown high prevalence of social anxiety in Parkinson's disease and schizophrenia. In a recent study, social phobia was diagnosed in 50% of Parkinson's disease patients. Other researchers have found social phobia symptoms in patients treated with dopamine antagonists like haloperidol, emphasizing the role of dopamine neurotransmission in social anxiety disorder. Some evidence points to the possibility that social anxiety disorder involves reduced serotonin receptor binding. A recent study reports increased serotonin transporter binding in psychotropic medication-naive patients with generalized social anxiety disorder. Although there is little evidence of abnormality in serotonin neurotransmission, the limited efficacy of medications which affect serotonin levels may indicate the role of this pathway. Paroxetine, sertraline and fluvoxamine are three SSRIs that have been approved by the FDA to treat social anxiety disorder. Some researchers believe that SSRIs decrease the activity of the amygdala. There is also increasing focus on other candidate transmitters, e.g. norepinephrine and glutamate, which may be over-active in social anxiety disorder, and the inhibitory transmitter GABA, which may be under-active in the thalamus. Brain areas The amygdala is part of the limbic system which is related to fear cognition and emotional learning. Individuals with social anxiety disorder have been found to have a hypersensitive amygdala; for example in relation to social threat cues (e.g. perceived negative evaluation by another person), angry or hostile faces, and while waiting to give a speech. Recent research has also indicated that another area of the brain, the anterior cingulate cortex, which was already known to be involved in the experience of physical pain, also appears to be involved in the experience of 'social pain', for example perceiving group exclusion. Recent research also highlighted the potent role of the prefrontal cortex, especially its dorsolateral part, in the maintenance of cognitive biases involved in SAD. A 2007 meta-analysis also found that individuals with social anxiety had hyperactivation in the amygdala and insula areas which are frequently associated with fear and negative emotional processing. Diagnosis ICD-10 defines social phobia as fear of scrutiny by other people leading to avoidance of social situations. The anxiety symptoms may present as a complaint of blushing, hand tremor, nausea, or urgency of urination. Symptoms may progress to panic attacks. Standardized rating scales such as the Social Phobia Inventory, the SPAI-B, Liebowitz Social Anxiety Scale, and the Social Interaction Anxiety Scale can be used to screen for social anxiety disorder and measure the severity of anxiety. DSM-5 Diagnosis DSM-5 defines Social Anxiety Disorder as a marked, or intense, fear or anxiety of social situations in which the individual may be scrutinized by others. DSM-5 Diagnostic Criteria with Diagnostic Features: Marked fear or anxiety about one or more social situations in which the individual is exposed to possible scrutiny by others. 
Examples include social interactions (e.g., having a conversation, meeting unfamiliar people), being observed (e.g., eating or drinking), and performing in front of others (e.g., giving a speech). Note: In children, the anxiety must occur in peer settings and not just during interactions with adults. The individual fears that he or she will act in a way or show anxiety symptoms that will be negatively evaluated (i.e., will be humiliating or embarrassing: will lead to rejection or offend others). When exposed to such social situations, the individual fears that they will be negatively evaluated. The individual is concerned that they will be judged as anxious, weak, crazy, stupid, boring, intimidating, dirty, or unlikable. The individual fears that they will act or appear in a certain way or show anxiety symptoms, such as blushing, trembling, sweating, stumbling over one's words, or staring, that will be negatively evaluated by others. The social situations almost always provoke fear or anxiety. Thus, an individual who becomes anxious only occasionally in the social situation(s) would not be diagnosed with social anxiety disorder. Note: In children, the fear or anxiety may be expressed by crying, tantrums, freezing, clinging, shrinking, or failing to speak in social situations. The social situations are avoided. Alternatively, the situations are endured with intense fear or anxiety. The fear or anxiety is out of proportion to the actual threat posed by the social situation and to the sociocultural context. The fear or anxiety is judged to be out of proportion to the actual risk of being negatively evaluated or to the consequences of such negative evaluation. Sometimes, the anxiety may not be judged to be excessive, because it is related to an actual danger (e.g., being bullied or tormented by others). However, individuals with social anxiety disorder often overestimate the negative consequences of social situations, and thus the judgment of being out of proportion is made by the clinician. The fear, anxiety, or avoidance is persistent, typically lasting for 6 months or more. This duration threshold helps distinguish the disorder from transient social fears that are common, particularly among children and in the community. However, the duration criterion should be used as a general guide, with allowance for some degree of flexibility. The fear, anxiety, or avoidance causes clinically significant distress or impairment in social, occupational, or other important areas of functioning. The fear, anxiety, and avoidance must interfere significantly with the individual's normal routine, occupational or academic functioning, or social activities or relationships, or must cause clinically significant distress or impairment in social, occupational, or other important areas of functioning. For example, an individual who is afraid to speak in public would not receive a diagnosis of social anxiety disorder if this activity is not routinely encountered on the job or in classroom work, and if the individual is not significantly distressed about it. However, if the individual avoids, or is passed over for, the job or education they really want because of social anxiety symptoms, criterion is met. The fear, anxiety, or avoidance is not attributable to the physiological effects of a substance (e.g., an addictive substance, a medication) or another medical condition. 
The fear, anxiety, or avoidance is not better explained by the symptoms of another mental disorder, such as panic disorder, body dysmorphic disorder, or autism spectrum disorder. If another medical condition (e.g., Parkinson disease, obesity, disfigurement from burns or injury) is present, the fear, anxiety, or avoidance is clearly unrelated or is excessive. If the fear is restricted to speaking or performing in public it is performance only social anxiety disorder. Differential diagnosis The DSM-IV criteria stated that an individual cannot receive a diagnosis of social anxiety disorder if their symptoms are better accounted for by one of the autism spectrum disorders such as autism and Asperger syndrome. Because of its close relationship and overlapping symptoms, treating people with social phobia may help understand the underlying connections to other mental disorders. Social anxiety disorder is often linked to bipolar disorder and attention deficit hyperactivity disorder (ADHD) and some believe that they share an underlying cyclothymic-anxious-sensitive disposition. The co-occurrence of ADHD and social phobia is very high, especially when CDS symptoms are present. Prevention Prevention of anxiety disorders is one focus of research. Use of CBT and related techniques may decrease the number of children with social anxiety disorder following completion of prevention programs. Treatment Psychotherapies The first-line treatment for social anxiety disorder is cognitive behavioral therapy (CBT), with medications such as selective serotonin reuptake inhibitors (SSRIs) used only in those who are not interested in therapy. According to research studies, combining the use of CBT with escitalopram (a type of SSRI) in contrast to using CBT with a placebo reduced anticipatory speech-state anxiety and increased reductions of social anxiety symptoms, revealing the potential of combining various treatment methods. Self-help based on principles of CBT is a second-line treatment. There is some emerging evidence for the use of acceptance and commitment therapy (ACT) in the treatment of social anxiety disorder. ACT is considered an offshoot of traditional CBT and emphasizes accepting unpleasant symptoms rather than fighting against them, as well as psychological flexibility – the ability to adapt to changing situational demands, to shift one's perspective, and to balance competing desires. ACT may be useful as a second line treatment for this disorder in situations where CBT is ineffective or refused. Some studies have suggested social skills training (SST) can help with social anxiety. Examples of social skills focused on during SST for social anxiety disorder include: initiating conversations, establishing friendships, interacting with members of the preferred sex, constructing a speech and assertiveness skills. However, it is not clear whether specific social skills techniques and training are required, rather than just support with general social functioning and exposure to social situations. There is some evidence that expressive therapies (e.g. painting, drawing or musical therapy) can be effective for treating social anxiety disorder in certain contexts. A 2019 study, for example, found that art therapy produced an "increase in subjective quality of life (both with large effects) and an improvement in accessibility of emotion regulation strategies" in adult women with anxiety. Both VAGA and the American Art Therapy Association run specific workshops for social anxiety disorder. 
Furthermore, error-related brain activity varies in accordance to factors that affect the motivational significance of behavioural performance, such as social contexts and personality traits, suggesting that understanding how individuals appraise the relevance of incentives in a given context is crucial for designing interventions to ameliorate or prevent maladaptive patterns of performance evaluation, particularly with regards to social anxiety disorder and substance abuse. Given the evidence that social anxiety disorder may predict subsequent development of other psychiatric disorders such as depression, early diagnosis and treatment is important. Social anxiety disorder remains under-recognized in primary care practice, with patients often presenting for treatment only after the onset of complications such as clinical depression or substance use disorders. Medications SSRIs Selective serotonin reuptake inhibitors (SSRIs), a class of antidepressants, are the first choice of medication for generalized social phobia but a second-line treatment. Compared to older forms of medication, there is less risk of tolerability and drug dependency associated with SSRIs. Paroxetine and paroxetine CR, sertraline, escitalopram, venlafaxine XR and fluvoxamine CR (Luvox CR) are all approved for SAD and are all effective for it, especially paroxetine. All SSRIs are somewhat effective for social anxiety except fluoxetine which was equivalent to placebo in all clinical trials. Paroxetine was able to change personality and significantly increase extraversion. In a 1995 double-blind, placebo-controlled trial, the SSRI paroxetine was shown to result in clinically meaningful improvement in 55% of patients with generalized social anxiety disorder, compared with 23.9% of those taking placebo. An October 2004 study yielded similar results. Patients were treated with either fluoxetine, psychotherapy, or a placebo. The first four sets saw improvement in 50.8 to 54.2 percent of the patients. Of those assigned to receive only a placebo, 31.7% achieved a rating of 1 or 2 on the Clinical Global Impression-Improvement scale. Those who sought both therapy and medication did not see a boost in improvement. In double-blind, placebo-controlled trials other SSRIs like fluvoxamine, escitalopram and sertraline showed reduction of social anxiety symptoms, including anxiety, sensitivity to rejection and hostility. Citalopram also appears to be effective. General side-effects are common during the first weeks while the body adjusts to the drug. Symptoms may include headaches, nausea, insomnia and changes in sexual behavior. Treatment safety during pregnancy has not been established. In late 2004 much media attention was given to a proposed link between SSRI use and suicidality [a term that encompasses suicidal ideation and attempts at suicide as well as suicide]. For this reason, [although evidential causality between SSRI use and actual suicide has not been demonstrated] the use of SSRIs in pediatric cases of depression is now recognized by the Food and Drug Administration as warranting a cautionary statement to the parents of children who may be prescribed SSRIs by a family doctor. Recent studies have shown no increase in rates of suicide. These tests, however, represent those diagnosed with depression, not necessarily with social anxiety disorder. In addition, studies show that more socially phobic patients treated with anti-depressant medication develop hypomania than non-phobic controls. 
The hypomania can be seen as the medication creating a new problem. Other drugs Other prescription drugs are also used, if other methods are not effective. Before the introduction of SSRIs, monoamine oxidase inhibitors (MAOIs) such as phenelzine were frequently used in the treatment of social anxiety. Evidence continues to indicate that MAOIs are effective in the treatment and management of social anxiety disorder and they are still used, but generally only as a last resort medication, owing to concerns about dietary restrictions, possible adverse drug interactions and a recommendation of multiple doses per day. A newer type of this medication, reversible inhibitors of monoamine oxidase subtype A (RIMAs) such as the drug moclobemide, bind reversibly to the MAO-A enzyme, greatly reducing the risk of hypertensive crisis with dietary tyramine intake. However, RIMAs have been found to be less efficacious for social anxiety disorder than irreversible MAOIs like phenelzine. Benzodiazepines are an alternative to SSRIs. These drugs' recommended usage is for short-term relief, meaning a limited time frame of over a year, of severe, disabling anxiety. Although benzodiazepines are still sometimes prescribed for long-term everyday use in some countries, there is concern over the development of drug tolerance, dependency and misuse. It has been recommended that benzodiazepines be considered only for individuals who fail to respond to other medications. Benzodiazepines augment the action of GABA, the major inhibitory neurotransmitter in the brain; effects usually begin to appear within minutes or hours. In most patients, tolerance rapidly develops to the sedative effects of benzodiazepines, but not to the anxiolytic effects. Long-term use of a benzodiazepine may result in physical dependence, and abrupt discontinuation of the drug should be avoided due to high potential for withdrawal symptoms (including tremor, insomnia, and in rare cases, seizures). A gradual tapering of the dose of clonazepam (a decrease of 0.25 mg every 2 weeks), however, is well tolerated by patients with social anxiety disorder. Benzodiazepines are not recommended as monotherapy for patients who have major depression in addition to social anxiety disorder and should be avoided in patients with a history of substance use. Certain anticonvulsant drugs such as gabapentin and pregabalin are effective in social anxiety disorder and may be a possible treatment alternative to benzodiazepines. However there is concern regarding their off-label use due to the lack of strong scientific evidence for their efficacy and their proven side effects. Serotonin-norepinephrine reuptake inhibitors (SNRIs) such as venlafaxine have shown similar effectiveness to the SSRIs. In Japan, Milnacipran is used in the treatment of Taijin kyofusho, a Japanese variant of social anxiety disorder. The atypical antidepressants mirtazapine and bupropion have been studied for the treatment of social anxiety disorder, and rendered mixed results. Some people with a form of social phobia called performance phobia have been helped by beta-blockers, which are more commonly used to control high blood pressure. Taken in low doses, they control the physical manifestation of anxiety and can be taken before a public performance. A novel treatment approach has recently been developed as a result of translational research. It has been shown that a combination of acute dosing of d-cycloserine (DCS) with exposure therapy facilitates the effects of exposure therapy of social phobia. 
DCS is an old antibiotic medication used for treating tuberculosis and does not have any anxiolytic properties per se. However, it acts as an agonist at the glutamatergic N-methyl-D-aspartate (NMDA) receptor site, which is important for learning and memory. Kava-kava has also attracted attention as a possible treatment, although safety concerns exist. Epidemiology Social anxiety disorder is known to appear at an early age in most cases. Fifty percent of those who develop this disorder have developed it by the age of 11, and 80% have developed it by age 20. This early age of onset may lead to people with social anxiety disorder being particularly vulnerable to depressive illnesses, substance use, and other psychological conflicts. When prevalence estimates were based on the examination of psychiatric clinic samples, social anxiety disorder was thought to be a relatively rare disorder. The opposite was found to be true; social anxiety was common, but many were afraid to seek psychiatric help, leading to an underrecognition of the problem. The National Comorbidity Survey of over 8,000 American respondents in 1994 revealed 12-month and lifetime prevalence rates of 7.9 percent and 13.3 percent, respectively; this makes it the third most prevalent psychiatric disorder after depression and alcohol use disorder, and the most common of the anxiety disorders. According to US epidemiological data from the National Institute of Mental Health, social phobia affects 15 million adult Americans in any given year. Estimates vary between 2 percent and 7 percent of the US adult population. The mean age of onset of social phobia is 10 to 13 years. Onset after age 25 is rare and is typically preceded by panic disorder or major depression. Social anxiety disorder occurs more often in females than males. The prevalence of social phobia appears to be increasing among white, married, and well-educated individuals. As a group, those with generalized social phobia are less likely to graduate from high school and are more likely to rely on government financial assistance or have poverty-level salaries. Surveys carried out in 2002 show the youth of England, Scotland, and Wales have prevalence rates of 0.4 percent, 1.8 percent, and 0.6 percent, respectively. In Canada, the prevalence of self-reported social anxiety for Nova Scotians older than 14 years was 4.2 percent in June 2004, with women (4.6 percent) reporting more than men (3.8 percent). In Australia, social phobia was the 8th and 5th leading disease or illness for males and females, respectively, aged 15 to 24 as of 2003. Because of the difficulty in separating social phobia from poor social skills or shyness, some studies report a large range of prevalence; reported prevalence has also been higher in Sweden. Terminology It has also been referred to as anthropophobia, meaning "fear of humans", from the Greek ánthropos ("human") and phóbos ("fear"). Other names have included interpersonal relation phobia. A specific Japanese cultural form is known as taijin kyofusho. There is also another cultural form of social phobia, Aymat zibur, in the Ultra-Orthodox Jewish community, which is mostly rooted in a fear of embarrassment in the performance of religious duties.
Biology and health sciences
Mental disorders
Health
23476857
https://en.wikipedia.org/wiki/Anchovy
Anchovy
An anchovy is a small, common forage fish of the family Engraulidae. Most species are found in marine waters, but several will enter brackish water, and some in South America are restricted to fresh water. More than 140 species are placed in 16 genera; they are found in the Atlantic, Indian and Pacific Oceans, and in the Black Sea and the Mediterranean Sea. Anchovies are usually classified as oily fish. Taxonomy Anchovies are classified into two subfamilies and 16 genera: Superfamily Engrauloidea Genus †Clupeopsis Casier, 1946 Genus †Monosmilus Capobianco et al, 2020 Family Engraulidae Gill, 1861 Subfamily Engraulinae Gill, 1861 Genus Amazonsprattus Roberts, 1984 Genus Anchoa D. S. Jordan & Evermann, 1927 Genus Anchovia D. S. Jordan & Evermann, 1895 Genus Anchoviella Fowler, 1911 Genus Cetengraulis Günther, 1868 Genus Encrasicholina Fowler, 1938 Genus †Eoengraulis Marrama & Carnevale, 2015 Genus Engraulis Cuvier, 1816 Genus Jurengraulis Whitehead, 1988 Genus Lycengraulis Günther, 1868 Genus Pterengraulis Günther, 1868 Genus Stolephorus Lacépède, 1803 Subfamily Coiliinae Bleeker, 1870 Genus Coilia Gray 1830 Genus Lycothrissa Günther, 1868 Genus Papuengraulis Munro, 1964 Genus Setipinna Swainson 1839 Genus Thryssa Cuvier, 1829 Evolution The earliest known fossil records of anchovy relatives are of large predatory stem-anchovies (Clupeopsis and Monosmilus) from the early and middle Eocene of the Tethys Ocean, in Belgium and Pakistan. The large fangs of these early anchovy relatives has led to the nickname "saber-toothed anchovies" (not to be confused with the extant genus Lycengraulis). The earliest record of a true anchovy is of the stem-engrauline Eoengraulis from the Early Eocene of Monte Bolca, Italy. Characteristics Anchovies are small, green fish with blue reflections due to a silver-colored longitudinal stripe that runs from the base of the caudal (tail) fin. They range from in adult length, and their body shapes are variable with more slender fish in northern populations. The snout is blunt with tiny, sharp teeth in both jaws. The snout contains a unique rostral organ, believed to be electro-sensory in nature, although its exact function is unknown. The mouth is larger than that of herrings and silversides, two fish which anchovies closely resemble in other respects. The anchovy eats plankton and recently hatched fish. Distribution Anchovies are found in scattered areas throughout the world's oceans, but are concentrated in temperate waters, and are rare or absent in very cold or very warm seas. They are generally very accepting of a wide range of temperatures and salinity. Large schools can be found in shallow, brackish areas with muddy bottoms, as in estuaries and bays. The European anchovy is abundant in the Mediterranean, particularly in the Alboran Sea, Aegean Sea and the Black Sea. This species is regularly caught along the coasts of Crete, Greece, Sicily, Italy, France, Turkey, Northern Iran, Portugal and Spain. They are also found on the coast of northern Africa. The range of the species also extends along the Atlantic coast of Europe to the south of Norway. Spawning occurs between October and March, but not in water colder than . The anchovy appears to spawn at least from the shore, near the surface of the water. Ecology The anchovy is a significant food source for almost every predatory fish in its environment, including the California halibut, rock fish, yellowtail, shark, chinook, and coho salmon. 
It is also extremely important to marine mammals and birds; for example, breeding success of California brown pelicans and elegant terns is strongly connected to anchovy abundance. Feeding behavior Anchovies, like most clupeoids (herrings, sardines and anchovies), are filter-feeders that open their mouths as they swim. As water passes through the mouth and out the gills, food particles are sieved by gill rakers and transferred into the esophagus. Commercial species * Type species Fisheries Black Sea On average, the Turkish commercial fishing fleet catches around 300,000 tons per year, mainly in winter. The largest catch is in November and December. Peru The Peruvian anchovy fishery is one of the largest in the world, far exceeding catches of the other anchovy species. In 1972, it collapsed catastrophically due to the combined effects of overfishing and El Niño and did not fully recover for two decades. As food A traditional method of processing and preserving anchovies is to gut and salt them in brine, allow them to cure, and then pack them in oil or salt. This results in a characteristic strong flavor and the flesh turning a deep grey. Pickled in vinegar, as with Spanish boquerones, anchovies are milder and the flesh retains a white color. In Roman times, anchovies were the base for the fermented fish sauce garum. Garum had a sufficiently long shelf life for long-distance commerce, and was produced in industrial quantities. Anchovies were also eaten raw as an aphrodisiac. Today, they are used in small quantities to flavor many dishes. Because of the strong flavor, they are also an ingredient in several sauces and condiments, including Worcestershire sauce, caesar salad dressing, remoulade, Gentleman's Relish, many fish sauces, and in some versions of Café de Paris butter. For domestic use, anchovy fillets are packed in oil or salt in small tins or jars, sometimes rolled around capers. Anchovy paste is also available. Fishermen also use anchovies as bait for larger fish, such as tuna and sea bass. The strong taste people associate with anchovies is due to the curing process. Fresh anchovies, known in Italy as alici, have a much milder flavor. The anchovies from Barcola (in the local dialect: sardoni barcolani) are particularly popular. These white fleshy fish, which are only found at Sirocco in the Gulf of Trieste, achieve the highest prices. In Sweden and Finland, the name "anchovies" is related strongly to a traditional seasoning, hence the product "anchovies" is normally made of sprats and herring can be sold as "anchovy-spiced". Fish from the family Engraulidae are instead known as sardell in Sweden and sardelli in Finland, leading to confusion when translating recipes. In Southeast Asian countries like Indonesia, Singapore, Malaysia and the Philippines, they are deep-fried and eaten as a snack or a side dish. They are known as ikan bilis in Malay, ikan teri in Indonesian and dilis in Filipino.
Biology and health sciences
Clupeiformes
null
31019086
https://en.wikipedia.org/wiki/Thread%20%28yarn%29
Thread (yarn)
A thread is a long strand of material, often composed of several filaments or fibres, used for joining, creating or decorating textiles. Ancient Egyptians were known for creating thread using plant fibers, wool and hair. Today, thread can also be made of many different materials including but not limited to cotton, wool, flax, nylon, silk, polyester etc. There are also metal threads (sometimes used in decorative textiles), which can be made of fine wire. Thread is similar to yarn, cord, twine, or string, and there is some overlap between the way these terms are used. However, thread is most often used to mean materials fine and smooth enough for sewing, embroidery, weaving, or making lace or net. Yarn is often used to mean a thicker and softer material, suitable for knitting and crochet. Cords, twines or strings are usually stronger materials, suitable for tying and fastening. Materials Thread is made from a wide variety of materials. Where a thread is stronger than the material that it is being used to join, if seams are placed under strain the material may tear before the thread breaks. Garments are usually sewn with threads of lesser strength than the fabric so that if stressed the seam will break before the garment. Heavy goods that must withstand considerable stresses such as upholstery, car seating, tarpaulins, tents, and saddlery require very strong threads. Attempting repairs with light weight thread will usually result in rapid failure, though again, using a thread that is stronger than the material being sewn can end up causing rips in that material before the thread itself gives way. Polyester/polyester core spun thread is made by wrapping staple polyester around a continuous polyester filament during spinning and plying these yarns into a sewing thread. Measurement types and labeling Thread gauges Yarns are measured by the density of the yarn, which is described by various units of textile measurement relating to a standardized length per weight. These units do not directly correspond to thread diameter. Weight (Wt.) and Gunze count The most common weight system for thread specifies the length of the thread in kilometres required to weigh 1 kilogram. Therefore, a greater weight number (indicated in the American standard by the abbreviation wt) indicates a thinner, finer thread. The American standard of thread weight was adopted from the Gunze Count standard of Japan, which uses two numbers separated by a forward slash. The first number corresponds to the wt number of the thread and the second number indicates how many strands of fiber were used to compose the finished thread. It is common to wrap three strands of the same weight to make one thread, though this is not a formal requirement in the US standard (which is therefore less informative). Denier A denier weight specification states how many grams 9,000 metres of the thread weighs. Unlike the common thread weight system, the greater the denier number, the thicker the thread. The denier weight system, like the common weight system, also specifies the number of strands of the specified weight which were wrapped together to make the finished thread. Only embroidery threads have their weights given in denier. Tex Tex is the mass in grams of 1,000 metres of thread. If 1,000 m weighs 25 g, it is a tex 25. Larger tex numbers are heavier threads. Tex is used throughout North America and Europe. 
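The weight, denier and tex systems described above are all statements of linear density, so any one of them can be converted to the others arithmetically. The sketch below is a minimal illustration in Python, assuming only the definitions given in this section (weight = kilometres of thread per kilogram, denier = grams per 9,000 m, tex = grams per 1,000 m); the function names are hypothetical, not part of any standard or library. The printed values match the worked example given under "Conversion information" below.

```python
# Minimal sketch of the linear-density relationships described above.
# Assumes: weight (wt) = kilometres of thread per kilogram,
#          denier      = grams per 9,000 m,
#          tex         = grams per 1,000 m.
# Function names are illustrative only.

def wt_to_tex(wt: float) -> float:
    """Thread weight (km per kg) to tex (grams per 1,000 m)."""
    return 1000.0 / wt

def wt_to_denier(wt: float) -> float:
    """Thread weight (km per kg) to denier (grams per 9,000 m)."""
    return 9000.0 / wt

def denier_to_tex(denier: float) -> float:
    """Denier to tex: the same thread weighed over 1,000 m instead of 9,000 m."""
    return denier / 9.0

# A 40 wt thread: 40 km of it weigh one kilogram.
print(wt_to_tex(40))       # 25.0  -> tex 25
print(wt_to_denier(40))    # 225.0 -> 225 denier
print(denier_to_tex(225))  # 25.0
```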
Silk machine twist Manufacturers producing fine silk threads apply their own scales of thread measurement using "aughts" or zeroes at the finest end and FFF at the other, thus scaling 000, 00, 0, A, B, C, D, E, F, FF, FFF. The three finest threads are described as having "three aughts", "two aughts", and "one aught" respectively, and as having different "aught counts". Within a given manufacturer's spectrum, a higher "aught count" indicates a finer thread; this may be given as a single digit followed by a forward slash and a zero: for example, 3/0 indicates a three-aught thread, or a thread size "000". This number only has significance when compared to other threads produced by the same manufacturer: one manufacturer's 3/0 will always be finer than that same manufacturer's 2/0, but may not be comparable to the 3/0 of another manufacturer. Very roughly, however, size A is 900 yards per pound of thread, and each difference of 100 yards per pound corresponds to one letter size. The size is always given for the overall thread, not its individual silk plies. Commercial Some heavier-duty threads are given "commercial" size designations in set sizes of 30, 46, 69, 92, 138, 207, 277, 346, 415 and 554 only. Each of these numbers is merely the thread's denier size divided by 10. A commercial size 138 thread has a denier of 1380. Conversion information The systems correspond to one another; for example: 40 weight = 225 denier = tex 25 = [theoretical] commercial 27.8. A common tex number for general sewing thread is tex 25 or tex 30. A slightly heavier silk buttonhole thread suitable for bartacking, small leather items, and decorative seams might be tex 40; a strong, durable upholstery thread, tex 75; a heavy-duty topstitching thread for coats, bags, and shoes, tex 100; and a very strong topstitching thread suitable for luggage and tarpaulins, tex 265–290. A fine serging thread, by contrast, is only tex 13, and blindstitching and felling machines use an even finer tex 8. Labeling Threads of different composition and construction may be labeled in a variety of ways. Most threads are composed of two or more "plies" of fiber, and this information is often provided on thread packaging along with the finished thread's weight, according to a particular scale of measurement. The actual physical diameter of a thread is not recorded and, on its own, is not a useful guide. Spools may also have codes that indicate their fiber content, such as "P" for polyester. If a fiber content is given in the label code, it will be the first piece of information located there. Examples Sometimes a manufacturer does not provide any weight specification at all on its spools and instead provides only the fiber content and spool length, such as "100% Silk 250 m". This means only that the spool has 250 meters of pure silk; it does not indicate how many plies make up that thread, nor what the plies' or the combined thread's weight is. Cotton count The cotton count system is based on the number of 840-yard hanks that will result from a single pound of a particular finished thread. This is the non-metric equivalent of the Gunze count, and is given with two numbers separated by a slash: the first is the size of the thread and the second is the number of plies of that size used in the finished thread. The cotton count was developed for the cotton industry, but cotton counts are also frequently given for polyester and polyester/cotton blends. 
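The commercial designations above can likewise be computed from denier. The following Python sketch is a hedged illustration: the text states only the divide-by-ten rule and the fixed set of designations, so the helper name and the "nearest designation" rounding are assumptions made for the example.

```python
# Commercial thread sizes, per the description above: each designation is the
# thread's denier divided by 10, and only a fixed set of designations exists.
# The "nearest designation" rounding is an assumption made for illustration.

COMMERCIAL_SIZES = (30, 46, 69, 92, 138, 207, 277, 346, 415, 554)

def commercial_from_denier(denier: float) -> int:
    """Return the closest standard commercial designation for a given denier."""
    theoretical = denier / 10.0
    return min(COMMERCIAL_SIZES, key=lambda size: abs(size - theoretical))

print(commercial_from_denier(1380))  # 138 -- a commercial size 138 thread
print(commercial_from_denier(700))   # 69, the nearest designation to 70
```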
Hong Kong ticket A Hong Kong ticket number, when present, is a cotton count number without the slash and with the final number always indicating the number of plies if more than one. A Hong Kong ticket number of 1002 is made of two plies of size 100 thread; a number of 100 is made of a single ply of size 100 thread; a size 503 is made of three plies of size 50 thread. Single's equivalent A spool of thread may be described in terms of its "single's equivalent". This is the cotton count size of the thread divided by the number of plies which make it up. A spool of 30/3 thread has a single's equivalent of 10, because a single strand or ply of that thread has a cotton count size of 10. A 20/2 spool has the same single's equivalent as a 30/3, but a 30/2 spool has a single's equivalent of 15, which means it is composed of individually heavier plies than a 30/3. High-temperature sewing threads High temperature sewing threads provide durability and resistance to extreme temperatures. Some threads can be used for applications up to 800 °C (1472 °F). There are a variety of different sewing threads available which have different applications and benefits. Kevlar-coated stainless steel sewing threads have a high-temperature and flame-resistant steel core combined with Kevlar coating designed to facilitate easier machine sewing. The stainless steel core has a temperature resistance of up to 800 °C (1472 °F) and the Kevlar coating is heat-resistant up to 220 °C (428 °F). PTFE coated glass sewing threads have an excellent temperature resistance combined with a PTFE coating to provide easier machine sewing. The glass core has a temperature resistance of up to 550 °C (1022 °F) and the PTFE coating is heat-resistant up to 230 °C (446 °F). Nomex sewing threads are inherently flame-retardant and heat-resistant with a tough protective coating that resists abrasion during the sewing operation. It is temperature resistant up to 370 °C (698 °F). Bonded nylon sewing threads are tough, coated with abrasion resistance, rot proofing, and have good tensile strength for lower temperature applications. They are temperature-resistant up to 120 °C (248 °F). Bonded polyester sewing threads are tough, coated with abrasion resistance, rot proofing, and have exceptional tensile strength for lower temperatures but heavier-duty sewing operations. They are temperature-resistant up to 120 °C (248 °F).
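The cotton count, Hong Kong ticket, and single's-equivalent conventions described earlier in this section are simple enough to compute mechanically. The minimal Python sketch below assumes the label formats exactly as described above; the parsing helpers are hypothetical names, not from any library.

```python
# Hypothetical helpers for the labelling conventions described above.
# A cotton count label such as "30/3" means size-30 singles plied three
# together; the single's equivalent is the size divided by the ply count.

def parse_cotton_count(label: str) -> tuple[int, int]:
    """Split a label like '30/3' into (size, plies); a bare size implies one ply."""
    if "/" in label:
        size, plies = label.split("/")
        return int(size), int(plies)
    return int(label), 1

def singles_equivalent(label: str) -> float:
    """Cotton count size divided by the number of plies, as described above."""
    size, plies = parse_cotton_count(label)
    return size / plies

print(singles_equivalent("30/3"))  # 10.0
print(singles_equivalent("20/2"))  # 10.0 -- same single's equivalent as 30/3
print(singles_equivalent("30/2"))  # 15.0 -- individually heavier plies
```

A Hong Kong ticket number folds the same information into one number (for example, 503 corresponds to 50/3), but because a bare size such as 100 carries no ply digit, parsing it reliably depends on the manufacturer's convention, so it is left out of the sketch.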
Technology
Fabrics and fibers
null
22016602
https://en.wikipedia.org/wiki/Bar%20%28river%20morphology%29
Bar (river morphology)
A bar in a river is an elevated region of sediment (such as sand or gravel) that has been deposited by the flow. Types of bars include mid-channel bars (also called braid bars and common in braided rivers), point bars (common in meandering rivers), and mouth bars (common in river deltas). The locations of bars are determined by the geometry of the river and the flow through it. Bars reflect sediment supply conditions, and can show where the sediment supply rate is greater than the transport capacity. Mid-channel bars A mid-channel bar is also often referred to as a braid bar because such bars are often found in braided river channels. Braided river channels are broad and shallow and found in areas where sediment is easily eroded, such as at a glacial outwash or at a mountain front with high sediment loads. These types of river systems are associated with high slope, sediment supply, stream power, shear stress, and bed load transport rates. Braided rivers have complex and unpredictable channel patterns, and sediment size tends to vary among streams. It is these features that are responsible for the formation of braid bars. Braided streams are often overfed with massive amounts of sediment, which creates multiple stream channels within one dominant pair of flood bank plains. These channels are separated by mid-channel or braid bars. Anastomosing river channels also create mid-channel bars; however, these are typically vegetated, making them more permanent than the bars found in a braided river channel, which change rapidly because of the large amounts of non-cohesive sediment, the lack of vegetation, and the high stream powers found in braided river channels. Bars can also form mid-channel due to snags or logjams. For example, if a stable log is deposited mid-channel in a stream, this obstructs the flow and creates local flow convergence and divergence. This causes erosion on the upstream side of the obstruction and deposition on the downstream side. The deposition that occurs on the downstream side can create a central bar, and an arcuate bar can be formed as flow diverges upstream of the obstruction. Continuous deposition downstream can build up the central bar to form an island. Eventually the logjam can become partially buried, which protects the island from erosion, allowing vegetation to begin to grow and stabilize the area even further. Over time, the bar can eventually attach to one side of the channel bank and merge into the flood plain. Point bars A point bar is an area of deposition typically found in meandering rivers. Point bars form on the inside of meander bends. As the flow moves around the inside of the bend, the water slows down because the flow there is shallow, and the low shear stresses reduce the amount of material that can be carried. The excess material falls out of transport and, over time, forms a point bar, usually crescent shaped and located on the inside curve of the river bend. Point bars are typically found in the slowest-moving, shallowest parts of rivers and streams, often lying parallel to the shore and occupying the area farthest from the thalweg, which follows the outside curve of the bend in a meandering river. There, at the deepest and fastest part of the stream, is the cut bank, the area of a meandering river channel that continuously undergoes erosion. 
The faster the water in a river channel, the better it is able to pick up greater amounts of sediment, and larger pieces of sediment, which increases the river's bed load. Over a long enough period of time, the combination of deposition along point bars, and erosion along cut banks can lead to the formation of an oxbow lake. Mouth bars A mouth bar is an elevated region of sediment typically found at a river delta which is located at the mouth of a river where the river flows out to the ocean. Sediment is transported by the river and deposited, mid channel, at the mouth of the river. This occurs because, as the river widens at the mouth, the flow slows, and sediment settles out and is deposited. After initial formation of a river mouth bar, they have the tendency to prograde. This is caused by the pressure from the flow on the upstream face of the bar. This pressure creates erosion on that face of the bar, allowing the flow to transport this sediment over or around, and re-deposit it farther downstream, closer to the ocean. River mouth bars stagnate, or cease to prograde when the water depth above the flow is shallow enough to create a pressure on the upstream side of the bar strong enough to force the flow around the deposit rather than over the top of the bar. This divergent channel flow around either side of the sediment deposit continuously transports sediment, which over time is deposited on either side of this original mid channel deposit. As more and more sediment accumulates across the mouth of the river, it builds up to eventually create a sand bar that has the potential to extend the entire length of the river mouth and block the flow.
Physical sciences
Fluvial landforms
Earth science
43341147
https://en.wikipedia.org/wiki/Byerlee%27s%20law
Byerlee's law
In rheology, Byerlee's law, also known as Byerlee's friction law, concerns the shear stress (τ) required to slide one rock over another. The rocks have macroscopically flat surfaces, but the surfaces have small asperities that make them "rough." For a given experiment, at normal stresses (σn) below about 2000 bars (200 MPa) the shear stress increases approximately linearly with the normal stress (τ = 0.85 σn, where τ and σn are in units of MPa) and is highly dependent on rock type and the character (roughness) of the surfaces (see Mohr–Coulomb friction law). Byerlee's law states that with increased normal stress the required shear stress continues to increase, but the rate of increase decreases (τ = 50 + 0.6 σn, where τ and σn are in units of MPa) and becomes nearly independent of rock type. The law describes an important property of crustal rock, and can be used to determine when slip along a geological fault takes place. The law is named after the American geophysicist James Byerlee, who derived it experimentally in 1978.
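Because the law is stated as two linear expressions with a crossover near 200 MPa, it can be written as a simple piecewise function. The Python sketch below encodes the relations exactly as given above (the function name is illustrative only); note that the two expressions coincide at 200 MPa (0.85 × 200 = 50 + 0.6 × 200 = 170 MPa), so the function is continuous there.

```python
# Byerlee's law as stated above, with all stresses in MPa:
#   tau = 0.85 * sigma_n        for sigma_n below roughly 200 MPa
#   tau = 50 + 0.6 * sigma_n    for higher normal stresses
# The function name is illustrative, not from any library.

def byerlee_shear_stress(sigma_n_mpa: float) -> float:
    """Shear stress (MPa) required for frictional sliding at normal stress sigma_n."""
    if sigma_n_mpa < 200.0:
        return 0.85 * sigma_n_mpa
    return 50.0 + 0.6 * sigma_n_mpa

for sigma_n in (100.0, 200.0, 500.0):
    print(sigma_n, byerlee_shear_stress(sigma_n))
# 100.0 -> 85.0 MPa, 200.0 -> 170.0 MPa, 500.0 -> 350.0 MPa
```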
Physical sciences
Geophysics
Earth science
28227402
https://en.wikipedia.org/wiki/Bull
Bull
A bull is an intact (i.e., not castrated) adult male of the species Bos taurus (cattle). More muscular and aggressive than the females of the same species (i.e. cows proper), bulls have long been an important symbol in many religions, including for sacrifices. These animals play a significant role in beef ranching, dairy farming, and a variety of sporting and cultural activities, including bullfighting and bull riding. Due to their temperament, handling of bulls requires precautions. Nomenclature The female counterpart to a bull is a cow, while a male of the species that has been castrated is a steer, ox, or bullock, although in North America, this last term refers to a young bull. Use of these terms varies considerably with area and dialect. Colloquially, people unfamiliar with cattle may also refer to steers and heifers as "cows", and bovines of aggressive or long-horned breeds as "bulls" regardless of sex. A wild, young, unmarked bull is known as a micky in Australia. Improper or late castration on a bull results in him becoming a coarse steer, also known as a stag in Australia, Canada, and New Zealand. In some countries, an incompletely castrated male is known also as a rig or ridgling. The word "bull" also denotes the males of other bovines, including bison and water buffalo, as well as many other species of large animals, including elephants, rhinos, seals and walruses, hippos, camels, giraffes, elk, moose, whales, dolphins, and antelopes. Characteristics Bulls are much more muscular than cows, with thicker bones, larger feet, a very muscular neck, and a large, bony head with protective ridges over the eyes. These features assist bulls in fighting for domination over a herd, giving the winner superior access to cows for reproduction. The hair is generally shorter on the body, but the neck and head often have a "mane" of curlier, wooly hair. Bulls are usually about the same height as cows or a little taller, but because of the additional muscle and bone mass, they often weigh far more. Most of the time, a bull has a hump on his shoulders. In horned cattle, the horns of bulls tend to be thicker and somewhat shorter than those of cows, and in many breeds, they curve outwards in a flat arc rather than upwards in a lyre shape. It is not true, as is commonly believed, that bulls have horns and cows do not: the presence of horns depends on the breed, or in horned breeds on whether the horns have been disbudded. (It is true, however, that in many breeds of sheep only the males have horns.) Cattle that naturally do not have horns are referred to as polled, or muleys. Castrated male cattle are physically similar to females in build and horn shape, although if allowed to reach maturity, they may be considerably taller than either bulls or cows, with heavily muscled shoulders and necks. Reproductive anatomy Bulls become fertile around seven months of age. Their fertility is closely related to the size of their testicles, and one simple test of fertility is to measure the circumference of the scrotum; a young bull is likely to be fertile once this reaches ; that of a fully adult bull may be over . Bulls have a fibroelastic penis. Given the small amount of erectile tissue, little enlargement occurs after erection. The penis is quite rigid when not erect, and becomes even more rigid during erection. Protrusion is not affected much by erection, but more by relaxation of the retractor penis muscle and straightening of the sigmoid flexure. 
Bulls are occasionally affected by a condition known as "corkscrew penis". The penis of a mature bull is about in diameter, and in length. The bull's glans penis has a rounded and elongated shape. Misconceptions A common misconception widely repeated in depictions of bull behavior is that the color red angers bulls, inciting them to charge. In fact, like most mammals, cattle are red–green color blind. In bullfighting, the movement of the matador's cape, and not the color, provokes a reaction in the bull. Management Beef production Other than the few bulls needed for breeding, the vast majority of male cattle are castrated and slaughtered for meat before the age of three years, except where they are needed (castrated) as work oxen for haulage. Most of these beef animals are castrated as calves to reduce aggressive behavior and prevent unwanted mating, although some are reared as uncastrated bull beef. A bull is typically ready for slaughter one or two months sooner than a castrated male or a female, and produces proportionately more and leaner muscle. Frame score is a useful way of describing the skeletal size of bulls and other cattle. Frame scores can be used as an aid to predict mature cattle sizes and aid in the selection of beef bulls. They are calculated from hip height and age. In sales catalogues, this measurement is frequently reported in addition to weight and other performance data such as estimated breed value. Temperament and handling Adult bulls may weigh between . Most are capable of aggressive behavior and require careful handling to ensure the safety of humans and other animals. Those of dairy breeds may be more prone to aggression, while beef breeds are somewhat less aggressive, though beef breeds such as the Spanish Fighting Bull and related animals are also noted for aggressive tendencies, which are further encouraged by selective breeding. An estimated 42% of all livestock-related fatalities in Canada are a result of bull attacks, and fewer than one in 20 victims of a bull attack survives. Dairy breed bulls are particularly dangerous and unpredictable; the hazards of bull handling are a significant cause of injury and death for dairy farmers in some parts of the United States. The need to move a bull in and out of its pen to cover cows exposes the handler to serious jeopardy of life and limb. Being trampled, jammed against a wall, or gored by a bull was one of the most frequent causes of death in the dairy industry before 1940. With regard to such risks, one popular farming magazine has suggested, "Handle the bull with a staff and take no chances. The gentle bull, not the vicious one, most often kills or maims his keeper". Handling In many areas, placing rings in bulls' noses to help control them is traditional. The ring is usually made of copper, and is inserted through a small hole cut in the septum of the nose. It is used by attaching a lead rope either directly to it or running through it from a head collar, or for more difficult bulls, a bull pole (or bull staff) may be used. This is a rigid pole about long with a clip at one end; this attaches to the ring and allows the bull both to be led and to be held away from his handler. An aggressive bull may be kept confined in a bull pen, a robustly constructed shelter and pen, often with an arrangement to allow the bull to be fed without entering the pen. If an aggressive bull is allowed to graze outside, additional precautions may be needed to help avoid him harming people. 
One method is a bull mask, which either covers the bull's eyes completely, or restricts his vision to the ground immediately in front of him, so he cannot see his potential victim. Another method is to attach a length of chain to the bull's nose-ring, so that if he ducks his head to charge, he steps on the chain and is brought up short. Alternatively, the bull may be hobbled, or chained by his ring or by a collar to a solid object such as a ring fixed into the ground. In larger pastures, particularly where a bull is kept with other cattle, the animals may simply be fed from a pickup truck or tractor, the vehicle itself providing some protection for the humans involved. Generally, bulls kept with cows tend to be less aggressive than those kept alone. In herd situations, cows with young calves are often more dangerous to humans. In the off season, multiple bulls may be kept together in a "bachelor herd". Artificial insemination Many cattle ranches and stations run bulls with cows, and most dairy or beef farms traditionally had at least one, if not several, bulls for purposes of herd maintenance. However, the problems associated with handling a bull (particularly where cows must be removed from his presence to be worked) has prompted many dairy farmers to restrict themselves to artificial insemination (AI) of the cows. Semen is removed from the bulls and stored in canisters of liquid nitrogen, where it is kept until it can be sold, at which time it can be very profitable; in fact, many ranchers keep bulls specifically for this purpose. AI is also used to improve the quality of a herd, or to introduce an outcross of bloodlines. Some ranchers prefer to use AI to allow them to breed to several different bulls in a season or to breed their best stock to a higher-quality bull than they could afford to purchase outright. AI may also be used in conjunction with embryo transfer to allow cattle producers to add new breeding to their herds. Relationship with humans Aside from their reproductive duties, bulls are also used in certain sports, including bullfighting and bull-riding. They are also incorporated into festivals and folk events such as the Running of the Bulls and were seen in ancient sports such as bull-leaping. Though less common than castrated males, bulls are used as draught oxen in some areas. The once-popular sport of bull-baiting, in which a bull is attacked by specially bred and trained dogs (which came to be known as bulldogs), was banned in England by the Cruelty to Animals Act 1835. As with other animals, some bulls have been regarded as pets. The singer Charo, for instance, has owned a pet bull named Manolo. Significance in human culture Sacred bulls have held a place of significance in human culture since before the beginning of recorded history. They appear in cave paintings estimated to be up to 17,000 years old. The mythic Bull of Heavens plays a role in the ancient Sumerian Epic of Gilgamesh, dating as far back as 2150 BC. The importance of the bull is reflected in its appearance in the zodiac as Taurus, and its numerous appearances in mythology, where it is often associated with fertility.
Biology and health sciences
Bovidae
Animals
33601818
https://en.wikipedia.org/wiki/Thorax%20%28arthropod%20anatomy%29
Thorax (arthropod anatomy)
The thorax is the midsection (tagma) of the hexapod body (insects and entognathans). It holds the head, legs, wings and abdomen. It is also called mesosoma or cephalothorax in other arthropods. It is formed by the prothorax, mesothorax and metathorax and comprises the scutellum; the cervix, a membrane that separates the head from the thorax; and the pleuron, a lateral sclerite of the thorax. In dragonflies and damselflies, the mesothorax and metathorax are fused together to form the synthorax. In some insect pupae, like the mosquitoes', the head and thorax can be fused in a cephalothorax. Members of suborder Apocrita (wasps, ants and bees) in the order Hymenoptera have the first segment of the abdomen fused with the thorax, which is called the propodeum. The head is connected to the thorax by the occipital foramen, enabling a wide range of motion for the head. In most flying insects, the thorax allows for the use of asynchronous muscles.
Biology and health sciences
External anatomy and regions of the body
Biology
36224143
https://en.wikipedia.org/wiki/Planetary%20science
Planetary science
Planetary science (or more rarely, planetology) is the scientific study of planets (including Earth), celestial bodies (such as moons, asteroids, comets) and planetary systems (in particular those of the Solar System) and the processes of their formation. It studies objects ranging in size from micrometeoroids to gas giants, with the aim of determining their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, which originally grew from astronomy and Earth science, and now incorporates many disciplines, including planetary geology, cosmochemistry, atmospheric science, physics, oceanography, hydrology, theoretical planetary science, glaciology, and exoplanetology. Allied disciplines include space physics, when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology. There are interrelated observational and theoretical branches of planetary science. Observational research can involve combinations of space exploration, predominantly with robotic spacecraft missions using remote sensing, and comparative, experimental work in Earth-based laboratories. The theoretical component involves considerable computer simulation and mathematical modelling. Planetary scientists are generally located in the astronomy and physics or Earth sciences departments of universities or research centres, though there are several purely planetary science institutes worldwide. Generally, planetary scientists study one of the Earth sciences, astronomy, astrophysics, geophysics, or physics at the graduate level and concentrate their research in planetary science disciplines. There are several major conferences each year, and a wide range of peer reviewed journals. Some planetary scientists work at private research centres and often initiate partnership research tasks. History The history of planetary science may be said to have begun with the Ancient Greek philosopher Democritus, who is reported by Hippolytus as saying The ordered worlds are boundless and differ in size, and that in some there is neither sun nor moon, but that in others, both are greater than with us, and yet with others more in number. And that the intervals between the ordered worlds are unequal, here more and there less, and that some increase, others flourish and others decay, and here they come into being and there they are eclipsed. But that they are destroyed by colliding with one another. And that some ordered worlds are bare of animals and plants and all water. In more modern times, planetary science began in astronomy, from studies of the unresolved planets. In this sense, the original planetary astronomer would be Galileo, who discovered the four largest moons of Jupiter, the mountains on the Moon, and first observed the rings of Saturn, all objects of intense later study. Galileo's study of the lunar mountains in 1609 also began the study of extraterrestrial landscapes: his observation "that the Moon certainly does not possess a smooth and polished surface" suggested that it and other worlds might appear "just like the face of the Earth itself". Advances in telescope construction and instrumental resolution gradually allowed increased identification of the atmospheric as well as surface details of the planets. The Moon was initially the most heavily studied, due to its proximity to the Earth, as it always exhibited elaborate features on its surface, and the technological improvements gradually produced more detailed lunar geological knowledge. 
In this scientific process, the main instruments were astronomical optical telescopes (and later radio telescopes) and finally robotic exploratory spacecraft, such as space probes. The Solar System has now been relatively well-studied, and a good overall understanding of the formation and evolution of this planetary system exists. However, there are large numbers of unsolved questions, and the rate of new discoveries is very high, partly due to the large number of interplanetary spacecraft currently exploring the Solar System. Disciplines Planetary science studies observational and theoretical astronomy, geology (astrogeology), atmospheric science, and an emerging subspecialty in planetary oceans, called planetary oceanography. Planetary astronomy This is both an observational and a theoretical science. Observational researchers are predominantly concerned with the study of the small bodies of the Solar System: those that are observed by telescopes, both optical and radio, so that characteristics of these bodies such as shape, spin, surface materials and weathering are determined, and the history of their formation and evolution can be understood. Theoretical planetary astronomy is concerned with dynamics: the application of the principles of celestial mechanics to the Solar System and extrasolar planetary systems. Observing exoplanets and determining their physical properties, exoplanetology, is a major area of research besides Solar System studies. Every planet has its own branch. Planetary geology In planetary science, the term geology is used in its broadest sense, to mean the study of the surface and interior parts of planets and moons, from their core to their magnetosphere. The best-known research topics of planetary geology deal with the planetary bodies in the near vicinity of the Earth: the Moon, and the two neighboring planets: Venus and Mars. Of these, the Moon was studied first, using methods developed earlier on the Earth. Planetary geology focuses on celestial objects that exhibit a solid surface or have significant solid physical states as part of their structure. Planetary geology applies geology, geophysics and geochemistry to planetary bodies. Planetary geomorphology Geomorphology studies the features on planetary surfaces and reconstructs the history of their formation, inferring the physical processes that acted on the surface. Planetary geomorphology includes the study of several classes of surface features: Impact features (multi-ringed basins, craters) Volcanic and tectonic features (lava flows, fissures, rilles) Glacial features Aeolian features Space weathering – erosional effects generated by the harsh environment of space (continuous micrometeorite bombardment, high-energy particle rain, impact gardening). For example, the thin dust cover on the surface of the lunar regolith is a result of micrometeorite bombardment. Hydrological features: the liquid involved can range from water to hydrocarbon and ammonia, depending on the location within the Solar System. This category includes the study of paleohydrological features (paleochannels, paleolakes). The history of a planetary surface can be deciphered by mapping features from top to bottom according to their deposition sequence, as first determined on terrestrial strata by Nicolas Steno. For example, stratigraphic mapping prepared the Apollo astronauts for the field geology they would encounter on their lunar missions. 
Overlapping sequences were identified on images taken by the Lunar Orbiter program, and these were used to prepare a lunar stratigraphic column and geological map of the Moon. Cosmochemistry, geochemistry and petrology One of the main problems when generating hypotheses on the formation and evolution of objects in the Solar System is the lack of samples that can be analyzed in the laboratory, where a large suite of tools are available, and the full body of knowledge derived from terrestrial geology can be brought to bear. Direct samples from the Moon, asteroids and Mars are present on Earth, removed from their parent bodies, and delivered as meteorites. Some of these have suffered contamination from the oxidising effect of Earth's atmosphere and the infiltration of the biosphere, but those meteorites collected in the last few decades from Antarctica are almost entirely pristine. The different types of meteorites that originate from the asteroid belt cover almost all parts of the structure of differentiated bodies: meteorites even exist that come from the core-mantle boundary (pallasites). The combination of geochemistry and observational astronomy has also made it possible to trace the HED meteorites back to a specific asteroid in the main belt, 4 Vesta. The comparatively few known Martian meteorites have provided insight into the geochemical composition of the Martian crust, although the unavoidable lack of information about their points of origin on the diverse Martian surface has meant that they do not provide more detailed constraints on theories of the evolution of the Martian lithosphere. As of July 24, 2013, 65 samples of Martian meteorites have been discovered on Earth. Many were found in either Antarctica or the Sahara Desert. During the Apollo era, in the Apollo program, 384 kilograms of lunar samples were collected and transported to the Earth, and three Soviet Luna robots also delivered regolith samples from the Moon. These samples provide the most comprehensive record of the composition of any Solar System body besides the Earth. The numbers of lunar meteorites are growing quickly in the last few years – as of April 2008 there are 54 meteorites that have been officially classified as lunar. Eleven of these are from the US Antarctic meteorite collection, 6 are from the Japanese Antarctic meteorite collection and the other 37 are from hot desert localities in Africa, Australia, and the Middle East. The total mass of recognized lunar meteorites is close to 50 kg. Planetary geophysics and space physics Space probes made it possible to collect data in not only the visible light region but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics. Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins. If a planet's magnetic field is sufficiently strong, its interaction with the solar wind forms a magnetosphere around a planet. Early space probes discovered the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. 
The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues on in the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles, the Van Allen radiation belts. Planetary geophysics includes, but is not limited to, seismology and tectonophysics, geophysical fluid dynamics, mineral physics, geodynamics, mathematical geophysics, and geophysical surveying. Planetary geodesy Planetary geodesy (also known as planetary geodetics) deals with the measurement and representation of the planets of the Solar System, their gravitational fields and geodynamic phenomena (polar motion in three-dimensional, time-varying space). The science of geodesy has elements of both astrophysics and planetary sciences. The shape of the Earth is to a large extent the result of its rotation, which causes its equatorial bulge, and of the competition of geologic processes such as the collision of plates and vulcanism, resisted by the Earth's gravity field. These principles can be applied to the solid surface of the Earth (orogeny): few mountains are higher, and few deep-sea trenches deeper, than a certain limit, because quite simply a sufficiently tall mountain would develop so much pressure at its base, due to gravity, that the rock there would become plastic, and the mountain would slump back to a lower height in a geologically insignificant time. Some or all of these geologic principles can be applied to other planets besides Earth. For instance, on Mars, whose surface gravity is much lower, the largest volcano, Olympus Mons, rises to a height at its peak that could not be maintained on Earth. The Earth geoid is essentially the figure of the Earth abstracted from its topographic features. Therefore, the Mars geoid (areoid) is essentially the figure of Mars abstracted from its topographic features. Surveying and mapping are two important fields of application of geodesy. Planetary atmospheric science An atmosphere is an important transitional zone between the solid planetary surface and the higher rarefied ionizing and radiation belts. Not all planets have atmospheres: their existence depends on the mass of the planet and the planet's distance from the Sun – too distant, and frozen atmospheres occur. Besides the four giant planets, three of the four terrestrial planets (Earth, Venus, and Mars) have significant atmospheres. Two moons have significant atmospheres: Saturn's moon Titan and Neptune's moon Triton. A tenuous atmosphere exists around Mercury. The effects of the rotation rate of a planet about its axis can be seen in atmospheric streams and currents. Seen from space, these features show as bands and eddies in the cloud system and are particularly visible on Jupiter and Saturn. Planetary oceanography Exoplanetology Exoplanetology studies exoplanets, the planets existing outside our Solar System. Until recently, the means of studying exoplanets have been extremely limited, but with the current rate of innovation in research technology, exoplanetology has become a rapidly developing subfield of astronomy. Comparative planetary science Planetary science frequently makes use of the method of comparison to give a greater understanding of the object of study. 
This can involve comparing the dense atmospheres of Earth and Saturn's moon Titan, the evolution of outer Solar System objects at different distances from the Sun, or the geomorphology of the surfaces of the terrestrial planets, to give only a few examples. The main comparison that can be made is to features on the Earth, as it is much more accessible and allows a much greater range of measurements to be made. Earth analog studies are particularly common in planetary geology, geomorphology, and also in atmospheric science. The use of terrestrial analogs was first described by Gilbert (1886). In fiction In Frank Herbert's 1965 science fiction novel Dune, the major secondary character Liet-Kynes serves as the "Imperial Planetologist" for the fictional planet Arrakis, a position he inherited from his father Pardot Kynes. In this role, a planetologist is described as having skills of an ecologist, geologist, meteorologist, and biologist, as well as basic understandings of human sociology. The planetologists apply this expertise to the study of entire planets.In the Dune series, planetologists are employed to understand planetary resources and to plan terraforming or other planetary-scale engineering projects. This fictional position in Dune has had an impact on the discourse surrounding planetary science itself and is referred to by one author as a "touchstone" within the related disciplines. In one example, a publication by Sybil P. Seitzinger in the journal Nature opens with a brief introduction on the fictional role in Dune, and suggests we should consider appointing individuals with similar skills to Liet-Kynes to help with managing human activity on Earth. Professional activity Journals Annual Review of Earth and Planetary Sciences Earth and Planetary Science Letters Earth, Moon, and Planets Geochimica et Cosmochimica Acta Icarus Journal of Geophysical Research – Planets Meteoritics and Planetary Science Planetary and Space Science The Planetary Science Journal Professional bodies This non-exhaustive list includes those institutions and universities with major groups of people working in planetary science. Alphabetical order is used. Division for Planetary Sciences (DPS) of the American Astronomical Society American Geophysical Union Meteoritical Society Europlanet Government space agencies Canadian Space Agency (CSA) China National Space Administration (CNSA, People's Republic of China). French National Centre of Space Research Deutsches Zentrum für Luft- und Raumfahrt e.V., (German: abbreviated DLR), the German Aerospace Center European Space Agency (ESA) Indian Space Research Organisation (ISRO) Israel Space Agency (ISA) Italian Space Agency Japan Aerospace Exploration Agency (JAXA) NASA (National Aeronautics and Space Administration, United States of America) JPL GSFC Ames National Space Organization (Taiwan). Russian Federal Space Agency UK Space Agency (UKSA). Major conferences Lunar and Planetary Science Conference (LPSC), organized by the Lunar and Planetary Institute in Houston. Held annually since 1970, occurs in March. Division for Planetary Sciences (DPS) meeting held annually since 1970 at a different location each year, predominantly within the mainland US. Occurs around October. American Geophysical Union (AGU) annual Fall meeting in December in San Francisco. American Geophysical Union (AGU) Joint Assembly (co-sponsored with other societies) in April–May, in various locations around the world. 
Meteoritical Society annual meeting, held during the Northern Hemisphere summer, generally alternating between North America and Europe. European Planetary Science Congress (EPSC), held annually around September at a location within Europe. Smaller workshops and conferences on particular fields occur worldwide throughout the year.
Physical sciences
Planetary science
Astronomy
26377919
https://en.wikipedia.org/wiki/Sequoioideae
Sequoioideae
Sequoioideae, commonly referred to as redwoods, is a subfamily of coniferous trees within the family Cupressaceae that ranges across the Northern Hemisphere. It includes the largest and tallest trees in the world. The trees in the subfamily are amongst the most notable trees in the world and are common ornamental trees. Description The three redwood subfamily genera are Sequoia from coastal California and Oregon, Sequoiadendron from California's Sierra Nevada, and Metasequoia in China. The redwood species include the largest and tallest trees in the world. These trees can live for thousands of years. Threats include logging, fire suppression, illegal marijuana cultivation, and burl poaching. Only two of the genera, Sequoia and Sequoiadendron, are known for massive trees. Trees of Metasequoia, from the single living species Metasequoia glyptostroboides, are deciduous, grow much smaller (although they are still large compared to most other trees), and can live in colder climates. Taxonomy and evolution Multiple studies of both morphological and molecular characters have strongly supported the assertion that the Sequoioideae are monophyletic. Most modern phylogenies place Sequoia as sister to Sequoiadendron, with Metasequoia as the outgroup. However, Yang et al. went on to investigate the origin of a peculiar genetic component in Sequoioideae, the polyploidy of Sequoia, and generated a notable exception that calls the specifics of this relative consensus into question. Cladistic tree A 2006 paper based on non-molecular evidence suggested a relationship among extant species matching this consensus, with Sequoia sister to Sequoiadendron and Metasequoia as the outgroup. A 2021 study using molecular evidence found the same relationships among Sequoioideae species, but found Sequoioideae to be the sister group to the Athrotaxidoideae (a subfamily presently known only from Tasmania) rather than to Taxodioideae. Sequoioideae and Athrotaxidoideae are thought to have diverged from each other during the Jurassic. Possible reticulate evolution in Sequoioideae Reticulate evolution refers to the origination of a taxon through the merging of ancestor lineages. Polyploidy has come to be understood as quite common in plants, with estimates ranging from 47% to 100% of flowering plants and extant ferns having derived from ancient polyploidy. Within the gymnosperms, however, it is quite rare. Sequoia sempervirens is hexaploid (2n = 6x = 66). To investigate the origins of this polyploidy, Yang et al. used two single-copy nuclear genes, LFY and NLY, to generate phylogenetic trees. Other researchers have had success with these genes in similar studies on different taxa. Several hypotheses have been proposed to explain the origin of Sequoia's polyploidy: allopolyploidy by hybridization between Metasequoia and some probably extinct taxodiaceous plant; Metasequoia and Sequoiadendron, or ancestors of the two genera, as the parental species of Sequoia; and autohexaploidy, autoallohexaploidy, or segmental allohexaploidy. Yang et al. found that Sequoia was clustered with Metasequoia in the tree generated using the LFY gene, but with Sequoiadendron in the tree generated with the NLY gene. Further analysis strongly supported the hypothesis that Sequoia was the result of a hybridization event involving Metasequoia and Sequoiadendron. Thus, Yang et al. hypothesize that the inconsistent relationships among Metasequoia, Sequoia, and Sequoiadendron could be a sign of reticulate evolution by hybrid speciation (in which two species hybridize and give rise to a third) among the three genera. 
However, the long evolutionary history of the three genera (the earliest fossil remains being from the Jurassic) makes resolving the specifics of when and how Sequoia originated once and for all a difficult matter, especially since it in part depends on an incomplete fossil record. Extant species Metasequoia glyptostroboides - Dawn redwood; south-central China. Sequoiadendron giganteum - Giant sequoia, Giant redwood; western slopes of the Sierra Nevada; California. Sequoia sempervirens - Coast redwood, California redwood; Northern California coast and extreme southern Oregon. Paleontology Sequoioideae is an ancient taxon, with the oldest described Sequoioideae species, Sequoia jeholensis, recovered from Jurassic deposits. The fossil wood Medulloprotaxodioxylon, reported from the late Triassic of China, resembles Sequoiadendron giganteum and may represent an ancestral form of the Sequoioideae; this supports the idea of a Late Triassic Norian origin for this subfamily. The fossil record shows a massive expansion of range in the Cretaceous and dominance of the Arcto-Tertiary Geoflora, especially in northern latitudes. Genera of Sequoioideae were found in the Arctic Circle, Europe, North America, and throughout Asia and Japan. A general cooling trend beginning in the late Eocene and Oligocene reduced the northern ranges of the Sequoioideae, as did subsequent ice ages. Evolutionary adaptations to ancient environments persist in all three species despite changing climate, distribution, and associated flora, especially the specific demands of their reproduction ecology that ultimately forced each of the species into refugial ranges where they could survive. The extinct genus Austrosequoia, known from the Late Cretaceous-Oligocene of the Southern Hemisphere, including Australia and New Zealand, has been suggested as a member of the subfamily. Conservation In 2024, it was estimated that there were about 500,000 redwoods in Britain, mostly brought as seeds and seedlings from the US in the Victorian era. The entire subfamily is endangered. The IUCN Red List assesses Sequoia sempervirens as Endangered (A2acd), Sequoiadendron giganteum as Endangered (B2ab) and Metasequoia glyptostroboides as Endangered (B1ab). In 2024 it was reported that over a period of two years about one-fifth of all giant sequoias were destroyed in extreme wildfires in California.
Biology and health sciences
Gymnosperms
null
22026933
https://en.wikipedia.org/wiki/Taxi
Taxi
A taxi, also known as a taxicab or simply a cab, is a type of vehicle for hire with a driver, used by a single passenger or small group of passengers, often for a non-shared ride. A taxicab conveys passengers between locations of their choice. This differs from public transport where the pick-up and drop-off locations are decided by the service provider, not by the customers, although demand responsive transport and share taxis provide a hybrid bus/taxi mode. There are four distinct forms of taxicab, which can be identified by slightly differing terms in different countries: Hackney carriages, also known as public hire, hailed or street taxis, licensed for hailing throughout communities; Private hire vehicles, also known as minicabs or private hire taxis, licensed for pre-booking only; Taxibuses, which come in many variations throughout developing countries as jitneys or jeepneys, operating on pre-set routes typified by multiple stops and multiple independent passengers; and Limousines, specialized vehicles licensed for operation by pre-booking. Although types of vehicles and methods of regulation, hiring, dispatching, and negotiating payment differ significantly from country to country, many common characteristics exist. Disputes over whether ridesharing companies should be regulated as taxicabs resulted in some jurisdictions creating new regulations for these services. Etymology The word taxicab is a compound word formed as a contraction of taximeter and cabriolet. Taximeter is an adaptation of the German word Taxameter, which is itself a variant of the earlier German word Taxanom. Taxe /ˈtaksə/ is a German word meaning "tax", "charge", or "scale of charges". The Medieval Latin word taxa also means tax or charge. Taxi may ultimately be attributed to Ancient Greek τάξις from τάσσω meaning "to place in a certain order," as in commanding an orderly battle line, or in ordaining the payment of taxes, to the extent that ταξίδι (taxidi), meaning "journey" in Modern Greek, initially denoted an orderly military march or campaign. Meter is from the Greek μέτρον (metron) meaning "measure." A cabriolet is a type of horse-drawn carriage; the word comes from French cabrioler ("to leap, caper"), from Italian capriolare ("to somersault"), from Latin capreolus ("roebuck", "wild goat"). In most European languages that word has taken on the meaning of a convertible car. The taxicabs of Paris were equipped with the first meters beginning on 9 March 1898. They were originally called taxamètres, then renamed taximètres on 17 October 1904. Harry Nathaniel Allen of The New York Taxicab Company, who imported the first 600 gas-powered New York City taxicabs from France in 1907, borrowed the word "taxicab" from London, where the word was in use by early 1907. A popular but erroneous account holds that the vehicles were named after Franz von Taxis from the house of Thurn and Taxis, a 16th-century postmaster for Philip of Burgundy, and his nephew Johann Baptiste von Taxis, General Postmaster for the Holy Roman Empire. Both instituted fast and reliable postal services (conveying letters, with some post routes transporting people) across Europe. Their surname derives from their 13th-century ancestor Omodeo Tasso. History Hackney carriages Horse-drawn for-hire hackney carriage services began operating in both Paris and London in the early 17th century. The first documented public hackney coach service for hire was in London in 1605.
In 1625 carriages were made available for hire from innkeepers in London and the first taxi rank appeared on the Strand outside the Maypole Inn in 1636. In 1635 the Hackney Carriage Act was passed by Parliament to legalise horse-drawn carriages for hire. Coaches were hired out by innkeepers to merchants and visitors. A further "Ordinance for the Regulation of Hackney-Coachmen in London and the places adjacent" was approved by Parliament in 1654 and the first hackney-carriage licences were issued in 1662. A similar service was started by Nicolas Sauvage in Paris in 1637. His vehicles were known as fiacres, as the main vehicle depot apparently was opposite a shrine to Saint Fiacre. (The term fiacre is still used in French to describe a horse-drawn vehicle for hire, while the German term Fiaker is used, especially in Austria, to refer to the same thing.) Hansoms The hansom cab was designed and patented in 1834 by Joseph Hansom, an architect from York, as a substantial improvement on the old hackney carriages. These two-wheel vehicles were fast, light enough to be pulled by a single horse (making the journey cheaper than travelling in a larger four-wheel coach), agile enough to steer around horse-drawn vehicles in the notorious traffic jams of nineteenth-century London, and had a low centre of gravity for safe cornering. Hansom's original design was modified by John Chapman and several others to improve its practicability, but retained Hansom's name. These soon replaced the hackney carriage as a vehicle for hire. They quickly spread to other cities in the United Kingdom, as well as continental European cities, particularly Paris, Berlin, and St Petersburg. The cab was introduced to other British Empire cities and to the United States during the late 19th century, being most commonly used in New York City. The first cab service in Toronto, "The City", was established in 1837 by Thornton Blackburn, an ex-slave whose escape after being captured in Detroit was the impetus for the Blackburn Riots. Modern taxicabs The modern taximeter was invented and perfected by a trio of German inventors: Wilhelm Friedrich Nedler, Ferdinand Dencker and Friedrich Wilhelm Gustav Bruhn. The Daimler Victoria, the world's first motorized taximeter cab, was built by Gottlieb Daimler in 1897 and began operating in Stuttgart in June 1897. Gasoline-powered taxicabs began operating in Paris in 1899, in London in 1903, and in New York in 1907. The New York taxicabs were initially imported from France by Harry N. Allen, owner of the Allen-Kingston Motor Car Company. Their manufacturing took place at Bristol Engineering in Bristol, Connecticut, where the first domestically produced taxicabs were built in 1908, designed by Fred E. Moskovics, who had worked at Daimler in the late 1890s. Albert F. Rockwell was the owner of Bristol and his wife suggested he paint his taxicabs yellow to maximise his vehicles' visibility. Moskovics was one of the organizers of the first Yellow Taxicab Company in New York. Electric battery-powered taxis became available at the end of the 19th century. In London, Walter Bersey designed a fleet of such cabs and introduced them to the streets of London on 19 August 1897. They were soon nicknamed 'Hummingbirds' due to the idiosyncratic humming noise they made. In the same year in New York City, the Samuel's Electric Carriage and Wagon Company began running 12 electric hansom cabs.
The company ran until 1898, with up to 62 cabs operating, until it was reformed by its financiers to form the Electric Vehicle Company. Taxicabs proliferated around the world in the early 20th century. The first major innovation after the invention of the taximeter occurred in the late 1940s, when two-way radios first appeared in taxicabs. Radios enabled taxicabs and dispatch offices to communicate and serve customers more efficiently than previous methods, such as using callboxes. The next major innovation occurred in the 1980s when computer assisted dispatching was first introduced. As military and emergency transport Paris taxis played a memorable part in the French victory at the First Battle of the Marne in the First World War. On 7 September 1914, the Military Governor of Paris, Joseph Gallieni, gathered about six hundred taxicabs at Les Invalides in central Paris to carry soldiers to the front at Nanteuil-le-Haudouin, fifty kilometers away. Within twenty-four hours about six thousand soldiers and officers were moved to the front. Each taxi carried five soldiers, four in the back and one next to the driver. Only the back lights of the taxis were lit; the drivers were instructed to follow the lights of the taxi ahead. The Germans were surprised and were pushed back by the French and British armies. Most of the taxis were demobilized on 8 September but some remained longer to carry the wounded and refugees. The taxis, following city regulations, dutifully ran their meters. The French treasury reimbursed the total fare of 70,012 francs. The military impact of the soldiers moved by taxi was small in the huge scale of the Battle of the Marne, but the effect on French morale was enormous; it became the symbol of the solidarity between the French army and citizens. It was also the first recorded large-scale use of motorized infantry in battle. The Birmingham pub bombings on 21 November 1974, which killed 21 people and injured 182, presented emergency services with unprecedented peacetime demands. According to eyewitness accounts, the fire officer in charge, knowing the 40 ambulances he requested were unlikely to be available, asked the Taxi Owners Association to transport the injured to the nearby Birmingham Accident Hospital and Birmingham General Hospital. Vehicles Taxi services are typically provided by automobiles, but in some countries various human-powered vehicles (such as the rickshaw or pedicab) and animal-powered vehicles (such as the hansom cab) or even boats (such as water taxis or gondolas) are also used or have been used historically. In Western Europe, Bissau, and to an extent, Australia, it is not uncommon for expensive cars such as Mercedes-Benz to be the taxicab of choice. Often this decision is based upon the perceived reliability of, and warranty offered with, these vehicles. These taxi-service vehicles are almost always equipped with four-cylinder turbodiesel engines and relatively low levels of equipment, and are not considered luxury cars. This has changed, though, in countries such as Denmark, where tax regulation makes it profitable to sell the vehicles after a few years of service, which requires the cars to be well equipped and kept in good condition. Cities like London and Tokyo have implemented specific regulations like London's Conditions of Fitness that dictate size, fuel efficiency, emissions, and accessibility standards far stricter than those for private vehicles.
Much like the New York Checker cabs of the 1960s–80s, the unique attributes of a city often make the vehicles built to fit its requirements ubiquitous in its livery fleets, and they often become an iconic image of the city itself. Although New York City has stumbled in its efforts to mandate a vehicle that is both hybrid and wheelchair-accessible, London's and Tokyo's efforts have yielded unique vehicles such as the LEVC TX and Toyota JPN Taxi that meet and exceed modern emissions and accessibility requirements, and these may extend to other cities as older models get rotated out of the bigger cities and into smaller markets. Modifications of existing minivans such as the Mercedes Vito London Taxi and the Nissan NV200 have been introduced as stopgap measures to fill the need for alternative products; however, their acceptance by drivers remains to be seen. Wheelchair-accessible taxicabs In recent years, some companies have been adding specially modified vehicles capable of transporting wheelchair-using passengers to their fleets. Such taxicabs are variously called accessible taxis, wheelchair- or wheelchair-accessible taxicabs, modified taxicabs, or "maxicabs". Wheelchair taxicabs are most often specially modified vans or minivans. Wheelchair-using passengers are loaded, with the help of the driver, via a lift or, more commonly, a ramp, at the rear of the vehicle. This feature is, however, a subject of concern among licensing authorities, who feel that the wheelchair passenger could not easily exit the vehicle in the event of accident damage to the rear door. The latest generation of accessible taxis features side loading with emergency egress possible from either of the two side doors as well as the rear. The wheelchair is secured using various systems, commonly including some type of belt and clip combination, or wheel locks. Some wheelchair taxicabs are capable of transporting only one wheelchair-using passenger at a time, and can usually accommodate four to six additional non-disabled passengers. Wheelchair taxicabs are part of the regular fleet in most cases, and so are not reserved exclusively for the use of wheelchair users. They are often used by non-disabled people who need to transport luggage, small items of furniture, animals, and other items. Because of this, and since only a small percentage of the average fleet is modified, wheelchair users must often wait for significantly longer periods when calling for a cab, and flagging a modified taxicab on the street is much more difficult. London's taxis have been fully accessible since January 2000, with all taxis fitted with a pull-out or portable ramp. Other Taxicabs in less developed places can be a completely different experience, such as the antique French cars typically found in Cairo. However, starting in March 2006, newer modern taxicabs entered service, operated by various private companies. Taxicabs differ in other ways as well: London's black cabs have a large compartment beside the driver for storing bags, while many fleets of regular taxis also include wheelchair accessible taxicabs among their numbers (see above). Although taxicabs have traditionally been sedans, minivans, hatchbacks and even SUV taxicabs are becoming increasingly common. In many cities, limousines operate as well, usually in competition with taxicabs and at higher fares. Recently, with growing concern for the environment, there have been solar-powered taxicabs.
On 20 April 2008, a "solar taxi tour" was launched that aimed to tour 15 countries in 18 months in a solar taxi that can reach speeds of 90 km/h with zero emission. The aim of the tour was to spread knowledge about environmental protection. Livery Most taxi companies have some sort of livery on the vehicle, depending on the type of taxi (taxi, cab, private hire, chauffeur), country, region and operator. Hiring Most places allow a taxi to be "hailed" or "flagged" on the side of the street as it is approaching. Another option is a taxi stand (sometimes also called a "cab stand," "hack stand," "taxi rank," or "cab rank"). Taxi stands are usually located at airports, railway stations, major retail areas (malls), hotels and other places where a large number of passengers are likely to be found. In some places—Japan, for example—taxi stands are arranged according to the size of the taxis, so that large- and small-capacity cabs line up separately. The taxi at the front of the line is due (barring unusual circumstances) for the next fare. Passengers also commonly call a central dispatch office for taxis. In some jurisdictions, private hire vehicles can only be hired from the dispatch office, and must be assigned each fare by the office by radio or phone. Picking up passengers off the street in these areas can lead to suspension or revocation of the driver's taxi license, or even prosecution. Other areas may have a mix of the two systems, where drivers may respond to radio calls and also pick up street fares. Passengers may also hire taxicabs via mobile apps. While such bookings do not directly involve the call center, the taxis are still monitored by the dispatcher through GPS tracking. Many taxicab companies, including Gett, Easy Taxi, and GrabTaxi provide mobile apps. Dispatching The activity of taxi fleets is usually monitored and controlled by a central office, which provides dispatching, accounting, and human resources services to one or more taxi companies. Taxi owners and drivers usually communicate with the dispatch office through either a 2-way radio or a computer terminal (called a mobile data terminal). Before the innovation of radio dispatch in the 1950s, taxi drivers would use a callbox—a special telephone at a taxi stand—to contact the dispatch office. When a customer calls for a taxi, a trip is dispatched by either radio or computer, via an in-vehicle mobile data terminal, to the most suitable cab. The most suitable cab may either be the one closest to the pick-up address (often determined by GPS coordinates nowadays) or the one that was the first to book into the "zone" surrounding the pickup address. Cabs are sometimes dispatched from their taxi stands; a call to "Top of the 2" means that the first cab in line at stand #2 is supposed to pick someone up. In offices using radio dispatch, taxi locations are often tracked using magnetic pegs on a "board"—a metal sheet with an engraved map of taxi zones. In computerized dispatch, the status of taxis is tracked by the computer system. Taxi frequencies are generally licensed in duplex pairs. One frequency is used for the dispatcher to talk to the cabs, and a second frequency is used for the cabs to talk back. This means that the drivers generally cannot talk to each other. Some cabs have a CB radio in addition to the company radio so they can speak to each other. In the United States, there is a Taxicab Radio Service with pairs assigned for this purpose. A taxi company can also be licensed in the Business Radio Service.
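The "most suitable cab" selection described under Dispatching above can be sketched in a few lines. The fleet data, coordinates, and function names below are hypothetical and purely illustrative; they are not taken from any real dispatch system.

```python
import math

# Hypothetical fleet snapshot: (cab_id, lat, lon, zone, time the cab booked into its zone)
cabs = [
    ("cab_7",  51.512, -0.091, "zone_3", 1012),
    ("cab_12", 51.530, -0.122, "zone_5",  998),
    ("cab_31", 51.508, -0.128, "zone_5", 1005),
]

def nearest_cab(pickup_lat, pickup_lon, fleet):
    """'Closest cab' rule: pick the cab with the smallest straight-line distance to the pickup."""
    return min(fleet, key=lambda c: math.hypot(c[1] - pickup_lat, c[2] - pickup_lon))

def first_in_zone(pickup_zone, fleet):
    """'Zone' rule: pick the cab that booked into the pickup zone earliest."""
    in_zone = [c for c in fleet if c[3] == pickup_zone]
    return min(in_zone, key=lambda c: c[4]) if in_zone else None

print(nearest_cab(51.509, -0.126, cabs)[0])   # closest by position -> cab_31
print(first_in_zone("zone_5", cabs)[0])       # first booked into the zone -> cab_12
```

The two rules can pick different cabs for the same job, which is why dispatch offices typically commit to one policy (or a weighted combination) rather than mixing them ad hoc.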
Business frequencies in the UHF range are also licensed in pairs to allow for repeaters, though taxi companies usually use the pair for duplex communications. Taxi dispatch is evolving in connection with the telecom sector with the advent of smart-phones. In some countries such as Australia, Canada, Germany, the UK and USA, smartphone applications are emerging that connect taxi drivers directly with passengers for the purpose of dispatching taxi jobs, setting off new battles over the marketing of such apps to the potential mass of taxi users. Taxi fares are set by the state and city where the taxis are permitted to operate. The fare includes the 'drop', a set amount that is tallied for getting into the taxi, plus the 'per kilometer' rate set by the city. Taxi meters track time as well as distance in an average taxi fare. Drivers and companies In the United States, a nut is industry slang for the amount of money a driver has to pay upfront to lease a taxi for a specific period of time. Once that amount is collected in fares, the driver begins to make a profit. A driver "on the nut" is trying to earn back the initial cost. This varies from city to city, though: in Las Vegas, Nevada, all taxicabs are owned and operated by the companies and all drivers are employees (hence there is no initial cost, and drivers earn a percentage of each fare). There, "on the nut" simply means being next in line at a taxi stand to receive a passenger. Additionally, some cab companies are owned cooperatively, with profits shared through democratic governance. Regulatory compliance and training Australia Different states have different regulations for taxi driver registration and compliance: New South Wales: There is an annual taxi licence determination which sets the maximum number of taxis allowed in specified areas. To be eligible you must have a taxi licence, which is available from ABLIS. The industry body is the NSW Taxi Council and it provides a pathway to becoming a taxi driver. Northern Territory: Apply for a Commercial Passenger Vehicle licence (H endorsement) and ID card. Queensland: Apply for a driver authorisation. South Australia: Apply for South Australian driver accreditation with the SA government then complete training with a registered training provider. Tasmania. Victoria: Drivers apply to the Taxi Services Commission to get a driver accreditation. Western Australia. New Zealand New Zealand taxi drivers fall under the definition of a Small Passenger Service Vehicle driver. They must have a P (passenger) endorsement on their driver licence. Until 1 October 2017, all drivers wanting to obtain a P endorsement had to complete a P endorsement course, but that requirement was removed as a result of lobbying by Uber, which had been flouting the law. Drivers must comply with work-time rules and maintain a logbook, with the onus on training falling on companies and drivers since the P endorsement course was abandoned. The New Zealand Taxi Federation is the national advocacy group for taxi companies within New Zealand. Navigation Most experienced taxi drivers who have been working in the same city or region for a while would be expected to know the most important streets and places where their customers request to go. However, to aid the process of manual navigation and the taxi driver's memory (and the customer's as well at times), a cab driver is usually equipped with a detailed roadmap of the area in which they work. There is also an increasing use of GPS-driven navigation systems in wealthier countries.
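As a rough illustration of the metered fare structure described under Dispatching above (a fixed "drop" charge plus distance and time components), the sketch below uses invented tariff figures; real rates are set by each city or state and usually include further rules (night surcharges, minimum fares, airport fees) that are omitted here.

```python
# Hypothetical tariff, purely illustrative; not any city's actual rates.
DROP_CHARGE      = 3.50   # flat amount tallied for getting into the taxi
PER_KM_RATE      = 1.80   # charged per kilometre travelled
PER_MINUTE_WAIT  = 0.50   # charged per minute of waiting or slow traffic

def metered_fare(distance_km, waiting_minutes):
    """Return a simple drop + distance + time fare, as a taximeter might tally it."""
    return DROP_CHARGE + PER_KM_RATE * distance_km + PER_MINUTE_WAIT * waiting_minutes

# Example: an 8 km trip with 6 minutes of waiting time.
print(f"{metered_fare(8, 6):.2f}")   # 3.50 + 14.40 + 3.00 = 20.90
```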
In London, despite the complex and haphazard road layout, GPS navigation aids have only recently been employed by a small number of 'black cab' taxi (as opposed to minicab) drivers. Instead, they are required to undergo a demanding process of learning and testing called The Knowledge. This typically takes around three years and equips them with a detailed command of 25,000 streets within central London, major routes outside this area, and all buildings and other destinations to which passengers may ask to be taken. Environmental concerns Taxicabs have been both criticized for creating pollution and praised as an environmentally responsible alternative to private car use. One study, published in the journal Atmospheric Environment in January 2006, showed that the level of pollution that Londoners are exposed to differs according to the mode of transport that they use. People in the back seat of a taxicab were exposed to the most pollution, while walking exposed them to the least. Alternative energy and propulsion In Australia, nearly all taxis run on LPG, alongside a growing fleet of hybrids. Argentina and the main cities of Brazil have large fleets of taxis running on natural gas. Many Brazilian taxis are flexible-fuel vehicles running on sugarcane ethanol, and some are equipped to run either on natural gas or as flex-fuel vehicles. At least two Brazilian car makers sell these types of bi-fuel vehicles. Malaysia and Singapore have many of their taxicabs running on compressed natural gas (CNG). San Francisco became one of the first cities to introduce hybrids for taxi service in 2005, with a fleet of 15 Ford Escape Hybrids, and by 2009 the original Escape Hybrids were retired after per vehicle. In 2007 the city approved the Clean Air Taxi Grant Program in order to encourage cab companies to purchase alternative fuel vehicles, by providing incentives of US$2,000 per new alternative fuel vehicle on a first-come, first-served basis. Out of a total of 1,378 eligible vehicles (wheelchair-accessible taxi-vans are excluded), 788 were alternative fuel vehicles, representing 57% of San Francisco's taxicab fleet by March 2010. Gasoline-electric hybrids accounted for 657 green taxis and compressed natural gas vehicles for 131. As of mid-2009 New York City had 2,019 hybrid taxis and 12 clean diesel vehicles, representing 15% of New York's 13,237 taxis in service, the most in any city in North America. At this time owners began retiring their original hybrid fleets after per vehicle. Two attempts by the Bloomberg Administration to implement policies to force the replacement of all of New York's 13,000 taxis with hybrids by 2012 were blocked by court rulings. Chicago is following New York City's lead by proposing a mandate for Chicago's entire fleet of 6,700 taxicabs to become hybrid by 1 January 2014. As of 2008 Chicago's fleet had only 50 hybrid taxicabs. In 2008 Boston mandated that its entire taxi fleet must be converted to hybrids by 2015. Arlington, Virginia also has a small fleet of 85 environmentally friendly hybrid cabs introduced in early 2008. The green taxi expansion is part of a county campaign known as Fresh AIRE, or Arlington Initiative to Reduce Emissions, and included a new all-hybrid taxi company called EnviroCAB, which became the first all-hybrid taxicab fleet in the United States, and the first carbon-negative taxicab company in the world. A similar all-hybrid taxicab company, Clean Air Cab, was launched in Phoenix, Arizona, in October 2009.
In Japan, electric taxicabs are becoming increasingly popular. In 2009, battery-swap company Better Place teamed with the Japanese government to trial a fleet of electric taxis with the Better Place battery-swap system in Yokohama. In 2010, the taxi company Hinomaru Limousine Company launched two Mitsubishi i-MiEV electric taxicabs in Tokyo. Both taxicabs had female drivers and were branded under ZeRO TAXI livery. Hybrid taxis are becoming more and more common in Canada, with all new taxis in British Columbia being hybrids or other fuel-efficient vehicles, such as the Toyota Prius or Toyota Corolla. Hybrids such as the Ford Escape Hybrid are slowly being added to the taxicab fleet in Mexico City. Other cities where taxi service is available with hybrid vehicles include Tokyo, London, Sydney, Rome and Singapore. Seoul introduced the first LPI hybrid taxi in December 2009. The internal combustion engine runs on liquefied petroleum gas (LPG) as a fuel. In 2010, Beijing, China, introduced electric taxis. A trial run began in March 2010 with taxis being cheaper than their regular gasoline counterparts. International trade association The Taxicab, Limousine & Paratransit Association (TLPA) was established in 1917 in the United States, and is a non-profit trade association of and for the private passenger transportation industry. Today its membership spans the globe and includes 1,100 taxicab companies, executive sedan and limousine services, airport shuttle fleets, non-emergency medical transportation companies, and paratransit services. In April 2011, TLPA announced a nationwide "Transportation on Patrol" initiative. The TOP program gives local police departments the materials they need to train volunteer taxi drivers to be good witnesses and watch out for criminal behavior. Occupational hazards Taxicab drivers are at risk for homicide at a far higher rate than the general working population in the United States (7.4 per 100,000 and 0.37 per 100,000, respectively). In efforts to reduce homicides, bulletproof partitions were introduced in many taxicabs in the 1990s, and in the 21st century, security cameras were added to many taxicabs. Security cameras have been shown to be more effective when implemented by cities and not taxicab companies. Cab drivers also work together to protect one another both from physical threats and passengers who refuse to pay. Also, in some countries, it is reported that taxi drivers engage in more unsafe driving behaviors. Taxi drivers' feelings about their occupation, including traffic chaos, social prestige, economic pressure, and job satisfaction, may influence their subsequent driving behavior. Regulation Support of deregulation Supporters of taxicab deregulation may argue that deregulation causes the following benefits: lower prices, because more taxis are competing on the market; lower operating costs, incentivized by the competition; the competition adds quality and the pressure to enhance one's reputation; new innovations such as shared-ride markets and special services for disabled people, new market niches; the demand for taxi services increases, as the prices fall and the quality improves. However, there appears to be a consensus that taxi deregulation has been less impressive than advocates had hoped. Possible reasons include overestimation of what deregulation could deliver and insufficiently thorough deregulation. Some also emphasize that the strong cab-driver subculture itself ("The Last American Cowboys") provides its own form of informal regulation.
Deregulation advocates may claim that the taxi service level increases most in the poorest sections of the city. The effect is highest in peak hours and bad weather, when the demand is highest. Deregulation advocates also may claim that, in a deregulated environment, black market taxis become legal, possibly eliminating their problems, and cities save money, as they do not have to plan and enforce regulation. In nearly all cities that deregulated, the number of taxis increased, more people were employed as drivers, and deregulation advocates claim needs were better satisfied. Existing taxi companies may try to limit competition by potential new entrants. For example, in New York City the monopoly advantage for taxi license holders was $590 million in the early 1980s. The city has 1,400 fewer licenses than in 1937. Proponents of deregulation argue that the main losers are the car-less poor and disabled people. Taxi owners form a strong lobby network that marginalizes drivers and taxi users. It also pays local government officials to uphold taxi regulation. The regulators usually do not wish to rise against the taxi-owner lobby. The politicians do not want taxi drivers to have a negative opinion of them. Taxi deregulation proponents claim that immigrants and other poor minorities suffer most from taxi regulation, because the work requires relatively little education. Regulation makes entrance to the taxi business particularly difficult for them. Elderly people, disabled people, housewives, and the poor use taxis more often than others. According to Moore and Rose, it is better to address potential problems of deregulation directly instead of regulating the number of taxi licences. For example, if the regulators want to increase safety, they should make safety statutes or publish a public list of safe taxi operators. Proponents of deregulation also claim that if officials want to regulate prices they should standardize the measures rather than command prices. For example, they may require that any distance tariffs are set for the first 1/5 mile and then for every subsequent 1/3 mile, to make it easier to compare the prices of different taxis. They should not prohibit other pricing than distance pricing. Deregulation advocates claim that regulators have only very limited information about the market. Black market taxis often have problems with safety, poor customer service, and fares. This situation is made worse because customers who patronize such taxis cannot complain to the police or media. However, proponents of taxi deregulation argue that when these illegal taxis are legalized, their behavior will improve, and complaints to officials about these formerly illegal taxis will be possible. Taxi companies claim that deregulation may lead to an unstable taxi market. However, one pro-deregulation study by Kitch, Isaacson and Kasper claims that the previous argument is a myth because it ignores the free taxi competition that existed in the U.S. up to 1929. Airport taxis as a special case Some deregulation proponents are less opposed to airport taxi regulation than to regulation of other taxi services. They argue that if an airport regulates prices for taxis in its taxi queues, such regulation has fewer disadvantages than citywide regulation. An airport may determine prices or organize different queues for taxi services of different qualities and prices. It can be argued whether rules set by the owner of an airport are regulation or just a business model.
Partial deregulation as a failure Proponents of deregulation argue that partial deregulation is the cause of many cases of deregulation failing to achieve desirable results in United States cities. Many U.S. cities retained regulations on prices and services while allowing free entry into the taxi business. Deregulation advocates argue that this prevented market mechanisms from solving information problems because new entrants have found it difficult to win new customers using new services or cheap prices. Also, ride-sharing has often been prohibited. Often officials have also prohibited pricing that would have made short rides in sparsely populated areas profitable. Thus drivers have refused to take such customers. Therefore, partial deregulation is not always enough to improve the situation. One study claims that deregulation was applied to too small an area. A taxi regulation report by the U.S. FTC concluded that there are no grounds for limiting the number of taxi companies and cars. These limitations cause a disproportionate burden on low-income people. It is better to increase the pay for unprofitable areas than to force the taxis to serve these areas. According to the report, the experience with free entry and price competition is mainly positive: prices have fallen, waiting times have shortened, the market shares of the biggest companies have fallen, and city councils have saved time on licensing and fare setting. However, the airports should either set their own price ceilings or allow for price competition by altering the queue system. Opposition to deregulation Opponents of taxi deregulation argue that deregulation results in high taxi driver turnover, which may increase the number of less-qualified taxi drivers, encourage dishonest business practices such as price gouging (especially on airport routes) and circuitous routing, and lead to poor customer service. A Connecticut General Assembly report argues that deregulation fails to cause price decreases because taxi passengers typically do not comparison-shop on price when searching for taxicabs, and that fares usually increased with deregulation because the higher supply of taxis caused drivers' earning potential to decrease. This report claims that deregulation resulted in dramatically increased taxi supply, especially at already overserved airport locations, fare increases in every city, and an increase in short-trip refusals by taxicab drivers. This report argues that deregulation has led to undesirable results in several American cities. Seattle deregulated taxis in 1980, resulting in a high supply of taxicabs, variable rates, price gouging, short-haul refusals, and poor treatment of passengers. As a result, Seattle re-regulated in 1984, reinstating a restriction on taxicab licenses and fare controls. In St. Louis, deregulation produced a 35% rise in taxi fares, and taxicab drivers complained of waiting hours at airports for customers at taxicab stands. Taxicab companies claimed they increased fares in order to make up for lost competition resulting from the increased supply of taxis. As a result, the St. Louis City Council froze new taxicab licenses in 2002. A study of the deregulation of taxis in Sweden in 1991 showed that the taxicab supply increased, but average fares also increased in almost all cases. Specifically, average fares per hour increased for all trips.
Average fares also increased for fares calculated by distance (per kilometer) in almost every category studied – for all customer-paid trips in municipalities of all three sizes (small, medium, and large) and for municipality-paid trips in small and large municipalities; fares only decreased for municipality-paid trips in medium-sized municipalities that were calculated per kilometer. Deregulation also resulted in decreased taxicab productivity and decreased taxi-company revenues. This study concluded that deregulation resulted in increased fares especially in rural areas, and the authors argued that the increased fares were due to low taxi company revenues after deregulation. Taxi companies claim that deregulation would cause problems, raise prices and lower service levels at certain hours or in certain places. The medallion system has been defended by some experts. They argue that the medallion system is similar to a brand-name capital asset and enforces quality of service because quality service results in higher ridership, thus increasing the value of owning the medallion. They argue that issuing new medallions would decrease the medallion value and thus the incentive for the medallion owner to provide quality service or comply with city regulations. They also argue that the medallion may be preferable to alternate systems of regulation (such as fines, required bonds with seizures of interest payments on those bonds for violations, or licensing of all would-be taxis with revocation of that license for violations) because fines are difficult to collect, license revocation may not be a sufficient deterrent for profitable violations such as price cheating, and because using penalties on bond interest payments gives regulators an incentive to impose penalties to collect revenue (rather than for legitimate violations). Medallions do not earn interest, and thus inappropriate seizures of interest by regulators are not possible. Results of deregulation in specific localities The results of taxi deregulation in specific cities have varied widely. A study of taxi deregulation in nine United States cities found that the number of taxi firms increased, but large incumbent firms continued to dominate all but one of the nine cities. Taxi prices did not fall in real terms, but increased in every city studied. Turnover was concentrated among small operators (usually one-cab operators); little turnover occurred among medium and large new firms, and no exit by a large incumbent firm has occurred since deregulation. Productivity decreased by at least one-third in all four cities for which sufficient data was obtainable; the authors argued that decreases of this magnitude in productivity have serious economic consequences for taxi drivers, by shifting the industry from employee drivers to lease drivers and causing the average taxi driver to earn a lower income. Innovation in service did not occur in the deregulated cities because taxi operators doubted that such innovations (especially shared-ride service) were justified by demand and believed they would cause a net decrease in revenue. Discounts were offered in certain deregulated cities; however, these discounts were small (10% typically) and were also offered in some regulated cities. The study found a lack of service innovation and little change in level of service despite the increased number of taxicabs.
In Japan, taxi deregulation resulted in modest decreases in taxi fares (primarily among long-distance trips); however, Japanese taxi fares remain very high (the highest in the world). Taxi driver incomes decreased, and the earnings of taxi companies also decreased substantially. Deregulation failed to increase taxicab ridership enough to compensate taxi companies for those losses. The burden of deregulation fell disproportionately on taxi drivers because taxi companies increased the number of taxis rented to drivers (to make more money from rental fees), which resulted in stiff competition among drivers, decreasing their earnings. Transportation professor Seiji Abe of Kansai University considered deregulation to be a failure in the Japanese taxi industry (despite what he considers success in other Japanese industries). In the Netherlands, taxi deregulation in 2000 failed to reach policy objectives of strengthening the role of the taxi in the overall Dutch transport system. Instead, the deregulation resulted in unanticipated fare increases (not decreases) in large cities, and bad driver behavior became a serious problem. Local authorities had lost their say in the market due to the deregulation, and thus were unable to correct these problems. In South Africa, taxi deregulation has resulted in the emergence of taxi cartels which carry out acts of gun violence against rival cartels in attempts to monopolize desirable routes. Taxis there were deregulated in 1987, resulting in fierce competition among new drivers, who then organized into rival cartels in the absence of government regulation, and which used violence and gangland tactics to protect and expand their territories. These "taxi wars" have resulted in between 120 and 330 deaths annually since deregulation. These taxi cartels have engaged in anticompetitive price-fixing. In New Zealand, taxi deregulation increased the supply of taxi services and initially decreased prices markedly in big cities, whereas the effects in smaller cities were small. In Ireland, taxi deregulation decreased waiting times so much that the liberalization became very popular among the public. The number of companies increased and the quality of cars and drivers did not fall. Some have argued that the regulation should be completely abolished, not just cut back. Minister Alan Kelly held a review of Ireland's taxi industry after Ireland's national broadcaster RTÉ broadcast an investigation into the taxi industry ten years after deregulation. In Finland, taxi fares rose 13% after deregulation in 2018.
Technology
Road transport
null
43346724
https://en.wikipedia.org/wiki/Demolition
Demolition
Demolition (also known as razing, cartage, and wrecking) is the science and engineering of safely and efficiently tearing down buildings and other artificial structures. Demolition contrasts with deconstruction, which involves taking a building apart while carefully preserving valuable elements for reuse. For small buildings, such as houses, that are only two or three stories high, demolition is a rather simple process. The building is pulled down either manually or mechanically using large hydraulic equipment: elevated work platforms, cranes, excavators or bulldozers. Larger buildings may require the use of a wrecking ball, a heavy weight on a cable that is swung by a crane into the side of the building. Wrecking balls are especially effective against masonry, but are less easily controlled and often less efficient than other methods. Newer methods may use rotational hydraulic shears and silenced rockbreakers attached to excavators to cut or break through wood, steel, and concrete. The use of shears is especially common when flame cutting would be dangerous. The tallest planned demolition of a building was the 52-storey 270 Park Avenue in New York City, which was built in 1960 and torn down in 2019–2021 to be replaced by a new building at the same address. Manual Before any demolition activities can take place, there are many steps that must be carried out beforehand, including performing asbestos abatement, removing hazardous or regulated materials, obtaining necessary permits, submitting necessary notifications, disconnecting utilities, rodent baiting, and developing site-specific safety and work plans. The typical razing of a building is accomplished as follows: Hydraulic excavators may be used to topple one- or two-story buildings by an undermining process. The strategy is to undermine the building while controlling the manner and direction in which it falls. The demolition project manager/supervisor will determine where undermining is necessary so that a building is pulled in the desired manner and direction. The walls are typically undermined at a building's base, but this is not always the case if the building design dictates otherwise. Safety and cleanup considerations are also taken into account in determining how the building is undermined and ultimately demolished. In some cases a crane with a wrecking ball is used to demolish the structure down to a certain manageable height. At that point undermining takes place as described above. However, crane-mounted demolition balls are rarely used within demolition due to the uncontrollable nature of the swinging ball and the associated safety implications. High reach demolition excavators are more often used for tall buildings where explosive demolition is not appropriate or possible. Excavators with shear attachments are typically used to dismantle steel structural elements. Hydraulic hammers are often used for concrete structures, and concrete processing attachments are used to crush concrete to a manageable size and to remove reinforcing steel. For tall concrete buildings, where neither explosive nor high reach demolition with an excavator is safe or practical, the "inside-out" method is used, whereby remotely operated mini-excavators demolish the building from the inside, whilst maintaining the outer walls of the building as a scaffolding, as each floor is demolished. To control dust, fire hoses are used to maintain a wet demolition. Hoses may be held by workers, secured in a fixed location, or attached to lifts to gain elevation.
Loaders or bulldozers may also be used to demolish a building. They are typically equipped with "rakes" (thick pieces of steel that could be an I-beam or tube) that are used to ram building walls. Skid loaders and loaders will also be used to take materials out and sort steel. The technique of Vérinage, used in France, weakens and buckles the supports of central floors, promoting the collapse of the top part of a building onto the bottom and resulting in a rapid, symmetrical collapse. The Japanese company Kajima Construction has developed a new method of demolishing buildings which involves using computer-controlled hydraulic jacks to support the bottom floor as the supporting columns are removed. The floor is lowered and this process is repeated for each floor. This technique is safer and more environmentally friendly, and is useful in areas of high population density. To demolish bridges, hoe rams are typically used to remove the concrete road deck and piers, while hydraulic shears are used to remove the bridge's structural steel. Fred Dibnah used a manual method of demolition to remove industrial chimneys in Great Britain. He cut an ingress at the base of the chimney, supporting the brickwork with wooden props, and then burned away the props so that the chimney fell, using no explosives and usually only hand-operated power tools. Building implosion Large buildings, tall chimneys, smokestacks, bridges, and increasingly some smaller structures may be destroyed by building implosion using explosives. Imploding a structure is very fast—the collapse itself only takes seconds—and an expert can ensure that the structure falls into its own footprint, so as not to damage neighboring structures. This is essential for tall structures in dense urban areas. Any error can be disastrous, however, and some demolitions have failed, severely damaging neighboring structures. One significant danger is from flying debris, which, when improperly prepared for, can kill onlookers. Another dangerous scenario is the partial failure of an attempted implosion. When a building fails to collapse completely, the structure may be unstable, tilting at a dangerous angle, and filled with un-detonated but still primed explosives, making it difficult for workers to approach safely. A third danger comes from air overpressure that occurs during the implosion. If the sky is clear, the shock wave, a wave of energy and sound, travels upwards and disperses, but if cloud coverage is low, the shock wave can travel outwards, breaking windows or causing other damage to surrounding buildings. Controlled implosion, being spectacular, is the method that the general public often thinks of when discussing demolition; however, it can be dangerous and is only used as a last resort when other methods are impractical or too costly. The destruction of large buildings has become increasingly common as the massive housing projects of the 1960s and 1970s are being leveled around the world. At and , the J. L. Hudson Department Store and Addition is the tallest steel-framed building and largest single structure ever imploded. Preparation It takes several weeks or months to prepare a building for implosion. All items of value, such as copper wiring, are stripped from a building. Some materials must be removed, such as glass that can form deadly projectiles, and insulation that can scatter over a wide area. Non-load bearing partitions and drywall are removed.
Selected columns on floors where explosives will be set are drilled and high explosives such as nitroglycerin, TNT, RDX, or C4 are placed in the holes. Smaller columns and walls are wrapped in detonating cord. The goal is to use as little explosive as possible so that the structure fails in a progressive collapse; only a few floors are therefore rigged with explosives, which makes the job safer and less costly. The areas with explosives are covered in thick geotextile fabric and fencing to absorb flying debris. Far more time-consuming than the demolition itself is the clean-up of the site, as the debris is loaded into trucks and hauled away. Deconstruction An alternative approach to demolition is the deconstruction of a building with the goal of minimizing the amount of materials going to landfills. This "green" approach is applied by removing the materials by type and segregating them for reuse or recycling. With proper planning this approach has resulted in landfill diversion rates that exceed 90% of an entire building and its contents in some cases. It also vastly reduces the CO2 emissions of removing a building in comparison to demolition. The development of plant and equipment has allowed for the easier segregation of demolition waste types on site and their reuse in the construction of the replacement building. On-site crushers allow the demolished concrete to be reused as Type 1 crushed aggregate, either as a piling mat for ground stabilization or as aggregate in the mixing of new concrete. Timber waste can be shredded using specialist timber shredders and composted, or used to form manufactured timber boards, such as MDF or chipboard. Safety is paramount; a site safety officer is usually assigned to each project to enforce all safety rules and regulations. Teardowns In real estate, a teardown or knockdown - also derisively called "bash and build" - is a term for demolishing a building immediately after purchasing it, freeing up the land for a new, typically larger building. The term first entered the real estate lexicon in the 1990s. A teardown is often done when the redevelopment value of a plot of land exceeds the value of the existing building. In the 2000s, teardowns by wealthy baby boomers replacing houses across America with outsized McMansions became so common that municipal building codes in many areas were revised, putting up more barriers to tearing down existing homes. Teardowns are often cheaper than restoring an existing, dilapidated building, but can diminish historical value due to the more generic, cookie-cutter appearance of new houses and buildings compared to antique ones. Sometimes, saving the older building can still be viable if the owner spends more money to restore it. Purposely ignoring issues with a building in order to force demolition for public safety reasons is known as "demolition by neglect". "Canyon effect" is a term used to describe the situation in which smaller houses are surrounded by new, multi-story buildings with blank walls.
Technology
Disciplines
null
43348949
https://en.wikipedia.org/wiki/Galaxy%20group
Galaxy group
A galaxy group or group of galaxies (GrG) is an aggregation of galaxies comprising about 50 or fewer gravitationally bound members, each at least as luminous as the Milky Way (about 10^10 times the luminosity of the Sun); collections of galaxies larger than groups that are first-order clustering are called galaxy clusters. The groups and clusters of galaxies can themselves be clustered into superclusters of galaxies. The Milky Way galaxy is part of a group of galaxies called the Local Group. Characteristics Groups of galaxies are the smallest aggregates of galaxies. They typically contain no more than 50 galaxies in a diameter of 1 to 2 megaparsecs (Mpc). Their mass is approximately 10^13 solar masses. The spread of velocities for the individual galaxies is about 150 km/s. However, this definition should be used as a guide only, as larger and more massive galaxy systems are sometimes classified as galaxy groups. Groups are the most common structures of galaxies in the universe, accounting for at least 50% of the galaxies in the local universe. Groups have a mass range between those of the very large elliptical galaxies and clusters of galaxies. In the local universe, about half of the groups exhibit diffuse X-ray emissions from their intracluster media. Those that emit X-rays appear to have early-type galaxies as members. The diffuse X-ray emissions come from zones within the inner 10–50% of the groups' virial radius, generally 50–500 kpc. Types There are several subtypes of groups. Compact groups A compact group consists of a small number of galaxies, typically around five, in close proximity and relatively isolated from other galaxies and formations. The first compact group to be discovered was Stephan's Quintet, found in 1877. The name refers to a compact group of four galaxies plus an unassociated foreground galaxy. Astronomer Paul Hickson created a catalogue of such groups in 1982, the Hickson Compact Groups. Compact groups of galaxies readily show the effect of dark matter, as the visible mass is far less than that needed to gravitationally hold the galaxies together in a bound group. Compact galaxy groups are also not dynamically stable over Hubble time, thus showing that galaxies evolve by merger, over the timescale of the age of the universe. Fossil groups Fossil galaxy groups, fossil groups, or fossil clusters are believed to be the end-result of galaxy merging within a normal galaxy group, leaving behind the X-ray halo of the progenitor group. Galaxies within a group interact and merge. The physical process behind this galaxy-galaxy merger is dynamical friction. The time-scales for dynamical friction on luminous (or L*) galaxies suggest that fossil groups are old, undisturbed systems that have seen little infall of L* galaxies since their initial collapse. Fossil groups are thus an important laboratory for studying the formation and evolution of galaxies and the intragroup medium in an isolated system. Fossil groups may still contain unmerged dwarf galaxies, but the more massive members of the group have condensed into the central galaxy. This hypothesis is supported by studies of computer simulations of cosmological volumes. The closest fossil group to the Milky Way is NGC 6482, an elliptical galaxy at a distance of approximately 180 million light-years located in the constellation of Hercules. Proto-groups Proto-groups are groups that are in the process of formation. They are the smaller counterparts of protoclusters.
These contain galaxies and protogalaxies embedded in dark matter haloes that are in the process of merging into single, group-scale dark matter haloes.
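As a rough, back-of-the-envelope illustration of how the figures quoted under Characteristics fit together, the short Python sketch below applies the simple virial estimate M ≈ σ²R/G to a hypothetical group with a velocity spread of about 150 km/s and a radius of about 1 Mpc; the input values and the omitted order-unity prefactor are assumptions chosen for illustration, not measurements of any particular group, so the result should only be read as an order-of-magnitude check against the roughly 10^13 solar-mass figure given above.

# Illustrative order-of-magnitude virial mass estimate for a galaxy group.
# The velocity spread and radius below are assumed example values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # one solar mass, kg
MPC = 3.086e22       # one megaparsec, m

sigma = 150e3        # velocity spread of member galaxies, m/s (about 150 km/s)
radius = 1.0 * MPC   # characteristic radius, m (half of a 2 Mpc diameter)

mass_kg = sigma**2 * radius / G                 # M ~ sigma^2 R / G, prefactor of order unity omitted
print(f"{mass_kg / M_SUN:.1e} solar masses")    # roughly 5e12, i.e. of order 10^13

With these example numbers the estimate comes out at a few times 10^12 solar masses, consistent, to within the omitted prefactor, with the approximate group mass quoted above.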
Physical sciences
Basics_2
Astronomy
43354034
https://en.wikipedia.org/wiki/Sail
Sail
A sail is a tensile structure, made from fabric or other membrane materials, that uses wind power to propel sailing craft, including sailing ships, sailboats, windsurfers, ice boats, and even sail-powered land vehicles. Sails may be made from a combination of woven materials (including canvas or polyester cloth), laminated membranes, or bonded filaments, usually in a three- or four-sided shape. A sail provides propulsive force via a combination of lift and drag, depending on its angle of attack, its angle with respect to the apparent wind. Apparent wind is the air velocity experienced on the moving craft and is the combined effect of the true wind velocity with the velocity of the sailing craft. Angle of attack is often constrained by the sailing craft's orientation to the wind or point of sail. On points of sail where it is possible to align the leading edge of the sail with the apparent wind, the sail may act as an airfoil, generating propulsive force as air passes along its surface, just as an airplane wing generates lift, which predominates over aerodynamic drag retarding forward motion. The more that the angle of attack diverges from the apparent wind as a sailing craft turns downwind, the more drag increases and lift decreases as propulsive forces, until a sail going downwind is predominated by drag forces. Sails are unable to generate propulsive force if they are aligned too closely to the wind. Sails may be attached to a mast, boom or other spar or may be attached to a wire that is suspended by a mast. They are typically raised by a line, called a halyard, and their angle with respect to the wind is usually controlled by a line, called a sheet. In use, they may be designed to be curved in both directions along their surface, often as a result of their curved edges. Battens may be used to extend the trailing edge of a sail beyond the line of its attachment points. Other non-rotating airfoils that power sailing craft include wingsails, which are rigid wing-like structures, and kites that power kite-rigged vessels, which do not employ a mast to support the airfoil and are beyond the scope of this article. Rigs Sailing craft employ two types of rig, the square rig and the fore-and-aft rig. The square rig carries the primary driving sails on horizontal spars, which are perpendicular, or square, to the keel of the vessel and to the masts. These spars are called yards and their tips, beyond the lifts, are called the yardarms. A ship mainly so rigged is called a square-rigger. A fore-and-aft rig consists of sails that are set along the line of the keel rather than perpendicular to it. Vessels so rigged are described as fore-and-aft rigged. History The invention of the sail was a technological advance of equal or even greater importance than the invention of the wheel. Some have suggested that it was as significant as the development of the Neolithic lifestyle or the first establishment of cities. Yet it is not known when or where this invention took place. Much of the early development of water transport is believed to have occurred in two main "nursery" areas of the world: Island Southeast Asia and the Mediterranean region. Both of these regions have warmer waters, so that rafts (usually "flow-through" structures) can be used without the risk of hypothermia, and a number of intervisible islands create both an invitation to travel and an environment where advanced navigation techniques are not needed. 
Alongside this, the Nile has a northward flowing current with a prevailing wind in the opposite direction, so giving the potential to drift in one direction and sail in the other. Many do not consider sails to have been used before the 5th millennium BCE. Others consider sails to have been invented much earlier. Archaeological studies of the Cucuteni-Trypillian culture ceramics show use of sailing boats from the sixth millennium BCE onwards. Excavations of the Ubaid period (c. 6000–4300 BCE) in Mesopotamia provide direct evidence of sailing boats. Square rigs Sails from ancient Egypt are depicted around 3200 BCE, where reed boats sailed upstream against the River Nile's current. Ancient Sumerians used square rigged sailing boats at about the same time, and it is believed they established sea trading routes as far away as the Indus valley. Greeks and Phoenicians began trading by ship by around 1200 BCE. V-shaped square rigs with two spars that come together at the hull were the ancestral sailing rig of the Austronesian peoples before they developed the fore-and-aft crab claw, tanja and junk rigs. The date of introduction of these later Austronesian sails is disputed. Lateen rigs Lateen sails emerged by around the 2nd century CE in the Mediterranean. They did not become common until the 5th century, when there is evidence that the Mediterranean square sail (which had been in wide use throughout the classical period) was undergoing a simplification of its rigging components. Both the increasing popularity of the lateen and the changes to the contemporary square rig are suggested to be cost saving measures, reducing the number of expensive components needed to fit out a ship. It has been a common and erroneous presumption among maritime historians that lateen had significantly better sailing performance than the square rig of the same period. Analysis of voyages described in contemporary accounts and also in various replica vessels demonstrates that the performance of square rig and lateen were very similar. Lateen provided a cheaper rig to build and maintain, with no degradation of performance. The lateen was adopted by Arab seafarers (usually in the sub-type: the settee sail), but the date is uncertain, with no firm evidence for their use in the Western Indian Ocean before 1500 CE. There is, however, good iconographic evidence of square sails being used by Arab, Persian and Indian ships in this region in, for instance, 1519. The popularity of the caravel in Northern European waters from about 1440 made lateen sails familiar in this part of the world. Additionally, lateen sails were used for the mizzen on early three-masted ships, playing a significant role in the development of the full-rigged ship. It did not, however, provide much of the propulsive force of these vessels – rather serving as a balancing sail that was needed for some manoeuvres in some sea and wind conditions. The extensive amount of contemporary maritime art showing the lateen mizzen on 16th and 17th century ships often has the sail furled. Practical experience on the Duyfken replica confirmed the role of the lateen mizzen. Crab claw rigs Austronesian invention of catamarans, outriggers, and the bi-sparred triangular crab claw sails enabled their ships to sail for vast distances in open ocean. It led to the Austronesian Expansion. 
From Taiwan, they rapidly settled the islands of Maritime Southeast Asia, then later sailed further onwards to Micronesia, Island Melanesia, Polynesia, and Madagascar, eventually settling a territory spanning half the globe. The proto-Austronesian words for sail, lay(r), and some other rigging parts date to about 3000 BCE when this group began their Pacific expansion. Austronesian rigs are distinctive in that they have spars supporting both the upper and lower edges of the sails (and sometimes in between). The sails were also made from salt-resistant woven leaves, usually from pandan plants. Crab claw sails used with single-outrigger ships in Micronesia, Island Melanesia, Polynesia, and Madagascar were intrinsically unstable when tacking leeward. To deal with this, Austronesians in these regions developed the shunting technique in sailing, in conjunction with uniquely reversible single-outriggers. In the rest of Austronesia, crab claw sails were mainly for double-outrigger (trimarans) and double-hulled (catamarans) boats, which remained stable even leeward. In western Island Southeast Asia, later square sails also evolved from the crab claw sail, the tanja and the junk rig, both of which retained the Austronesian characteristic of having more than one spar supporting the sail. Aerodynamic forces Aerodynamic forces on sails depend on wind speed and direction and the speed and direction of the craft. The direction that the craft is traveling with respect to the true wind (the wind direction and speed over the surface) is called the "point of sail". The speed of the craft at a given point of sail contributes to the apparent wind (VA), the wind speed and direction as measured on the moving craft. The apparent wind on the sail creates a total aerodynamic force, which may be resolved into drag, the force component in the direction of the apparent wind and lift, the force component normal (90°) to the apparent wind. Depending on the alignment of the sail with the apparent wind, lift or drag may be the predominant propulsive component. Total aerodynamic force also resolves into a forward, propulsive, driving force, resisted by the medium through or over which the craft is passing (e.g., through water, air, or over ice, sand) and a lateral force, resisted by the underwater foils, ice runners, or wheels of the sailing craft. For apparent wind angles aligned with the entry point of the sail, the sail acts as an airfoil and lift is the predominant component of propulsion. For apparent wind angles behind the sail, lift diminishes and drag increases as the predominant component of propulsion. For a given true wind velocity over the surface, a sail can propel a craft to a higher speed, on points of sail when the entry point of the sail is aligned with the apparent wind, than it can with the entry point not aligned, because of a combination of the diminished force from airflow around the sail and the diminished apparent wind from the velocity of the craft. Because of limitations on speed through the water, displacement sailboats generally derive power from sails generating lift on points of sail that include close-hauled through broad reach (approximately 40° to 135° off the wind). Because of low friction over the surface and high speeds over the ice that create high apparent wind speeds for most points of sail, iceboats can derive power from lift further off the wind than displacement boats. Types Each rig is configured in a sail plan, appropriate to the size of the sailing craft. 
A sail plan is a set of drawings, usually prepared by a naval architect, which shows the various combinations of sail proposed for a sailing ship. Sail plans may vary for different wind conditions—light to heavy. Both square-rigged and fore-and-aft rigged vessels have been built with a wide range of configurations for single and multiple masts with sails and with a variety of means of primary attachment to the craft, including: Jibs, which are usually attached to forestays, and staysails, which are mounted on other stays (typically wire cable) that support other masts from the bow aft. Gaff-rigged quadrilateral and Bermuda triangular sails, fore-and-aft sails directly attached to the mast at the luff. Square sails and such fore-and-aft quadrilateral sails as lug rigs, junk and spritsails and such triangular sails as the lateen, and the crab claw have their primary attachment to the vessel via a spar. Symmetrical spinnakers' primary attachment to a vessel is by a halyard. High-performance yachts, including the International C-Class Catamaran, have used or use rigid wing sails, which perform better than traditional soft sails but are more difficult to manage. A rigid wing sail was used by Stars and Stripes, the defender which won the 1988 America's Cup, and by USA-17, the challenger which won the 2010 America's Cup. USA-17's performance during the 2010 America's Cup races demonstrated a velocity made good upwind of over twice the wind speed and downwind of over 2.5 times the wind speed and the ability to sail as close as 20 degrees off the apparent wind. Shape The shape of a sail is defined by its edges and corners in the plane of the sail, laid out on a flat surface. The edges may be curved, either to extend the sail's shape as an airfoil or to define its shape in use. In use, the sail becomes a curved shape, adding the dimension of depth or draft. Edges – The top of all sails is called the head, the leading edge is called the luff on fore-and-aft sails (on symmetrical sails it is the windward leech), the trailing edge is the leech, and the bottom edge is the foot. The head is attached at the throat and peak to a gaff, yard, or sprit. For a triangular sail the head refers to the topmost corner. A fore-and-aft triangular mainsail achieves a better approximation of a wing form by extending the leech aft, beyond the line between the head and clew on an arc called the roach, rather than having a triangular shape. This added area would flutter in the wind and not contribute to the efficient airfoil shape of the sail without the presence of battens. Offshore cruising mainsails sometimes have a hollow leech (the inverse of a roach) to obviate the need for battens and their ensuing likelihood of chafing the sail. The roach on a square sail design is the arc of a circle above a straight line from clew to clew at the foot of a square sail, which allows the foot of the sail to clear stays coming up the mast, as the sails are rotated from side to side. Corners – The names of corners of sails vary, depending on shape and symmetry. In a triangular sail, the corner where the luff and the leech connect is called the head. On a square sail, the top corners are the head cringles, where there are grommets called cringles. On a quadrilateral sail, the peak is the upper aft corner of the sail, at the top end of a gaff or other spar. The throat is the upper forward corner of the sail, at the bottom end of a gaff or other spar. 
Gaff-rigged sails, and certain similar rigs, employ two halyards to raise the sails: the throat halyard raises the forward, throat end of the gaff, while the peak halyard raises the aft, peak end. The corner where the leech and foot connect is called the clew on a fore-and-aft sail. On a jib, the sheet is connected to the clew; on a mainsail, the sheet is connected to the boom (if present) near the clew. Clews are the lower two corners of a square sail. Square sails have sheets attached to their clews like triangular sails, but the sheets are used to pull the sail down to the yard below rather than to adjust the angle it makes with the wind. The corner on a fore-and-aft sail where the luff and foot connect is called the tack and, on a mainsail, is located where the boom and mast connect. In the case of a symmetrical spinnaker, each of the lower corners of the sail is a clew. However, under sail on a given tack, the corner to which the spinnaker sheet is attached is called the clew, and the corner attached to the spinnaker pole is referred to as the tack. On a square sail underway, the tack is the windward clew and also the line holding down that corner. Draft – Those triangular sails that are attached to both a mast along the luff and a boom along the foot have depth, called draft, which results from the luff and foot being curved, rather than straight as they are attached to those spars. Draft creates a more efficient airfoil shape for the sail. Draft can also be induced in triangular staysails by adjustment of the sheets and the angle from which they reach the sails. Material Sail characteristics derive, in part, from the design, construction and the attributes of the fibers, which are woven together to make the sail cloth. There are several key factors in evaluating a fiber for suitability in weaving a sail-cloth: initial modulus, breaking strength (tenacity), creep, and flex strength. Both the initial cost of the material and its durability define its cost-effectiveness over time. Traditionally, sails were made from flax or cotton canvas, although Scandinavian, Scottish and Icelandic cultures used woolen sails from the 11th into the 19th centuries. Materials used in sails, as of the 21st century, include nylon for spinnakers, where light weight and elastic resistance to shock load are valued, and a range of fibers, used for triangular sails, that includes Dacron, aramid fibers including Kevlar, and other liquid crystal polymer fibers including Vectran. Woven materials, like Dacron, may be specified as either high or low tenacity, as indicated, in part, by their denier count (a unit of measure for the linear mass density of fibers). Construction Cross-cut sails have the panels sewn parallel to one another, often parallel to the foot of the sail, and are the less expensive of the two sail constructions. Triangular cross-cut sail panels are designed to meet the mast and stay at an angle from either the warp or the weft (on the bias) to allow stretching along the luff, but minimize stretching on the leech and foot, where the fibers are aligned with the edges of the sail. Radial sails have panels that "radiate" from corners in order to efficiently transmit stress and are typically of higher performance than cross-cut sails. A bi-radial sail has panels radiating from two of three corners; a tri-radial sail has panels radiating from all three corners. 
Mainsails are more likely to be bi-radial, since there is very little stress at the tack, whereas head sails (spinnakers and jibs) are more likely to be tri-radial, because they are tensioned at their corners. Higher-performance sails may be laminated, constructed directly from multiple plies of filaments, fibers, taffetas, and films that are adhered together, instead of from woven textiles. Molded sails are laminated sails formed over a curved mold and adhered together into a shape that does not lie flat. Conventional sail panels are sewn together. Sails are tensile structures, so the role of a seam is to transmit a tensile load from panel to panel. For a sewn textile sail this is done through thread and is limited by the strength of the thread and the strength of the hole in the textile through which it passes. Sail seams are often overlapped between panels and sewn with zig-zag stitches that create many connections per unit of seam length. Whereas textiles are typically sewn together, other sail materials may be ultrasonically welded, a technique whereby high-frequency ultrasonic acoustic vibrations are locally applied to workpieces being held together under pressure to create a solid-state weld. It is commonly used for plastics, and especially for joining dissimilar materials. Sails feature reinforcements of fabric layers where lines attach at grommets or cringles. A bolt rope may be sewn onto the edges of a sail to reinforce it, or to fix the sail into a groove in the boom, in the mast, or in the luff foil of a roller-furling jib. They may have stiffening features, called battens, that help shape the whole sail (when full length) or just the roach, when present. They may have a variety of means of reefing them (reducing sail area), including rows of short lines affixed to the sail to wrap up unused sail, as on square and gaff rigs, or simply grommets through which a line or a hook may pass, as on Bermuda mainsails. Fore-and-aft sails may have tell-tales—pieces of yarn, thread or tape that are affixed to sails—to help visualize airflow over their surfaces. Running rigging The lines that attach to and control sails are part of the running rigging and differ between square and fore-and-aft rigs. Some rigs shift from one side of the mast to the other, e.g. the dipping lug sail and the lateen. The lines can be categorized as those that support the sail, those that shape it, and those that control its angle to the wind. Fore-and-aft rigged vessels Fore-and-aft rigged vessels have rigging that supports, shapes, and adjusts the sails to optimize their performance in the wind, which includes the following lines: Supporting – Halyards raise sails and control luff tension. Topping lifts hold booms and yards aloft. On a gaff sail, brails run from the leech to the spar to facilitate furling. Shaping – Barber haulers adjust a spinnaker/jib sheeting angle inboard at right angles to the sheet with a ring or clip on the sheet attached to cordage which is secured and adjusted via a fairlead and cam cleat. Kicking straps/boom vangs control a boom-footed sail's leech tension by exerting downward force mid-boom. Cunninghams tighten the luff of a boom-footed sail by pulling downward on a cringle in the luff of a mainsail above the tack. Downhauls lower a sail or a yard and can adjust the tension on the luff of a sail. Outhauls control the foot tension of a boom-footed sail. 
Adjusting angle to the wind – Sheets control angle of attack with respect to the apparent wind, the amount of leech "twist" near the head of the sail, and the foot tension of loose-footed sails. A preventer attaches to the end of the boom from a point near the mast to prevent an accidental gybe. Guys control spinnaker pole angle with respect to the apparent wind. Square-rigged vessels Square-rigged vessels require more controlling lines than fore-and-aft rigged ones, including the following. Supporting – Halyards raise and lower the yards. Brails run from the leech to the spar to facilitate furling. Buntlines serve to raise the foot up for shortening sail or for furling. Lifts adjust the tilt of a yard, to raise or lower the ends off the horizontal. Leechlines run to the leech (outer vertical edges) of a sail and serve to pull the leech both in and up when furling. Shaping – Bowlines run from the leech forward towards the bow to control the weather leech, keeping it taut and thus preventing it from curling back on itself. Clewlines raise the clews to the yard above. Adjusting angle to the wind – Braces adjust the fore and aft angle of a yard (i.e. to rotate the yard laterally, fore and aft, around the mast). Sheets attach to the clew to control the sail's angle to the wind. Tacks haul the clew of a square sail forward.
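The Aerodynamic forces section above defines the apparent wind as the combination of the true wind with the craft's own motion, and the discussion of USA-17 quotes velocity made good (VMG) as a multiple of wind speed. The minimal Python sketch below works through those definitions for an assumed example (a boat making 6 knots at 45 degrees to a 10-knot true wind); the numbers are illustrative only and are not taken from the article.

import math

true_wind_speed = 10.0                 # knots, assumed example value
boat_speed = 6.0                       # knots, assumed example value
true_wind_angle = math.radians(45.0)   # angle between the boat's heading and the true wind

# Apparent wind: vector sum of the true wind and the headwind created by the boat's motion.
apparent_speed = math.sqrt(true_wind_speed**2 + boat_speed**2
                           + 2 * true_wind_speed * boat_speed * math.cos(true_wind_angle))
apparent_angle = math.degrees(math.atan2(true_wind_speed * math.sin(true_wind_angle),
                                         boat_speed + true_wind_speed * math.cos(true_wind_angle)))

# Upwind velocity made good: the component of boat speed directed straight into the true wind.
vmg_upwind = boat_speed * math.cos(true_wind_angle)

print(f"Apparent wind: {apparent_speed:.1f} kn at {apparent_angle:.0f} degrees off the bow")
print(f"Upwind VMG: {vmg_upwind:.1f} kn")

With these example figures the apparent wind (about 15 knots at roughly 28 degrees off the bow) is both stronger and further forward than the true wind, which illustrates why low-drag craft such as iceboats, which generate large apparent winds, can carry lift-generating sails on points of sail further off the wind.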
Technology
Maritime transport
null
33607453
https://en.wikipedia.org/wiki/Spinal%20column
Spinal column
The spinal column, also known as the vertebral column, spine or backbone, is the core part of the axial skeleton in vertebrates. The vertebral column is the defining and eponymous characteristic of the vertebrate. The spinal column is a segmented column of vertebrae that surrounds and protects the spinal cord. The vertebrae are separated by intervertebral discs in a series of cartilaginous joints. The dorsal portion of the spinal column houses the spinal canal, an elongated cavity formed by the alignment of the vertebral neural arches that encloses and protects the spinal cord, with spinal nerves exiting via the intervertebral foramina to innervate each body segment. There are around 50,000 species of animals that have a vertebral column. The human spine is one of the most-studied examples, as the general structure of human vertebrae is fairly typical of that found in other mammals, reptiles, and birds. The shape of the vertebral body does, however, vary somewhat between different groups of living species. Individual vertebrae are named according to their corresponding region, including the neck, thorax, abdomen, pelvis or tail. In clinical medicine, features on vertebrae such as the spinous process can be used as surface landmarks to guide medical procedures such as lumbar punctures and spinal anesthesia. There are also many different spinal diseases in humans that can affect both the bony vertebrae and the intervertebral discs, with kyphosis, scoliosis, ankylosing spondylitis, and degenerative discs being recognizable examples. Spina bifida is the main birth defect affecting the vertebral column. Structure The number of vertebrae in a region can vary but overall the number remains the same. In a human spinal column, there are normally 33 vertebrae. The upper 24 pre-sacral vertebrae are articulating and separated from each other by intervertebral discs, and the lower nine are fused in adults, five in the sacrum and four in the coccyx, or tailbone. The articulating vertebrae are named according to their region of the spine. From top to bottom, there are 7 cervical vertebrae, 12 thoracic vertebrae and 5 lumbar vertebrae. The number of those in the cervical region, however, is only rarely changed, while that in the coccygeal region varies most. Excluding rare deviations, the total number of vertebrae ranges from 32 to 35. In about 10% of people, both the total number of pre-sacral vertebrae and the number of vertebrae in individual parts of the spine can vary. The most frequent deviations are: 11 (rarely 13) thoracic vertebrae, 4 or 6 lumbar vertebrae, 3 or 5 coccygeal vertebrae (rarely up to 7). There are numerous ligaments extending the length of the column, which include the anterior and posterior longitudinal ligaments at the front and back of the vertebral bodies, the ligamentum flavum deep to the laminae, the interspinous and supraspinous ligaments between spinous processes, and the intertransverse ligaments between the transverse processes. Vertebrae The vertebrae in the human vertebral column are divided into different body regions, which correspond to the curvatures of the vertebral column. The articulating vertebrae are named according to their region of the spine. Vertebrae in these regions are essentially alike, with minor variation. These regions are called the cervical spine, thoracic spine, lumbar spine, sacrum, and coccyx. There are seven cervical vertebrae, twelve thoracic vertebrae, and five lumbar vertebrae. The number of vertebrae in a region can vary but overall the number remains the same. 
The number of those in the cervical region, however, is only rarely changed. The vertebrae of the cervical, thoracic, and lumbar spines are independent bones and generally quite similar. The vertebrae of the sacrum and coccyx are usually fused and unable to move independently. Two special vertebrae are the atlas and axis, on which the head rests. A typical vertebra consists of two parts: the vertebral body (or centrum), which is ventral (or anterior, in the standard anatomical position) and withstands axial structural load; and the vertebral arch (also known as neural arch), which is dorsal (or posterior) and provides articulations and anchorages for ribs and core skeletal muscles. Together, these enclose the vertebral foramen, the series of which align to form the spinal canal, a body cavity that contains the spinal cord. Because the vertebral column will outgrow the spinal cord during child development, by adulthood the spinal cord often ends at the upper lumbar spine (at around L1/L2 level), the lower (caudal) end of the spinal canal is occupied by a ponytail-like bundle of spinal nerves descriptively called the cauda equina, and the sacrum and coccyx are fused without a central foramen. The vertebral arch is formed by a ventral pair of pedicles and a dorsal pair of laminae, and supports seven processes, four articular, two transverse and one spinous, the latter also being known as the neural spine. The transverse and spinous processes and their associated ligaments serve as important attachment sites for back and paraspinal muscles and the thoracolumbar fasciae. The spinous processes of the cervical and lumbar regions can be felt through the skin, and are important surface landmarks in clinical medicine. The four articular processes form two pairs of plane facet joints, above and below each vertebra, articulating with those of the adjacent vertebrae; they are joined by a thin portion of the neural arch called the pars interarticularis. The orientation of the facet joints restricts the range of motion between the vertebrae. Underneath each pedicle is a small opening (enclosed by the pedicle of the vertebra below) called the intervertebral foramen, which transmits the corresponding spinal nerve and dorsal root ganglion that exit the spinal canal. From top to bottom, the vertebrae are: Cervical spine (neck): 7 vertebrae (C1–C7) Thoracic spine (chest/upper back): 12 vertebrae (T1–T12) Lumbar spine (lower back): 5 vertebrae (L1–L5) Sacrum (pelvis region): 5 (fused) vertebrae (S1–S5) Coccyx (tailbone): 4 (3–5, fused) vertebrae Combined vertebral regions For some medical purposes, adjacent vertebral regions may be considered together: Cervicothoracic spine (or region or division): the combined region of the cervical vertebrae and the thoracic vertebrae Thoracolumbar spine (or region or division): the combined region of the thoracic vertebrae and the lumbar vertebrae Lumbosacral spine (or region or division): the combined region of the lumbar vertebrae and the sacral vertebrae Shape The vertebral column is curved in several places, a result of human bipedal evolution. These curves increase the vertebral column's strength, flexibility, and ability to absorb shock, stabilising the body in an upright position. When the load on the spine is increased, the curvatures increase in depth (become more curved) to accommodate the extra weight. They then spring back when the weight is removed. 
The upper cervical spine has a curve, convex forward, that begins at the axis (second cervical vertebra) at the apex of the odontoid process or dens and ends at the middle of the second thoracic vertebra; it is the least marked of all the curves. This inward curve is known as a lordotic curve. The thoracic curve, concave forward, begins at the middle of the second and ends at the middle of the twelfth thoracic vertebra. Its most prominent point behind corresponds to the spinous process of the seventh thoracic vertebra. This curve is known as a kyphotic curve. The lumbar curve is more marked in the female than in the male; it begins at the middle of the last thoracic vertebra, and ends at the sacrovertebral angle. It is convex anteriorly, the convexity of the lower three vertebrae being much greater than that of the upper two. This curve is described as a lordotic curve. The sacral curve begins at the sacrovertebral articulation, and ends at the point of the coccyx; its concavity is directed downward and forward as a kyphotic curve. The thoracic and sacral kyphotic curves are termed primary curves, because they are present in the fetus. The cervical and lumbar curves are compensatory, or secondary, and are developed after birth. The cervical curve forms when the infant is able to hold up its head (at three or four months) and sit upright (at nine months). The lumbar curve forms later from twelve to eighteen months, when the child begins to walk. Surfaces Anterior surface When viewed from in front, the width of the bodies of the vertebrae is seen to increase from the second cervical to the first thoracic; there is then a slight diminution in the next three vertebrae. Below this, there is again a gradual and progressive increase in width as low as the sacrovertebral angle. From this point there is a rapid diminution, to the apex of the coccyx. Posterior surface From behind, the vertebral column presents in the median line the spinous processes. In the cervical region (with the exception of the second and seventh vertebrae), these are short, horizontal, and bifid. In the upper part of the thoracic region they are directed obliquely downward; in the middle they are almost vertical, and in the lower part they are nearly horizontal. In the lumbar region they are nearly horizontal. The spinous processes are separated by considerable intervals in the lumbar region, by narrower intervals in the neck, and are closely approximated in the middle of the thoracic region. Occasionally one of these processes deviates a little from the median line — which can sometimes be indicative of a fracture or a displacement of the spine. On either side of the spinous processes is the vertebral groove formed by the laminae in the cervical and lumbar regions, where it is shallow, and by the laminae and transverse processes in the thoracic region, where it is deep and broad; these grooves lodge the deep muscles of the back. Lateral to the spinous processes are the articular processes, and still more laterally the transverse processes. In the thoracic region, the transverse processes stand backward, on a plane considerably behind that of the same processes in the cervical and lumbar regions. In the cervical region, the transverse processes are placed in front of the articular processes, lateral to the pedicles and between the intervertebral foramina. In the thoracic region they are posterior to the pedicles, intervertebral foramina, and articular processes. 
In the lumbar region they are in front of the articular processes, but behind the intervertebral foramina. Lateral surfaces The sides of the vertebral column are separated from the posterior surface by the articular processes in the cervical and thoracic regions and by the transverse processes in the lumbar region. In the thoracic region, the sides of the bodies of the vertebrae are marked in the back by the facets for articulation with the heads of the ribs. More posteriorly are the intervertebral foramina, formed by the juxtaposition of the vertebral notches, oval in shape, smallest in the cervical and upper part of the thoracic regions and gradually increasing in size to the last lumbar. They transmit the special spinal nerves and are situated between the transverse processes in the cervical region and in front of them, in the thoracic and lumbar regions. Ligaments There are different ligaments involved in the holding together of the vertebrae in the column, and in the column's movement. The anterior and posterior longitudinal ligaments extend the length of the vertebral column along the front and back of the vertebral bodies. The interspinous ligaments connect the adjoining spinous processes of the vertebrae. The supraspinous ligament extends the length of the spine running along the back of the spinous processes, from the sacrum to the seventh cervical vertebra. From there it is continuous with the nuchal ligament. Development The striking segmented pattern of the spine is established during embryogenesis when somites are rhythmically added to the posterior of the embryo. Somite formation begins around the third week when the embryo begins gastrulation and continues until all somites are formed. Their number varies between species: there are 42 to 44 somites in the human embryo and around 52 in the chick embryo. The somites are spheres, formed from the paraxial mesoderm that lies at the sides of the neural tube, and they contain the precursors of spinal bone, the vertebrae, ribs and some of the skull, as well as muscle, ligaments and skin. Somitogenesis and the subsequent distribution of somites are controlled by a clock and wavefront model acting in cells of the paraxial mesoderm. Soon after their formation, sclerotomes, which give rise to some of the bone of the skull, the vertebrae and ribs, migrate, leaving behind the remainder of the somite, now termed a dermamyotome. This then splits to give the myotomes which will form the muscles and dermatomes which will form the skin of the back. Sclerotomes become subdivided into an anterior and a posterior compartment. This subdivision plays a key role in the definitive patterning of vertebrae that form when the posterior part of one somite fuses to the anterior part of the consecutive somite during a process termed resegmentation. Disruption of the somitogenesis process in humans results in diseases such as congenital scoliosis. So far, the human homologues of three genes associated with the mouse segmentation clock (MESP2, DLL3 and LFNG) have been shown to be mutated in cases of congenital scoliosis, suggesting that the mechanisms involved in vertebral segmentation are conserved across vertebrates. In humans the first four somites are incorporated in the base of the occipital bone of the skull and the next 33 somites will form the vertebrae, ribs, muscles, ligaments and skin. The remaining posterior somites degenerate. During the fourth week of embryogenesis, the sclerotomes shift their position to surround the spinal cord and the notochord. 
This column of tissue has a segmented appearance, with alternating dense and less dense areas. As the sclerotome develops, it condenses further, eventually developing into the vertebral body. Development of the appropriate shapes of the vertebral bodies is regulated by HOX genes. The less dense tissue that separates the sclerotome segments develops into the intervertebral discs. The notochord disappears in the sclerotome (vertebral body) segments but persists in the region of the intervertebral discs as the nucleus pulposus. The nucleus pulposus and the fibers of the anulus fibrosus make up the intervertebral disc. The primary curves (thoracic and sacral curvatures) form during fetal development. The secondary curves develop after birth. The cervical curvature forms as a result of lifting the head and the lumbar curvature forms as a result of walking. Function Spinal cord The vertebral column surrounds the spinal cord which travels within the spinal canal, formed from a central hole within each vertebra. The spinal cord is part of the central nervous system that supplies nerves and receives information from the peripheral nervous system within the body. The spinal cord consists of grey and white matter and a central cavity, the central canal. Adjacent to each vertebra emerge spinal nerves. The spinal nerves provide sympathetic nervous supply to the body, with nerves emerging forming the sympathetic trunk and the splanchnic nerves. The spinal canal follows the different curves of the column; it is large and triangular in those parts of the column that enjoy the greatest freedom of movement, such as the cervical and lumbar regions, and is small and rounded in the thoracic region, where motion is more limited. The spinal cord terminates in the conus medullaris and cauda equina. Clinical significance Disease Spina bifida is a congenital disorder in which there is a defective closure of the vertebral arch. Sometimes the spinal meninges and also the spinal cord can protrude through this, and this is called spina bifida cystica. Where the condition does not involve this protrusion it is known as spina bifida occulta. Sometimes all of the vertebral arches may remain incomplete. Another, though rare, congenital disease is Klippel–Feil syndrome, which is the fusion of any two of the cervical vertebrae. Spondylolisthesis is the forward displacement of a vertebra and retrolisthesis is a posterior displacement of one vertebral body with respect to the adjacent vertebra to a degree less than a dislocation. Spondylolysis, also known as a pars defect, is a defect or fracture at the pars interarticularis of the vertebral arch. Spinal disc herniation, more commonly called a "slipped disc", is the result of a tear in the outer ring (anulus fibrosus) of the intervertebral disc, which lets some of the soft gel-like material, the nucleus pulposus, bulge out in a hernia. Spinal stenosis is a narrowing of the spinal canal which can occur in any region of the spine, though less commonly in the thoracic region. The stenosis can constrict the spinal canal, giving rise to a neurological deficit. Pain at the coccyx (tailbone) is known as coccydynia. Spinal cord injury is damage to the spinal cord that causes changes in its function, either temporary or permanent. Spinal cord injuries can be divided into categories: complete transection, hemisection, central spinal cord lesions, posterior spinal cord lesions, and anterior spinal cord lesions. 
Vertebral scalloping is an increase in the concavity of the posterior vertebral body. It can be seen on lateral X-ray and sagittal views of CT and MRI scans. The concavity is due to increased pressure exerted on the vertebrae by a mass. Internal spinal masses such as spinal astrocytoma, ependymoma, schwannoma, and neurofibroma, as well as achondroplasia, cause vertebral scalloping. Curvature Excessive or abnormal spinal curvature is classed as a spinal disease or dorsopathy and includes the following abnormal curvatures: Kyphosis is an exaggerated kyphotic (convex) curvature of the thoracic region in the sagittal plane, also called hyperkyphosis. This produces the so-called "humpback" or "dowager's hump", a condition commonly resulting from osteoporosis. Lordosis, an exaggerated lordotic (concave) curvature of the lumbar region in the sagittal plane, is known as lumbar hyperlordosis and also as "swayback". Temporary lordosis is common during pregnancy. Scoliosis, lateral curvature, is the most common abnormal curvature, occurring in 0.5% of the population. It is more common among females and may result from unequal growth of the two sides of one or more vertebrae, so that they do not fuse properly. It can also be caused by pulmonary atelectasis (partial or complete deflation of one or more lobes of the lungs) as observed in asthma or pneumothorax. Kyphoscoliosis is a combination of kyphosis and scoliosis. Anatomical landmarks Individual vertebrae of the human vertebral column can be felt and used as surface anatomy, with reference points taken from the middle of the vertebral body. This provides anatomical landmarks that can be used to guide procedures such as a lumbar puncture and also as vertical reference points to describe the locations of other parts of human anatomy, such as the positions of organs. Other animals Variations in vertebrae The general structure of vertebrae in other animals is largely the same as in humans. Individual vertebrae are composed of a centrum (body), arches protruding from the top and bottom of the centrum, and various processes projecting from the centrum and/or arches. An arch extending from the top of the centrum is called a neural arch, while the haemal arch is found underneath the centrum in the caudal (tail) vertebrae of fish, most reptiles, some birds, some dinosaurs and some mammals with long tails. The vertebral processes can either give the structure rigidity, help the vertebrae articulate with ribs, or serve as muscle attachment points. Common types are transverse process, diapophyses, parapophyses, and zygapophyses (both the cranial zygapophyses and the caudal zygapophyses). The centrum of the vertebra can be classified based on the fusion of its elements. In temnospondyls, bones such as the spinous process, the pleurocentrum and the intercentrum are separate ossifications. Fused elements, however, classify a vertebra as having holospondyly. A vertebra can also be described in terms of the shape of the ends of the centrum. Centra with flat ends are acoelous, like those in mammals. These flat ends of the centra are especially good at supporting and distributing compressive forces. Amphicoelous vertebrae have centra with both ends concave. This shape is common in fish, where most motion is limited. Amphicoelous centra often are integrated with a full notochord. Procoelous vertebrae are anteriorly concave and posteriorly convex. They are found in frogs and modern reptiles. 
Opisthocoelous vertebrae are the opposite, possessing anterior convexity and posterior concavity. They are found in salamanders, and in some non-avian dinosaurs. Heterocoelous vertebrae have saddle-shaped articular surfaces. This type of configuration is seen in turtles that retract their necks, and birds, because it permits extensive lateral and vertical flexion motion without stretching the nerve cord too extensively or wringing it about its long axis. In horses, the Arabian breed can have one fewer vertebra and one fewer pair of ribs. This anomaly disappears in foals that are the product of an Arabian and another breed of horse. Regional vertebrae Vertebrae are defined by their location in the vertebral column. Cervical vertebrae are those in the neck area. With the exception of the two sloth genera (Choloepus and Bradypus) and the manatee genus (Trichechus), all mammals have seven cervical vertebrae. In other vertebrates, the number of cervical vertebrae can range from a single vertebra in amphibians to as many as 25 in swans or 76 in the extinct plesiosaur Elasmosaurus. The dorsal vertebrae range from the bottom of the neck to the top of the pelvis. Dorsal vertebrae attached to the ribs are called thoracic vertebrae, while those without ribs are called lumbar vertebrae. The sacral vertebrae are those in the pelvic region, and range from one in amphibians, to two in most birds and modern reptiles, or up to three to five in mammals. When multiple sacral vertebrae are fused into a single structure, it is called the sacrum. The synsacrum is a similar fused structure found in birds that is composed of the sacral, lumbar, and some of the thoracic and caudal vertebrae, as well as the pelvic girdle. Caudal vertebrae compose the tail, and the final few can be fused into the pygostyle in birds, or into the coccygeal or tail bone in chimpanzees (and humans). Fish and amphibians The vertebrae of lobe-finned fishes consist of three discrete bony elements. The vertebral arch surrounds the spinal cord, and is of broadly similar form to that found in most other vertebrates. Just beneath the arch lies a small plate-like pleurocentrum, which protects the upper surface of the notochord, and below that, a larger arch-shaped intercentrum to protect the lower border. Both of these structures are embedded within a single cylindrical mass of cartilage. A similar arrangement was found in the primitive labyrinthodonts, but in the evolutionary line that led to reptiles (and hence, also to mammals and birds), the intercentrum became partially or wholly replaced by an enlarged pleurocentrum, which in turn became the bony vertebral body. In most ray-finned fishes, including all teleosts, these two structures are fused with, and embedded within, a solid piece of bone superficially resembling the vertebral body of mammals. In living amphibians, there is simply a cylindrical piece of bone below the vertebral arch, with no trace of the separate elements present in the early tetrapods. In cartilaginous fish, such as sharks, the vertebrae consist of two cartilaginous tubes. The upper tube is formed from the vertebral arches, but also includes additional cartilaginous structures filling in the gaps between the vertebrae, and so enclosing the spinal cord in an essentially continuous sheath. The lower tube surrounds the notochord, and has a complex structure, often including multiple layers of calcification. Lampreys have vertebral arches, but nothing resembling the vertebral bodies found in all higher vertebrates. 
Even the arches are discontinuous, consisting of separate pieces of arch-shaped cartilage around the spinal cord in most parts of the body, changing to long strips of cartilage above and below in the tail region. Hagfishes lack a true vertebral column, and are therefore not properly considered vertebrates, but a few tiny neural arches are present in the tail. Other vertebrates The general structure of human vertebrae is fairly typical of that found in other mammals, reptiles, and birds (amniotes). The shape of the vertebral body does, however, vary somewhat between different groups. In humans and other mammals, it typically has flat upper and lower surfaces, while in reptiles the anterior surface commonly has a concave socket into which the expanded convex face of the next vertebral body fits. Even these patterns are only generalisations, however, and there may be variation in form of the vertebrae along the length of the spine even within a single species. Some unusual variations include the saddle-shaped sockets between the cervical vertebrae of birds and the presence of a narrow hollow canal running down the centre of the vertebral bodies of geckos and tuataras, containing a remnant of the notochord. Reptiles often retain the primitive intercentra, which are present as small crescent-shaped bony elements lying between the bodies of adjacent vertebrae; similar structures are often found in the caudal vertebrae of mammals. In the tail, these are attached to chevron-shaped bones called haemal arches, which attach below the base of the spine, and help to support the musculature. These latter bones are probably homologous with the ventral ribs of fish. The number of vertebrae in the spines of reptiles is highly variable, and may be several hundred in some species of snake. In birds, there is a variable number of cervical vertebrae, which often form the only truly flexible part of the spine. The thoracic vertebrae are partially fused, providing a solid brace for the wings during flight. The sacral vertebrae are fused with the lumbar vertebrae, and some thoracic and caudal vertebrae, to form a single structure, the synsacrum, which is thus of greater relative length than the sacrum of mammals. In living birds, the remaining caudal vertebrae are fused into a further bone, the pygostyle, for attachment of the tail feathers. Aside from the tail, the number of vertebrae in mammals is generally fairly constant. There are almost always seven cervical vertebrae (sloths and manatees are among the few exceptions), followed by around twenty or so further vertebrae, divided between the thoracic and lumbar forms, depending on the number of ribs. There are generally three to five vertebrae with the sacrum, and anything up to fifty caudal vertebrae. Dinosaurs The vertebral column in dinosaurs consists of the cervical (neck), dorsal (back), sacral (hips), and caudal (tail) vertebrae. Saurischian dinosaur vertebrae sometimes possess features known as pleurocoels, which are hollow depressions on the lateral portions of the vertebrae, perforated to create an entrance into the air chambers within the vertebrae, which served to decrease the weight of these bones without sacrificing strength. These pleurocoels were filled with air sacs, which would have further decreased weight. In sauropod dinosaurs, the largest known land vertebrates, pleurocoels and air sacs may have reduced the animal's weight by over a ton in some instances, a handy evolutionary adaption in animals that grew to over 30 metres in length. 
In many hadrosaur and theropod dinosaurs, the caudal vertebrae were reinforced by ossified tendons. The presence of three or more sacral vertebrae, in association with the hip bones, is one of the defining characteristics of dinosaurs. The occipital condyle is a structure on the posterior part of a dinosaur's skull that articulates with the first cervical vertebra.
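As a compact restatement of the regional counts for a typical human vertebral column given earlier in this article, the short Python sketch below encodes the regions and generates the conventional labels (C1–C7, T1–T12, and so on); it adds no information beyond the figures already quoted above.

# Typical human vertebral column by region (counts as given in the text above).
REGIONS = [
    ("Cervical", "C", 7),     # neck
    ("Thoracic", "T", 12),    # chest / upper back
    ("Lumbar", "L", 5),       # lower back
    ("Sacral", "S", 5),       # fused into the sacrum in adults
    ("Coccygeal", "Co", 4),   # fused into the coccyx; 3-5 in some people
]

labels = [f"{prefix}{i}" for _, prefix, count in REGIONS for i in range(1, count + 1)]

print("Total vertebrae:", len(labels))       # 33
print("Cervical:", ", ".join(labels[:7]))    # C1 ... C7 (C1 is the atlas, C2 the axis)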
Biology and health sciences
Skeletal system
null
33610705
https://en.wikipedia.org/wiki/Tree%20kingfisher
Tree kingfisher
The tree kingfishers, also called wood kingfishers or Halcyoninae, are the most numerous of the three subfamilies of birds in the kingfisher family, with around 70 species divided into 12 genera, including several species of kookaburras. The subfamily appears to have arisen in Indochina and Maritime Southeast Asia and then spread to many areas around the world. Tree kingfishers are widespread through Asia and Australasia, but also appear in Africa and the islands of the Pacific and Indian Oceans, using a range of habitats from tropical rainforest to open woodlands. The tree kingfishers are short-tailed, large-headed, compact birds with long, pointed bills. Like other Coraciiformes, they are brightly coloured. Most are monogamous and territorial, nesting in holes in trees or termite nests. Both parents incubate the eggs and feed the chicks. Although some tree kingfishers frequent wetlands, none are specialist fish-eaters. Most species dive onto prey from a perch, mainly taking slow-moving invertebrates or small vertebrates. Taxonomy The tree kingfisher subfamily is often given the name Daceloninae, introduced by Charles Lucien Bonaparte in 1841, but the name Halcyoninae, introduced by Nicholas Aylward Vigors in 1825, is earlier and has priority. The subfamily Halcyoninae is one of three subfamilies in the kingfisher family Alcedinidae. The other two are Alcedininae and Cerylinae. The subfamily contains around 70 species divided into 12 genera. A molecular study published in 2017 found that the genera Dacelo and Actenoides as currently defined are paraphyletic. The shovel-billed kookaburra in the monotypic genus Clytoceyx sits within Dacelo and the glittering kingfisher in the monotypic genus Caridonax lies within Actenoides. List of species Genus Actenoides Green-backed kingfisher, Actenoides monachus Scaly-breasted kingfisher, Actenoides princeps Moustached kingfisher, Actenoides bougainvillei Spotted wood kingfisher, Actenoides lindsayi Blue-capped kingfisher, Actenoides hombroni Rufous-collared kingfisher, Actenoides concretus Genus Melidora Hook-billed kingfisher, Melidora macrorrhina Genus Lacedo Banded kingfisher, Lacedo pulchella Genus Tanysiptera, paradise kingfishers Common paradise kingfisher, Tanysiptera galatea Kofiau paradise kingfisher, Tanysiptera ellioti Biak paradise kingfisher, Tanysiptera riedelii Numfor paradise kingfisher, Tanysiptera carolinae Little paradise kingfisher, Tanysiptera hydrocharis Buff-breasted paradise kingfisher, Tanysiptera sylvia Black-capped paradise kingfisher, Tanysiptera nigriceps Red-breasted paradise kingfisher, Tanysiptera nympha Brown-headed paradise kingfisher, Tanysiptera danae Genus Cittura Sulawesi lilac kingfisher, Cittura cyanotis Sangihe lilac kingfisher, Cittura sanghirensis Genus Dacelo, kookaburras Laughing kookaburra, Dacelo novaeguineae Blue-winged kookaburra, Dacelo leachii Spangled kookaburra, Dacelo tyro Shovel-billed kookaburra, Dacelo rex Rufous-bellied kookaburra, Dacelo gaudichaud Genus Caridonax White-rumped kingfisher, Caridonax fulgidus Genus Pelargopsis Stork-billed kingfisher, Pelargopsis capensis Great-billed kingfisher, Pelargopsis melanorhyncha Brown-winged kingfisher, Pelargopsis amauroptera Genus Halcyon Ruddy kingfisher, Halcyon coromanda White-throated kingfisher, Halcyon smyrnensis Brown-breasted kingfisher, Halcyon gularis Javan kingfisher, Halcyon cyanoventris Chocolate-backed kingfisher, Halcyon badia Black-capped kingfisher, Halcyon pileata Grey-headed kingfisher, Halcyon leucocephala Brown-hooded kingfisher, 
Halcyon albiventris Striped kingfisher, Halcyon chelicuti Blue-breasted kingfisher, Halcyon malimbica Woodland kingfisher, Halcyon senegalensis Mangrove kingfisher, Halcyon senegaloides Genus Todirhamphus Blue-black kingfisher, Todirhamphus nigrocyaneus Rufous-lored kingfisher, Todirhamphus winchelli Blue-and-white kingfisher, Todirhamphus diops Lazuli kingfisher, Todirhamphus lazuli Forest kingfisher, Todirhamphus macleayii White-mantled kingfisher, Todirhamphus albonotatus Ultramarine kingfisher, Todirhamphus leucopygius Vanuatu kingfisher, Todirhamphus farquhari Sombre kingfisher, Todirhamphus funebris Collared kingfisher, Todirhamphus chloris Torresian kingfisher, Todirhamphus sordidus Islet kingfisher, Todirhamphus colonus Mariana kingfisher, Todirhamphus albicilla Melanesian kingfisher, Todirhamphus tristrami Pacific kingfisher, Todirhamphus sacer Talaud kingfisher, Todirhamphus enigma Guam kingfisher, Todirhamphus cinnamominus Rusty-capped kingfisher, Todiramphus pelewensis Pohnpei kingfisher, Todiramphus reichenbachii Beach kingfisher, Todirhamphus saurophaga Sacred kingfisher, Todirhamphus sanctus Flat-billed kingfisher, Todirhamphus recurvirostris Cinnamon-banded kingfisher, Todirhamphus australasia Chattering kingfisher, Todirhamphus tuta Mewing kingfisher, Todirhamphus ruficollaris Society kingfisher, Todirhamphus veneratus Mangareva kingfisher, Todirhamphus gambieri Niau kingfisher, Todirhamphus gertrudae Marquesan kingfisher, Todirhamphus godeffroyi Red-backed kingfisher, Todirhamphus pyrrhopygia Genus Syma Yellow-billed kingfisher, Syma torotoro Mountain kingfisher, Syma megarhyncha Description Kingfishers are short-tailed, large-headed, compact birds with long, pointed bills. Like other Coraciiformes, they are brightly coloured. The tree kingfishers are medium to large species, mostly typical kingfishers in appearance, although the shovel-billed kookaburra has a huge conical bill, and the Tanysiptera paradise kingfishers have long tail streamers. Some species, notably the kookaburras, show sexual dimorphism. Distribution and habitat Most tree kingfishers are found in the warm climates of Africa, southern and southeast Asia, and Australasia. No members of this family are found in the Americas. The origin of the family is thought to have been in tropical Australasia, which still has the most species. Tree kingfishers use a range of habitats from tropical rainforest to open woodlands and thornbush country. Many are not closely tied to water, and can be found in arid areas of Australia and Africa. Breeding Tree kingfishers are monogamous and territorial, although some species, including three kookaburras, have a cooperative breeding system involving young from earlier broods. The nest is a tree hole, either natural, an old woodpecker nest, or excavated in soft or rotting wood by the kingfishers. Several species dig holes in termite nests. No nest material is added, although litter may build up over the years. Both parents incubate the eggs and feed the chicks. Egg laying is staggered at one-day intervals so that if food is short, only the older, larger nestlings get fed. The chicks are naked, blind, and helpless when they hatch, and stand on their heels, unlike adults. 
The shovel-billed kookaburra digs through leaf litter for worms and other prey, and the Vanuatu kingfisher feeds exclusively on insects and spiders. Several other western Pacific species are also mainly insectivorous and flycatch for prey. As with the other kingfisher families, insectivorous species tend to have flattened, red bills to assist in the capture of insects.
Biology and health sciences
Coraciiformes
Animals
22037519
https://en.wikipedia.org/wiki/Springtail
Springtail
Springtails (class Collembola) form the largest of the three lineages of modern hexapods that are no longer considered insects. Although the three lineages are sometimes grouped together in a class called Entognatha because they have internal mouthparts, they do not appear to be any more closely related to one another than they are to all insects, which have external mouthparts. Springtails are omnivorous, free-living organisms that prefer moist conditions. They do not directly engage in the decomposition of organic matter, but contribute to it indirectly through the fragmentation of organic matter and the control of soil microbial communities. The word Collembola is from the ancient Greek "glue" and "peg"; this name was given due to the existence of the collophore, which was previously thought to stick to surfaces to stabilize the creature. Early DNA sequence studies suggested that Collembola represent a separate evolutionary line from the other Hexapoda, but others disagree; this seems to be caused by widely divergent patterns of molecular evolution among the arthropods. The adjustments of traditional taxonomic rank for springtails reflect the occasional incompatibility of traditional groupings with modern cladistics: when they were included with the insects, they were ranked as an order; as part of the Entognatha, they are ranked as a subclass. If they are considered a basal lineage of Hexapoda, they are elevated to full class status. Morphology Members of the Collembola are normally less than long, have six or fewer abdominal segments, and possess a tubular appendage (the collophore or ventral tube) with reversible, sticky vesicles, projecting ventrally from the first abdominal segment. It is believed to be associated with fluid uptake and balance, excretion, and orientation of the organism itself. Most species have an abdominal, tail-like appendage known as a furcula (or furca). It is located on the fourth abdominal segment of springtails and is folded beneath the body, held under tension by a small structure called the retinaculum (or tenaculum). When released, it snaps against the substrate, flinging the springtail into the air and allowing for rapid evasion and travel. All of this takes place in as little as 18 milliseconds. Springtails also possess the ability to reduce their body size by as much as 30% through subsequent ecdyses (moulting) if temperatures rise high enough. The shrinkage is genetically controlled. Since warmer conditions increase metabolic rates and energy requirements in organisms, the reduction in body size is advantageous to their survival. The Poduromorpha and Entomobryomorpha have an elongated body, while the Symphypleona and Neelipleona have a globular body. Collembola lack a tracheal respiration system, which forces them to respire through a porous cuticle, except for the two families Sminthuridae and Actaletidae, which exhibit a single pair of spiracles between the head and the thorax, leading to a rudimentary, although fully functional, tracheal system. The anatomical variance present between different species partially depends on soil morphology and composition. Surface-dwellers are generally larger, have darker pigments, have longer antennae and functioning furcula. Sub-surface-dwellers are usually unpigmented, have elongated bodies, and reduced furcula. They can be categorized into four main forms according to soil composition and depth: atmobiotic, epedaphic, hemiedaphic, and euedaphic. Atmobiotic species inhabit macrophytes and litter surfaces. 
They are generally 8-10 millimeters (about ⅓") in length, pigmented, have long limbs, and a full set of ocelli (photoreceptors). Epedaphic species inhabit upper litter layers and fallen logs. They are slightly smaller and have less pronounced pigments, as well as less developed limbs and ocelli than the atmobiotic species. Hemiedaphic species inhabit the lower litter layers of decomposing organic material. They are 1-2 millimeters (about 1/16") in length, have dispersed pigmentation, shortened limbs, and a reduced number of ocelli. Euedaphic species inhabit upper mineral layers known as the humus horizon. They are smaller than hemiedaphic species; have soft, elongated bodies; lack pigmentation and ocelli; and have reduced or absent furca. Poduromorphs are characterized by their elongated bodies and conspicuous segmentation – three thoracic segments, six abdominal segments, including a well-developed prothorax with tergal chaetae, while the first thoracic segment in Entomobryomorpha is clearly reduced and bears no chaetae. The digestive tract of springtails consists of three main components: the foregut, midgut, and hindgut. The midgut is surrounded by a network of muscles and lined with a monolayer of columnar or cuboidal cells. Its function is to mix and transport food from the lumen into the hindgut through contraction. Many species of syntrophic bacteria, archaea, and fungi are present in the lumen. These different digestive regions have varying pH to support specific enzymatic activities and microbial populations. The anterior portion of the midgut and hindgut is slightly acidic (with a pH of approximately 6.0) while the posterior midgut portion is slightly alkaline (with a pH of approximately 8.0). Between the midgut and hindgut is an alimentary canal called the pyloric region, which is a muscular sphincter. Malpighian tubules are absent. Systematics and evolution Traditionally, the springtails were divided into the orders Arthropleona, Symphypleona, and occasionally also Neelipleona. The Arthropleona were divided into two superfamilies, the Entomobryoidea and the Poduroidea. However, recent phylogenetic studies show Arthropleona is paraphyletic. Thus, the Arthropleona are abolished in modern classifications, and their superfamilies are raised in rank accordingly, being now the orders Entomobryomorpha and Poduromorpha. Technically, the Arthropleona are thus a partial junior synonym of the Collembola. The term "Neopleona" is essentially synonymous with Symphypleona + Neelipleona. The Neelipleona was originally seen as a particularly advanced lineage of Symphypleona, based on the shared globular body shape, but the globular body of the Neelipleona is realized in a completely different way than in Symphypleona. Subsequently, the Neelipleona were considered as being derived from the Entomobryomorpha. Analysis of 18S and 28S rRNA sequence data, though, suggests that they form the most ancient lineage of springtails, which would explain their peculiar apomorphies. This phylogenetic relationship was also confirmed using a phylogeny based on mtDNA and whole-genome data. The latest whole-genome phylogeny also supports four orders of Collembola. Springtails are attested to since the Early Devonian. The fossil Rhyniella praecursor, found in the famous Rhynie chert of Scotland, is the oldest known terrestrial arthropod. Given that its morphology resembles that of extant species quite closely, the radiation of the Hexapoda can be situated in the Silurian or earlier.
Additional research concerning the coprolites (fossilized feces) of ancient springtails allowed researchers to track their lineages back some 412 million years. Fossil Collembola are rare. Instead, most are found in amber. Even these are rare and many amber deposits carry few or no collembola. The best deposits are from the early Eocene of Canada and Europe, Miocene of Central America, and the mid-Cretaceous of Burma and Canada. They display some unexplained characteristics: first, all but one of the fossils from the Cretaceous belong to extinct genera, whereas none of the specimens from the Eocene or the Miocene are of extinct genera; second, the species from Burma are more similar to the modern fauna of Canada than are the Canadian Cretaceous specimens. There are about 3,600 different species. Ecology Eating behavior Specific feeding strategies and mechanisms are employed to match specific niches. Herbivorous and detritivorous species fragment biological material present in soil and leaf litter, supporting decomposition and increasing the availability of nutrients for various species of microbes and fungi. Carnivorous species maintain populations of small invertebrates such as nematodes, rotifers, and other collembolan species. Springtails commonly consume fungal hyphae and spores, but also have been found to consume plant material and pollen, animal remains, colloidal materials, minerals and bacteria. Predators Springtails are consumed by mesostigmatan mites in various families, including Ascidae, Laelapidae, Parasitidae, Rhodacaridae and Veigaiidae. Cave-dwelling springtails are a food source for spiders and harvestmen in the same environment, such as the endangered harvestman Texella reyesi. To protect themselves, some species have evolved chemical defenses. Distribution Springtails are cryptozoa frequently found in leaf litter and other decaying material, where they are primarily detritivores and microbivores, and one of the main biological agents responsible for the control and the dissemination of soil microorganisms. In a mature deciduous woodland in temperate climate, leaf litter and vegetation typically support 30 to 40 species of springtails, and in the tropics the number may be over 100. In sheer numbers, they are reputed to be one of the most abundant of all macroscopic animals, with estimates of 100,000 individuals per square meter of ground, essentially everywhere on Earth where soil and related habitats (moss cushions, fallen wood, grass tufts, ant and termite nests) occur. Only nematodes, crustaceans, and mites are likely to have global populations of similar magnitude, and each of those groups except mites is more inclusive. Though taxonomic rank cannot be used for absolute comparisons, it is notable that nematodes are a phylum and crustaceans a subphylum. Most springtails are small and difficult to see by casual observation, but one springtail, the so-called snow flea (Hypogastrura nivicola), is readily observed on warm winter days when it is active and its dark color contrasts sharply with a background of snow. In addition, a few species routinely climb trees and form a dominant component of canopy fauna, where they may be collected by beating or insecticide fogging. These tend to be the larger (>2 mm) species, mainly in the genera Entomobrya and Orchesella, though the densities on a per square meter basis are typically 1–2 orders of magnitude lower than soil populations of the same species. In temperate regions, a few species (e.g. 
Anurophorus spp., Entomobrya albocincta, Xenylla xavieri, Hypogastrura arborea) are almost exclusively arboreal. In tropical regions a single square meter of canopy habitat can support many species of Collembola. The main ecological factor driving the local distribution of species is the vertical stratification of the environment: in woodland a continuous change in species assemblages can be observed from tree canopies to ground vegetation then to plant litter down to deeper soil horizons. This is a complex factor embracing both nutritional and physiological requirements, together with behavioural trends, dispersal limitation and probable species interactions. Some species have been shown to exhibit negative or positive gravitropism, which adds a behavioural dimension to this still poorly understood vertical segregation. Experiments with peat samples turned upside down showed two types of responses to disturbance of this vertical gradient, called "stayers" and "movers". As a group, springtails are highly sensitive to desiccation, because of their tegumentary respiration, although some species with thin, permeable cuticles have been shown to resist severe drought by regulating the osmotic pressure of their body fluid. The gregarious behaviour of Collembola, mostly driven by the attractive power of pheromones excreted by adults, gives more chance to every juvenile or adult individual to find suitable, better protected places, where desiccation could be avoided and reproduction and survival rates (thereby fitness) could be kept at an optimum. Sensitivity to drought varies from species to species and increases during ecdysis. Given that springtails moult repeatedly during their entire life (an ancestral character in Hexapoda) they spend much time in concealed micro-sites where they can find protection against desiccation and predation during ecdysis, an advantage reinforced by synchronized moulting. The high humidity environment of many caves also favours springtails and there are numerous cave adapted species, including one, Plutomurus ortobalaganensis living down the Krubera Cave. The horizontal distribution of springtail species is affected by environmental factors which act at the landscape scale, such as soil acidity, moisture and light. Requirements for pH can be reconstructed experimentally. Altitudinal changes in species distribution can be at least partly explained by increased acidity at higher elevation. Moisture requirements, among other ecological and behavioural factors, explain why some species cannot live aboveground, or retreat in the soil during dry seasons, but also why some epigeal springtails are always found in the vicinity of ponds and lakes, such as the hygrophilous Isotomurus palustris. Adaptive features, such as the presence of a fan-like wettable mucro, allow some species to move at the surface of water in freshwater and marine environments. Podura aquatica, a unique representative of the family Poduridae (and one of the first springtails to have been described by Carl Linnaeus), spends its entire life at the surface of water, its wettable eggs dropping in water until the non-wettable first instar hatches then surfaces. A few genera are capable of being submerged, and after molting young springtails lose their water repellent properties and are able to survive submerged under water. 
In a variegated landscape, made of a patchwork of closed (woodland) and open (meadows, cereal crops) environments, most soil-dwelling species are not specialized and can be found everywhere, but most epigeal and litter-dwelling species are attracted to a particular environment, either forested or not. As a consequence of dispersal limitation, landuse change, when too rapid, may cause the local disappearance of slow-moving, specialist species, a phenomenon the measure of which has been called colonisation credit. Relationship with humans Springtails are well known as pests of some agricultural crops. Sminthurus viridis, the lucerne flea, has been shown to cause severe damage to agricultural crops, and is considered as a pest in Australia. Onychiuridae are also known to feed on tubers and to damage them to some extent. However, by their capacity to carry spores of mycorrhizal fungi and mycorrhiza helper bacteria on their tegument, soil springtails play a positive role in the establishment of plant-fungal symbioses and thus are beneficial to agriculture. They also contribute to controlling plant fungal diseases through their active consumption of mycelia and spores of damping-off and pathogenic fungi. It has been suggested that they could be reared to be used for the control of pathogenic fungi in greenhouses and other indoor cultures. Various sources and publications have suggested that some springtails may parasitize humans, but this is entirely inconsistent with their biology, and no such phenomenon has ever been scientifically confirmed, though it has been documented that the scales or hairs from springtails can cause irritation when rubbed onto the skin. They may sometimes be abundant indoors in damp places such as bathrooms and basements, and incidentally found on one's person. More often, claims of persistent human skin infection by springtails may indicate a neurological problem, such as delusional parasitosis, a psychological rather than entomological problem. Researchers themselves may be subject to psychological phenomena. For example, a publication in 2004 claiming that springtails had been found in skin samples was later determined to be a case of pareidolia; that is, no springtail specimens were actually recovered, but the researchers had digitally enhanced photos of sample debris to create images resembling small arthropod heads, which then were claimed to be springtail remnants. However, Steve Hopkin reports one instance of an entomologist aspirating an Isotoma species and in the process accidentally inhaling some of their eggs, which hatched in his nasal cavity and made him quite ill until they were flushed out. In 1952, China accused the United States military of spreading bacteria-laden insects and other objects during the Korean War by dropping them from P-51 fighters above rebel villages over North Korea. In all, the U.S. was accused of dropping ants, beetles, crickets, fleas, flies, grasshoppers, lice, springtails, and stoneflies as part of a biological warfare effort. The alleged associated diseases included anthrax, cholera, dysentery, fowl septicemia, paratyphoid, plague, scrub typhus, small pox, and typhoid. China created an international scientific commission for investigating possible bacterial warfare, eventually ruling that the United States probably did engage in limited biological warfare in Korea. 
The US government denied all the allegations, and instead proposed that the United Nations send a formal inquiry committee to China and Korea, but China and Korea refused to cooperate. U.S. and Canadian entomologists further claimed that the accusations were ridiculous and argued that anomalous appearances of insects could be explained through natural phenomena. Springtail species cited in allegations of biological warfare in the Korean War were Isotoma (Desoria) negishina (a local species) and the "white rat springtail" Folsomia candida. Captive springtails are often kept in a terrarium as part of a clean-up crew. Ecotoxicology laboratory animals Springtails are currently used in laboratory tests for the early detection of soil pollution. Acute and chronic toxicity tests have been performed by researchers, mostly using the parthenogenetic isotomid Folsomia candida. These tests have been standardized. Details on a ringtest, on the biology and ecotoxicology of Folsomia candida and comparison with the sexual nearby species Folsomia fimetaria (sometimes preferred to Folsomia candida) are given in a document written by Paul Henning Krogh. Care should be taken that different strains of the same species may be conducive to different results. Avoidance tests have been also performed. They have been standardized, too. Avoidance tests are complementary to toxicity tests, but they also offer several advantages: they are more rapid (thus cheaper), more sensitive and they are environmentally more reliable, because in the real world Collembola move actively far from pollution spots. It may be hypothesized that the soil could become locally depauperated in animals (and thus improper to normal use) while below thresholds of toxicity. Contrary to earthworms, and like many insects and molluscs, Collembola are very sensitive to herbicides and thus are threatened in no-tillage agriculture, which makes a more intense use of herbicides than conventional agriculture. The springtail Folsomia candida is also becoming a genomic model organism for soil toxicology. With microarray technology the expression of thousands of genes can be measured in parallel. The gene expression profiles of Folsomia candida exposed to environmental toxicants allow fast and sensitive detection of pollution, and additionally clarifies molecular mechanisms causing toxicology. Collembola have been found to be useful as bio-indicators of soil quality. Laboratory studies have been conducted that validated that the jumping ability of springtails can be used to evaluate the soil quality of Cu- and Ni-polluted sites. Climate warming impact In polar regions that are expected to experience among the most rapid impact from climate warming, springtails have shown contrasting responses to warming in experimental warming studies. There are negative, positive and neutral responses reported. Neutral responses to experimental warming have also been reported in studies of non-polar regions. The importance of soil moisture has been demonstrated in experiments using infrared heating in an alpine meadow, which had a negative effect on mesofauna biomass and diversity in drier parts and a positive effect in moist sub-areas. Furthermore, a study with 20 years of experimental warming in three contrasting plant communities found that small scale heterogeneity may buffer springtails to potential climate warming. Reproduction Sexual reproduction occurs through the clustered or scattered deposition of spermatophores by male adults. 
Stimulation of spermatophore deposition by female pheromones has been demonstrated in Sinella curviseta. Mating behavior can be observed in Symphypleona. Among Symphypleona, males of some Sminthuridae use a clasping organ located on their antennae. Many springtails, mostly those living in deeper soil horizons, are parthenogenetic, which favors reproduction at the expense of genetic diversity, and thereby of the population's tolerance of environmental hazards. Parthenogenesis (also called thelytoky) is under the control of symbiotic bacteria of the genus Wolbachia, which live, reproduce and are carried in female reproductive organs and eggs of Collembola. Feminizing Wolbachia species are widespread in arthropods and nematodes, where they co-evolved with most of their lineages.
Biology and health sciences
Insects and other hexapods
null
22039971
https://en.wikipedia.org/wiki/Tur%C3%A1n%27s%20brick%20factory%20problem
Turán's brick factory problem
In the mathematics of graph drawing, Turán's brick factory problem asks for the minimum number of crossings in a drawing of a complete bipartite graph. The problem is named after Pál Turán, who formulated it while being forced to work in a brick factory during World War II. A drawing method found by Kazimierz Zarankiewicz has been conjectured to give the correct answer for every complete bipartite graph, and the statement that this is true has come to be known as the Zarankiewicz crossing number conjecture. The conjecture remains open, with only some special cases solved. Origin and formulation During World War II, Hungarian mathematician Pál Turán was forced to work in a brick factory, pushing wagon loads of bricks from kilns to storage sites. The factory had tracks from each kiln to each storage site, and the wagons were harder to push at the points where tracks crossed each other. Turán was inspired by this situation to ask how the factory might be redesigned to minimize the number of crossings between these tracks. Mathematically, this problem can be formalized as asking for a graph drawing of a complete bipartite graph, whose vertices represent kilns and storage sites, and whose edges represent the tracks from each kiln to each storage site. The graph should be drawn in the plane with each vertex as a point, each edge as a curve connecting its two endpoints, and no vertex placed on an edge that it is not incident to. A crossing is counted whenever two edges that are disjoint in the graph have a nonempty intersection in the plane. The question is then, what is the minimum number of crossings in such a drawing? Turán's formulation of this problem is often recognized as one of the first studies of the crossing numbers of graphs. (Another independent formulation of the same concept occurred in sociology, in methods for drawing sociograms, and a much older puzzle, the three utilities problem, can be seen as a special case of the brick factory problem with three kilns and three storage facilities.) Crossing numbers have since gained greater importance, as a central object of study in graph drawing and as an important tool in VLSI design and discrete geometry. Upper bound Both Zarankiewicz and Kazimierz Urbanik saw Turán speak about the brick factory problem in different talks in Poland in 1952, and independently published attempted solutions of the problem, with equivalent formulas for the number of crossings. As both of them showed, it is always possible to draw the complete bipartite graph K_{m,n} (a graph with m vertices on one side, n vertices on the other side, and mn edges connecting the two sides) with a number of crossings equal to ⌊m/2⌋⌊(m-1)/2⌋⌊n/2⌋⌊(n-1)/2⌋. The construction is simple: place the m vertices on the x-axis of the plane, avoiding the origin, with equal or nearly-equal numbers of points to the left and right of the y-axis. Similarly, place the n vertices on the y-axis of the plane, avoiding the origin, with equal or nearly-equal numbers of points above and below the x-axis. Then, connect every point on the x-axis by a straight line segment to every point on the y-axis. However, their proofs that this formula is optimal, that is, that there can be no drawings with fewer crossings, were erroneous. The gap was not discovered until eleven years after publication, nearly simultaneously by Gerhard Ringel and Paul Kainen. Nevertheless, it is conjectured that Zarankiewicz's and Urbanik's formula is optimal. This has come to be known as the Zarankiewicz crossing number conjecture.
Although some special cases of it are known to be true, the general case remains open. Lower bounds Since K_{m,n} and K_{n,m} are isomorphic, it is enough to consider the case where m ≤ n. In addition, for m ≤ 2 Zarankiewicz's construction gives no crossings, which of course cannot be bested. So the only nontrivial cases are those for which m and n are both at least 3. Zarankiewicz's attempted proof of the conjecture, although invalid for the general case of K_{m,n}, works for the case m = 3. It has since been extended to other small values of m, and the Zarankiewicz conjecture is known to be true for the complete bipartite graphs K_{m,n} with m ≤ 6. The conjecture is also known to be true for K_{7,7}, K_{7,8}, and K_{7,9}. If a counterexample exists, that is, a graph requiring fewer crossings than the Zarankiewicz bound, then in the smallest counterexample both m and n must be odd. For each fixed choice of m, the truth of the conjecture for all K_{m,n} can be verified by testing only a finite number of choices of n. More generally, it has been proven that every complete bipartite graph requires a number of crossings that is (for sufficiently large graphs) at least 83% of the number given by the Zarankiewicz bound. Closing the gap between this lower bound and the upper bound remains an open problem. Rectilinear crossing numbers If edges are required to be drawn as straight line segments, rather than arbitrary curves, then some graphs need more crossings than they would when drawn with curved edges. However, the upper bound established by Zarankiewicz for the crossing numbers of complete bipartite graphs can be achieved using only straight edges. Therefore, if the Zarankiewicz conjecture is correct, then the complete bipartite graphs have rectilinear crossing numbers equal to their crossing numbers.
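To make Zarankiewicz's construction and the conjectured bound concrete, the following short Python sketch (an editorial illustration, not part of the original article; all function names are illustrative) builds the straight-line drawing described above for small m and n, counts its crossings by brute force, and compares the count with the formula ⌊m/2⌋⌊(m-1)/2⌋⌊n/2⌋⌊(n-1)/2⌋.

from itertools import combinations

def zarankiewicz_number(m, n):
    # Conjectured minimum number of crossings of K_{m,n} (the Zarankiewicz bound).
    return (m // 2) * ((m - 1) // 2) * (n // 2) * ((n - 1) // 2)

def zarankiewicz_drawing(m, n):
    # Zarankiewicz's construction: m points on the x-axis and n points on the
    # y-axis, split as evenly as possible on either side of the origin, with
    # every x-axis point joined to every y-axis point by a straight segment.
    xs = [(-i, 0) for i in range(1, m // 2 + 1)] + [(i, 0) for i in range(1, m - m // 2 + 1)]
    ys = [(0, -j) for j in range(1, n // 2 + 1)] + [(0, j) for j in range(1, n - n // 2 + 1)]
    return [(p, q) for p in xs for q in ys]

def orient(p, q, r):
    # Twice the signed area of triangle pqr; its sign gives the turn direction.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def properly_cross(e, f):
    # True when segments e and f cross at an interior point; edges sharing an
    # endpoint (a common kiln or storage site) do not count as crossings.
    (a, b), (c, d) = e, f
    if a in (c, d) or b in (c, d):
        return False
    return orient(a, b, c) * orient(a, b, d) < 0 and orient(c, d, a) * orient(c, d, b) < 0

def count_crossings(edges):
    return sum(1 for e, f in combinations(edges, 2) if properly_cross(e, f))

if __name__ == "__main__":
    for m, n in [(3, 3), (4, 4), (5, 6)]:
        print(m, n, count_crossings(zarankiewicz_drawing(m, n)), zarankiewicz_number(m, n))

For each pair the two printed numbers agree; for example K_{3,3}, the three utilities problem, yields exactly one crossing, and K_{4,4} yields four. This checks only that the drawing attains the bound, not that the bound is optimal, which is precisely what the open conjecture asserts.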
Mathematics
Graph theory
null
26382213
https://en.wikipedia.org/wiki/Tilikum%20%28orca%29
Tilikum (orca)
Tilikum ( – 6 January 2017), nicknamed Tilly, was a captive male orca who spent most of his life at SeaWorld Orlando in Florida. He was captured in Iceland in 1983; about a year later, he was transferred to Sealand of the Pacific near Victoria, British Columbia, Canada. He was subsequently transferred in 1992 to SeaWorld in Orlando, Florida, where he sired 21 calves throughout his life. Tilikum was heavily featured in CNN Films' 2013 documentary Blackfish, which claims that orcas in captivity suffer psychological damage and become unnaturally aggressive. Of the four fatal attacks by orcas in captivity, Tilikum was involved in three: Keltie Byrne, a trainer at the now-defunct Sealand of the Pacific; Daniel P. Dukes, a man trespassing in SeaWorld Orlando; and SeaWorld trainer Dawn Brancheau. Description Tilikum was the largest orca in captivity. He measured in length and weighed about . His pectoral fins were long, his fluke curled under, and his dorsal fin was collapsed completely to his left side. His name, in the Chinook Jargon of the Pacific Northwest, means "friends, relations, tribe, nation, common people". Life Origin Tilikum was captured when he was two years old, along with two other young orcas, by a purse-seine net in November 1983, at Berufjörður in eastern Iceland. After almost a year in a tank at the Hafnarfjördur Marine Zoo, he was transferred to Sealand of the Pacific, in Oak Bay, a suburb of the city of Victoria on Vancouver Island, Canada. At Sealand, he lived with two older female orcas named Haida II and Nootka IV. As both orcas were pregnant, Haida II and Nootka IV behaved aggressively towards Tilikum, including forcing him into a smaller medical pool where trainers kept him for protection. Fatalities While orca attacks on humans in the wild are rare, and no fatal attacks have been recorded, as of 2024 four humans have died due to interactions with captive orcas. Tilikum was involved in three of those deaths. First death Keltie Lee Byrne (December 6, 1970February 20, 1991) was a 20-year-old Canadian student, animal trainer, and competitive swimmer. She had been working with orcas Tilikum, Nootka IV, and Haida II at Sealand of the Pacific to earn extra money. On February 20, 1991, Byrne was working a shift when she slipped and fell into the whale pool. Witnesses recalled that Byrne screamed and panicked after realizing that one of the whales was holding her foot and dragging her underwater. According to the coroner's report, rescue attempts were thwarted by the whales, who refused to let Byrne go even after she was believed to have fallen unconscious in the water. Her corpse was later retrieved with a large net, after which she was determined to be deceased. Her death was ruled an accident. Shortly after the accident, Sealand management made the decision to sell all of its orcas to SeaWorld and, eventually, to close the park entirely. On January 3, 1992, SeaWorld applied to the National Marine Fisheries Service for a temporary emergency permit to bring Tilikum to the United States due to concerns for his health. He had been the subject of systematic aggression from Nootka and Haida after the latter gave birth to a calf, Kyuquot, on December 24, 1991, and was confined in a small medical pool that was only slightly larger than he was. The application was approved on January 8, 1992, and Tilikum was immediately moved to SeaWorld Orlando. 
Byrne's death attracted renewed attention after the 2010 death of SeaWorld trainer Dawn Brancheau and the 2013 documentary Blackfish, which discusses Tilikum's involvement in Byrne's death as well as the deaths of Daniel P. Dukes and later Brancheau. The latter two deaths occurred after Tilikum had been sold by Sealand of the Pacific to SeaWorld. Experts interviewed for Blackfish stated that it was unclear what drove Tilikum and the other whales to attack Byrne, but suggested that years of abuse and cruelty towards Tilikum, including allowing the other whales to rake Tilikum's skin with their teeth until he bled, had made him an aggressive whale. Steve Huxter, head of animal training at Sealand at the time, said, "They never had a plaything in the pool that was so interactive. They just got incredibly excited and stimulated." No official motive of the three whales has ever been established, as the case was over twenty years old by the time it resurfaced in relation to the death of Dawn Brancheau. Second death Daniel P. Dukes was a 27-year-old man from South Carolina and his death was the second of three attributed to Tilikum. SeaWorld claimed that Dukes was a vagrant who climbed into Tilikum's pool and drowned, while the coroner's report, along with animal rights advocates for Tilikum, have pointed out that Dukes's corpse was found severely mutilated by the whale. Dukes was generally regarded by the media as a trespasser and nuisance rather than a direct victim of Tilikum, although this perception has been challenged with the release of the documentary Blackfish. Little has been published in the media regarding the early life of Dukes. A known drifter with a love of nature and environmentalism, he was known for acts of petty theft and general vagrancy. These details were often brought up by SeaWorld. At some point on the night on July 6, 1999, Dukes, who had hidden inside the park after it closed, went to the whale pool where Tilikum was held. The following morning, his body was discovered in the water by SeaWorld staff, draped over Tilikum's backside as the whale swam around. As SeaWorld claims to have no security tape footage of the pool on that night, it is unclear exactly what transpired. According to the Orange County Sheriff's Office (OCSO) report, a 911 call was received from SeaWorld at 7:25 a.m., at almost the exact time that Dukes's body was spotted. OCSO immediately dispatched Detective Calhoun who arrived at SeaWorld eight minutes later. Dukes's corpse was retrieved and later identified. Dukes's parents filed a lawsuit against SeaWorld two months after their son's death. The lawsuit was later dropped. The 2013 documentary Blackfish was the first media to explore Dukes's death extensively. The lack of early coverage of his death later became noted for the way that the media and investigators handle the deaths of homeless and mentally ill individuals, particularly the lack of dignity ascribed to such cases. The Dolphin Project argued against SeaWorld's unflattering description of Dukes as a filthy man with poor hygiene spotted at the park mumbling oddly to himself, stating that "Daniel Dukes was a troubled individual with a history of petty thefts, and questionable decisions but as a human being, no death is meaningless. Unwittingly, Dukes will forever be remembered as Tilikum's second victim and SeaWorld's first major incident." The case of Dukes's death has become a frequent example in arguments over the welfare of marine mammals in captivity. 
Former marine mammal trainer Ric O'Barry argued that Dukes was probably not near Tilikum's tank with any form of malicious intent, but instead that the nature-loving man was "fascinated" by the whale and wanted to visit it. He further argued, "I think the whale probably pulled [Dukes] down, held him underwater. I don't think they know how often we breathe. The problem is that the whales have nothing better to do," O'Barry explains. "They're bored. We literally bore them to death. It's like you living in the bathroom for your life." Third death On February 24, 2010, Tilikum killed Dawn Brancheau, a 40-year-old SeaWorld trainer. Brancheau was killed following a Dine with Shamu show. The veteran trainer was rubbing Tilikum as part of a post-show routine when the orca grabbed her and pulled her into the water. SeaWorld stated that Tilikum had grabbed Brancheau by her ponytail, although some witnesses reported seeing him grab her by the arm or shoulder. He scalped her, then bit off her arm during the attack. Brancheau's autopsy indicated death by drowning and blunt force trauma. Brancheau's death resulted in a contentious legal case over the safety of working with orcas and the ethics of keeping live whales and other marine mammals in captivity. Return to performing Tilikum returned to performing on March 30, 2011. High-pressure water hoses were used to massage him, rather than hands, and removable guardrails were used on the platforms, as OSHA restricted close contact between orcas and trainers, and reinforced workplace safety precautions after Brancheau's death. He was paired with his grandson Trua and was often seen performing alongside him during the finale of the new One Ocean show. He had on occasion been kept with his daughter Malia, or both Trua and Malia at the same time. In December 2011, he was put on hiatus from the shows following an undisclosed illness, and resumed performing in April 2012. Declining health and death SeaWorld announced in March 2016 that Tilikum's health was deteriorating, and it was thought he had a lung infection due to bacterial pneumonia. In May 2016, it was reported Tilikum's health was improving. On January 6, 2017, SeaWorld announced that Tilikum had died early in the morning. The cause of death was reported as a bacterial infection. Offspring Tilikum sired 21 offspring in captivity, seven of which are alive as of April 2024. While at Sealand of the Pacific, Tilikum sired his first calf when he was about eight or nine years old. His first son, Kyuquot, was born to Haida II on December 24, 1991. Kyuquot and his mother were transferred to SeaWorld San Antonio in January 1993, a year after Tilikum was moved to Seaworld Orlando. Kyuquot has remained at the San Antonio park ever since. Following his arrival at SeaWorld, Tilikum sired many calves with many different females. His first calf born in Orlando was to Katina. Katina gave birth to Taku on September 9, 1993. Taku died on October 17, 2007. Among Tilikum's other offspring are: Nyar (1993–1996), Unna (1996–2015), Sumar (1998–2010), Tuar (1999), Tekoa (2000), Nakai (2001–2022), Kohana (2002–2022), Ikaika (2002), Skyla (2004–2021), Malia (2007), Sakari (2010) and Makaio (2010). In 1999, Tilikum began training for artificial insemination. In early 2000, Kasatka who resided at SeaWorld San Diego was artificially inseminated using his sperm. She gave birth to Tilikum's son, Nakai, on September 1, 2001. On May 3, 2002, another female in San Diego, named Takara, bore Tilikum's calf through artificial insemination. 
Tilikum was also the first successful, surviving grandfather orca in captivity with the births of Trua (2005), Nalani (2006), Adán (2010) and Victoria (2012–2013). Controversy On December 7, 2010, TMZ reported that SeaWorld's president, Terry Prather, received a letter from PETA and Mötley Crüe member Tommy Lee referencing SeaWorld's announcement regarding limiting human contact with Tilikum. In the letter, Lee refers to Tilikum as SeaWorld's "Chief sperm bank" and asserts that the relevant process constitutes continued human contact. The letter implores SeaWorld to release Tilikum from his tank, stating, "I hope it doesn't take another tragic death for SeaWorld to realize it shouldn't frustrate these smart animals by keeping them [confined] in tanks." On December 8, 2010, the SeaWorld VP of Communications responded to Lee's letter via E! News, stating that PETA's facts were not only inaccurate, but that SeaWorld trainers also "do not now, nor have they ever entered the water with Tilikum for this purpose". Tilikum and the captivity of orcas is the main subject of the documentary film Blackfish, which premiered at the Sundance Film Festival in January 2013 and caused a drop in SeaWorld attendance and revenue. The film and a subsequent online petition led to several popular musical groups cancelling performances at SeaWorld and Busch Gardens' "Bands, Brew & BBQ" event in 2014. In popular culture Books Aside from Blackfish, a number of books have been written about Tilikum: Podcasts On September 6th and 13th, 2024 the popular true crime podcast, The Last Podcast On The Left covered Tilikum's history and incidents in two episodes titled Episode 588- Horrors Of SeaWorld I- The Perfect Killer and Episode 589- Horrors of SeaWorld II- Free Tilly.
Biology and health sciences
Individual animals
Animals
26383679
https://en.wikipedia.org/wiki/Selective%20serotonin%20reuptake%20inhibitor
Selective serotonin reuptake inhibitor
Selective serotonin reuptake inhibitors (SSRIs) are a class of drugs that are typically used as antidepressants in the treatment of major depressive disorder, anxiety disorders, and other psychological conditions. SSRIs increase the extracellular level of the neurotransmitter serotonin by limiting its reabsorption (reuptake) into the presynaptic cell. They have varying degrees of selectivity for the other monoamine transporters, with pure SSRIs having strong affinity for the serotonin transporter and only weak affinity for the norepinephrine and dopamine transporters. SSRIs are the most widely prescribed antidepressants in many countries. The efficacy of SSRIs in mild or moderate cases of depression has been disputed and may or may not be outweighed by side effects, especially in adolescent populations. Medical uses The main indication for SSRIs is major depressive disorder; however, they are frequently prescribed for anxiety disorders, such as social anxiety disorder, generalized anxiety disorder, panic disorder, obsessive–compulsive disorder (OCD), eating disorders, chronic pain, and, in some cases, for posttraumatic stress disorder (PTSD). They are also frequently used to treat depersonalization disorder, although with varying results. Depression Antidepressants are recommended by the UK National Institute for Health and Care Excellence (NICE) as a first-line treatment of severe depression and for the treatment of mild-to-moderate depression that persists after conservative measures such as cognitive therapy. They recommend against their routine use by those who have chronic health problems and mild depression. There has been controversy regarding the efficacy of SSRIs in treating depression depending on its severity and duration. Two meta-analyses published in 2008 (Kirsch) and 2010 (Fournier) found that in mild and moderate depression, the effect of SSRIs is small or none compared to placebo, while in very severe depression the effect of SSRIs is between "relatively small" and "substantial". The 2008 meta-analysis combined 35 clinical trials submitted to the Food and Drug Administration (FDA) before licensing of four newer antidepressants (including the SSRIs paroxetine and fluoxetine, the non-SSRI antidepressant nefazodone, and the serotonin and norepinephrine reuptake inhibitor (SNRI) venlafaxine). The authors attributed the relationship between severity and efficacy to a reduction of the placebo effect in severely depressed patients, rather than an increase in the effect of the medication. Some researchers have questioned the statistical basis of this study suggesting that it underestimates the effect size of antidepressants. A 2012 meta-analysis of fluoxetine and venlafaxine concluded that statistically and clinically significant treatment effects were observed for each drug relative to placebo irrespective of baseline depression severity; some of the authors however disclosed substantial relationships with pharmaceutical industries. A 2017 systematic review stated that "SSRIs versus placebo seem to have statistically significant effects on depressive symptoms, but the clinical significance of these effects seems questionable and all trials were at high risk of bias. Furthermore, SSRIs versus placebo significantly increase the risk of both serious and non-serious adverse events. Our results show that the harmful effects of SSRIs versus placebo for major depressive disorder seem to outweigh any potentially small beneficial effects". Fredrik Hieronymus et al. 
criticized the review as inaccurate and misleading, but they also disclosed multiple ties to pharmaceutical industries and receipt of speaker's fees. In 2018, a systematic review and network meta-analysis comparing the efficacy and acceptability of 21 antidepressant drugs showed escitalopram to be one of the most effective. They showed that "In terms of efficacy, all antidepressants were more effective than placebo, with odds ratios (ORs) ranging between 2.13 (95% credible interval [CrI] 1.89–2.41) for amitriptyline and 1.37 (1.16–1.63) for reboxetine." The odds ratios were specifically in terms of response rates (≥50% reduction in observer-rated symptoms). Odds ratios of response rates have been criticized for artificially inflating the apparent size of antidepressant benefits. The use of SSRIs in children with depression remains controversial. A 2021 Cochrane review concluded that, for children and adolescents, SSRIs "may reduce depression symptoms in a small and unimportant way compared with placebo." However, it also noted significant methodological limitations that make drawing definitive conclusions about efficacy difficult. Fluoxetine is the only SSRI authorized for use in children and adolescents with moderate to severe depression in the United Kingdom. Social anxiety disorder Some SSRIs are effective for social anxiety disorder, although their effects on symptoms is not always robust and their use is sometimes rejected in favor of psychological therapies. Paroxetine was the first drug to be approved for social anxiety disorder and it is considered effective for this disorder; sertraline and fluvoxamine were later approved for it as well. Escitalopram and citalopram are used off-label with acceptable efficacy, while fluoxetine is not considered to be effective for this disorder. The effect sizes (Cohen's d) of SSRIs in terms of improvement on the Liebowitz social anxiety scale in individual published trials of the drugs for social anxiety disorder have ranged from –0.029 to 1.214. Post-traumatic stress disorder PTSD is relatively hard to treat and generally treatment is not highly effective; SSRIs are no exception. They are not very effective for this disorder and only two SSRIs are FDA approved for this condition: paroxetine and sertraline. Paroxetine has slightly higher response and remission rates for PTSD than sertraline, but both are not fully effective for many patients. Fluoxetine is used off-label, but with mixed results; venlafaxine, an SNRI, is considered somewhat effective, although its use is also off-label. Fluvoxamine, escitalopram and citalopram are not well tested in this disorder. Paroxetine remains the most suitable drug for PTSD as of now, but with limited benefits. Generalized anxiety disorder SSRIs are recommended by the National Institute for Health and Care Excellence (NICE) for the treatment of generalized anxiety disorder (GAD) that has failed to respond to conservative measures such as education and self-help activities. GAD is a common disorder of which the central feature is excessive worry about a number of different events. Key symptoms include excessive anxiety about multiple events and issues, and difficulty controlling worrisome thoughts, that persists for at least 6 months. Antidepressants provide a modest-to-moderate reduction in anxiety in GAD, and are superior to placebo in treating GAD. The efficacy of different antidepressants is similar. 
Obsessive–compulsive disorder In Canada, SSRIs are a first-line treatment of adult obsessive–compulsive disorder (OCD). In the UK, they are first-line treatment only with moderate to severe functional impairment and as second line treatment for those with mild impairment, though, as of early 2019, this recommendation is being reviewed. In pediatric populations, a 2024 systematic review found that SSRIs have been found effective in reducing symptoms of obsessive-compulsive disorder (OCD). Their efficacy is further enhanced when combined with behavioral therapies such as exposure and response prevention (ERP). In children, SSRIs can be considered a second line therapy in those with moderate-to-severe impairment, with close monitoring for psychiatric adverse effects. SSRIs, especially fluvoxamine, which is the first one to be FDA approved for OCD, are efficacious in its treatment; patients treated with SSRIs are about twice as likely to respond to treatment as those treated with placebo. Efficacy has been demonstrated both in short-term treatment trials of 6 to 24 weeks and in discontinuation trials of 28 to 52 weeks duration. Panic disorder Paroxetine CR was superior to placebo on the primary outcome measure. In a 10-week randomized controlled, double-blind trial escitalopram was more effective than placebo. Fluvoxamine, another SSRI, has shown positive results. However, evidence for their effectiveness and acceptability is unclear. Eating disorders Antidepressants are recommended as an alternative or additional first step to self-help programs in the treatment of bulimia nervosa. SSRIs (fluoxetine in particular) are preferred over other anti-depressants due to their acceptability, tolerability, and superior reduction of symptoms in short-term trials. Long-term efficacy remains poorly characterized. Similar recommendations apply to binge eating disorder. SSRIs provide short-term reductions in binge eating behavior, but have not been associated with significant weight loss. Clinical trials have generated mostly negative results for the use of SSRIs in the treatment of anorexia nervosa. Treatment guidelines from the National Institute of Health and Clinical Excellence recommend against the use of SSRIs in this disorder. Those from the American Psychiatric Association note that SSRIs confer no advantage regarding weight gain, but that they may be used for the treatment of co-existing depression, anxiety, or OCD. Stroke recovery SSRIs have been used off-label in the treatment of stroke patients, including those with and without symptoms of depression. A 2021 meta-analysis of randomized controlled clinical trials found no evidence pointing to their routine use to promote recovery following stroke. Premature ejaculation SSRIs are effective for the treatment of premature ejaculation. Taking SSRIs on a chronic, daily basis is more effective than taking them prior to sexual activity. The increased efficacy of treatment when taking SSRIs on a daily basis is consistent with clinical observations that the therapeutic effects of SSRIs generally take several weeks to emerge. Sexual dysfunction ranging from decreased libido to anorgasmia is usually considered to be a significantly distressing side effect which may lead to noncompliance in patients receiving SSRIs. However, for those with premature ejaculation, this very same side effect becomes the desired effect. Other uses SSRIs such as sertraline have been found to be effective in decreasing anger. 
Side effects Side effects vary among the individual drugs of this class. They may include akathisia. Sexual dysfunction SSRIs can cause various types of sexual dysfunction such as anorgasmia, erectile dysfunction, diminished libido, genital numbness, and sexual anhedonia (pleasureless orgasm). Sexual problems are common with SSRIs. Poor sexual function is one of the most common reasons people stop the medication. The mechanism by which SSRIs may cause sexual side effects is not well understood . The range of possible mechanisms includes (1) nonspecific neurological effects (e.g., sedation) that globally impair behavior including sexual function; (2) specific effects on brain systems mediating sexual function; (3) specific effects on peripheral tissues and organs, such as the penis, that mediate sexual function; and (4) direct or indirect effects on hormones mediating sexual function. Management strategies include: for erectile dysfunction the addition of a PDE5 inhibitor such as sildenafil; for decreased libido, possibly adding or switching to bupropion; and for overall sexual dysfunction, switching to nefazodone. Buspirone is sometimes used off-label to reduce sexual dysfunction associated with the use of SSRIs. A number of non-SSRI drugs are not associated with sexual side effects (such as bupropion, mirtazapine, tianeptine, agomelatine, tranylcypromine, and moclobemide). Several studies have suggested that SSRIs may adversely affect semen quality. While trazodone (an antidepressant with alpha adrenergic receptor blockade) is a notorious cause of priapism, cases of priapism have also been reported with certain SSRIs (e.g. fluoxetine, citalopram). Post-SSRI sexual dysfunction Post-SSRI sexual dysfunction (PSSD) refers to a set of symptoms reported by some people who have taken SSRIs or other serotonin reuptake-inhibiting (SRI) drugs, in which sexual dysfunction symptoms persist for at least three months after ceasing to take the drug. The status of PSSD as a legitimate and distinct pathology is contentious; several researchers have proposed that it should be recognized as a separate phenomenon from more common SSRI side effects. The reported symptoms of PSSD include reduced sexual desire or arousal, erectile dysfunction in males or loss of vaginal lubrication in females, persistent premature ejaculation (even in patients without a previous history of the condition), difficulty having an orgasm or loss of pleasurable sensation associated with orgasm, and a reduction or loss of sensitivity in the genitals or other erogenous zones. Additional non-sexual symptoms are also commonly described, including emotional numbing, anhedonia, depersonalization or derealization, and cognitive impairment. The duration of PSSD symptoms appears to vary among patients, with some cases resolving in months and others in years or decades; one analysis of patient reports submitted between 1992 and 2021 in the Netherlands listed a case which had reportedly persisted for 23 years. The symptoms of PSSD are largely shared with post-finasteride syndrome (PFS) and post-retinoid sexual dysfunction (PRSD), two other poorly-understood conditions which have been suggested to share a common etiology with PSSD despite being associated with different types of medication. Diagnostic criteria for PSSD were proposed in 2022, but as of 2023, there is no agreement on standards for diagnosis. 
It is considered a distinct phenomenon from antidepressant discontinuation syndrome, post-acute withdrawal syndrome, and major depressive disorder, and should be distinguished from sexual dysfunction associated with depression and persistent genital arousal disorder. There are limited treatment options for PSSD as of 2023 and no evidence that any individual approach is effective. The mechanism by which SSRIs may induce PSSD is unclear. However, various neurochemical, hormonal, and biochemical changes during SSRI use—such as reduced dopamine levels, increased serotonin, inhibition of nitric oxide synthase, and the blocking of cholinergic and alpha-1 adrenergic receptors—could account for their sexual adverse effects. Additionally, SSRIs may cause peripheral changes by inhibiting serotonin receptors in peripheral nerves, which may also play a role in PSSD. As of 2023, prevalence is unknown. A 2020 review stated that PSSD is rare, underreported, and "increasingly identified in online communities". A 2024 study investigating the prevalence of persistent post-treatment genital numbness among sexual and gender minority youth found 13.2% of SSRI users between the ages of 15 and 29 reporting the symptom, compared to 0.9% who had used other medications. Reports of PSSD have occurred with almost every SSRI (dapoxetine is an exception). In 2019, the Pharmacovigilance Risk Assessment Committee of the European Medicines Agency (EMA) recommended that packaging leaflets of selected SSRIs and SNRIs should be amended to include information regarding a possible risk of persistent sexual dysfunction. Following the EMA assessment, a safety review by Health Canada "could neither confirm nor rule out a causal link... which was long lasting in rare cases", but recommended that "healthcare professionals inform patients about the potential risk of long-lasting sexual dysfunction despite discontinuation of treatment". A 2023 review stated that ongoing sexual dysfunction after SSRI discontinuation was possible, but that cause and effect were undetermined. The 2023 review cautioned that reports of sexual dysfunction cannot be generalized to wider practice as they are subject to a "high risk of bias", but agreed with the EMA assessment that cautionary labeling on SSRIs was warranted. On March 20, 2024, a lawsuit was filed by the organization Public Citizen, representing Dr. Antonei Csoka, against the United States Food and Drug Administration (FDA) for failing to act on a citizen petition submitted in 2018. The petition seeks to have the risk of serious sexual side effects persisting after discontinuation mentioned in the product labels of SSRIs and SNRIs. Emotional blunting Certain antidepressants may cause emotional blunting, characterized by reduced intensity of both positive and negative emotions as well as symptoms of apathy, indifference, and amotivation. It may be experienced as either beneficial or detrimental depending on the situation. This side effect has been particularly associated with serotonergic antidepressants like SSRIs and SNRIs, but may be less pronounced with atypical antidepressants like bupropion, agomelatine, and vortioxetine. Higher doses of antidepressants seem to be more likely to produce emotional blunting than lower doses. It can be decreased by reducing dosage, discontinuing the medication, or switching to a different antidepressant that may have less propensity for causing this side effect.
Vision Acute narrow-angle glaucoma is the most common and important ocular side effect of SSRIs, and often goes misdiagnosed. Cardiac SSRIs do not appear to affect the risk of coronary heart disease (CHD) in those without a previous diagnosis of CHD. A large cohort study suggested no substantial increase in the risk of cardiac malformations attributable to SSRI usage during the first trimester of pregnancy. A number of large studies of people without known pre-existing heart disease have reported no EKG changes related to SSRI use. The recommended maximum daily dose of citalopram and escitalopram was reduced due to concerns with QT prolongation. In overdose, fluoxetine has been reported to cause sinus tachycardia, myocardial infarction, junctional rhythms, and trigeminy. Some authors have suggested electrocardiographic monitoring in patients with severe pre-existing cardiovascular disease who are taking SSRIs. In a 2023 study, a possible connection between SSRI usage and the onset of mitral valve regurgitation was identified, indicating that SSRIs could hasten the progression of degenerative mitral valve regurgitation (DMR), especially in individuals carrying the 5-HTTLPR genotype. The study’s authors suggest that genotyping should be performed on people with DMR to evaluate serotonin transporter (SERT) activity. They also urge practitioners to exercise caution when prescribing SSRIs to individuals with a familial history of DMR. Bleeding SSRIs directly increase the risk of abnormal bleeding by lowering platelet serotonin levels, which are essential to platelet-driven hemostasis. SSRIs interact with anticoagulants, like warfarin, and antiplatelet drugs, like aspirin. This includes an increased risk of GI bleeding and postoperative bleeding. The relative risk of intracranial bleeding is increased, but the absolute risk is very low. SSRIs are known to cause platelet dysfunction. This risk is greater in those who are also on anticoagulants, antiplatelet agents, and NSAIDs (nonsteroidal anti-inflammatory drugs), as well as with the co-existence of underlying diseases such as cirrhosis of the liver or liver failure. Fracture risk Evidence from longitudinal, cross-sectional, and prospective cohort studies suggests an association between SSRI usage at therapeutic doses and a decrease in bone mineral density, as well as increased fracture risk, a relationship that appears to persist even with adjuvant bisphosphonate therapy. However, because the relationship between SSRIs and fractures is based on observational data as opposed to prospective trials, the phenomenon is not definitively causal. There also appears to be an increase in fracture-inducing falls with SSRI use, suggesting the need for increased attention to fall risk in elderly patients using the medication. The loss of bone density does not appear to occur in younger patients taking SSRIs. Bruxism SSRI and SNRI antidepressants may cause a reversible syndrome of jaw pain and jaw spasm (bruxism), although this is not common. Buspirone appears to be successful in treating SSRI/SNRI-induced bruxism and jaw clenching. Serotonin syndrome Serotonin syndrome is typically caused by the use of two or more serotonergic drugs, including SSRIs. It is a condition that can range from mild (most common) to deadly. Mild symptoms may consist of increased heart rate, fever, shivering, sweating, dilated pupils, myoclonus (intermittent jerking or twitching), as well as hyperreflexia.
Concomitant use of SSRIs or SNRIs for depression with a triptan for migraine does not appear to heighten the risk of the serotonin syndrome. Taking monoamine oxidase inhibitors (MAOIs) in combination with SSRIs can be fatal, since MAOIs inhibit monoamine oxidase, an enzyme which is needed to break down serotonin and other neurotransmitters. Without monoamine oxidase, the body is unable to eliminate excess neurotransmitters, allowing them to build up to dangerous levels. The prognosis for recovery in a hospital setting is generally good if serotonin syndrome is correctly identified. Treatment consists of discontinuing any serotonergic drugs and providing supportive care to manage agitation and hyperthermia, usually with benzodiazepines. Suicide risk Children and adolescents Meta-analyses of short-duration randomized clinical trials have found that SSRI use is related to a higher risk of suicidal behavior in children and adolescents. For instance, a 2004 U.S. Food and Drug Administration (FDA) analysis of clinical trials on children with major depressive disorder found statistically significant increases of the risks of "possible suicidal ideation and suicidal behavior" by about 80%, and of agitation and hostility by about 130%. According to the FDA, the heightened risk of suicidality is within the first one to two months of treatment. The National Institute for Health and Care Excellence (NICE) places the excess risk in the "early stages of treatment". The European Psychiatric Association places the excess risk in the first two weeks of treatment and, based on a combination of epidemiological, prospective cohort, medical claims, and randomized clinical trial data, concludes that a protective effect dominates after this early period. A 2014 Cochrane review found that at six to nine months, suicidal ideation remained higher in children treated with antidepressants compared to those treated with psychological therapy. A recent comparison of aggression and hostility occurring during treatment with fluoxetine versus placebo in children and adolescents found no significant difference between the fluoxetine group and the placebo group. There is also evidence that higher rates of SSRI prescriptions are associated with lower rates of suicide in children, though since the evidence is correlational, the true nature of the relationship is unclear. In 2004, the Medicines and Healthcare products Regulatory Agency (MHRA) in the United Kingdom judged fluoxetine (Prozac) to be the only antidepressant that offered a favorable risk-benefit ratio in children with depression, though it was also associated with a slight increase in the risk of self-harm and suicidal ideation. Only two SSRIs, sertraline (Zoloft) and fluvoxamine (Luvox), are licensed for use with children in the UK, and only for the treatment of obsessive–compulsive disorder. Fluoxetine is not licensed for this use. Adults It is unclear whether SSRIs affect the risk of suicidal behavior in adults. A 2005 meta-analysis of drug company data found no evidence that SSRIs increased the risk of suicide; however, important protective or hazardous effects could not be excluded. A 2005 review observed that suicide attempts are increased in those who use SSRIs as compared to placebo and compared to therapeutic interventions other than tricyclic antidepressants. No difference in the risk of suicide attempts was detected between SSRIs and tricyclic antidepressants.
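The relative figures quoted in these analyses (for example, "about 80%") can only be translated into absolute terms once a baseline event rate is assumed. The short Python sketch below illustrates that arithmetic; the 2% baseline rate and the cohort size are hypothetical values chosen for the example, not figures from the FDA analysis or any cited study.

```python
# Illustrative only: converts a relative risk increase into absolute terms.
# The baseline rate and cohort size below are assumptions for the example.

def absolute_excess(baseline_rate: float, relative_increase: float, n_treated: int) -> dict:
    """Translate a relative increase (e.g. 0.80 for '80% higher') into absolute risk."""
    treated_rate = baseline_rate * (1.0 + relative_increase)
    risk_difference = treated_rate - baseline_rate
    return {
        "treated_rate": treated_rate,
        "risk_difference": risk_difference,
        "excess_events": risk_difference * n_treated,
        "number_needed_to_harm": 1.0 / risk_difference,  # patients per one extra event
    }

# Hypothetical: a 2% baseline rate of the outcome and 1,000 treated patients.
print(absolute_excess(baseline_rate=0.02, relative_increase=0.80, n_treated=1000))
# -> treated rate 3.6%, about 16 excess events per 1,000 treated, NNH ~ 63
```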
A 2006 review suggests that the widespread use of antidepressants in the new "SSRI-era" appears to have led to a highly significant decline in suicide rates in most countries with traditionally high baseline suicide rates. The decline is particularly striking for women who, compared with men, seek more help for depression. Recent clinical data on large samples in the US have also revealed a protective effect of antidepressants against suicide. A 2006 meta-analysis of randomized controlled trials suggests that SSRIs increase suicidal ideation compared with placebo. However, observational studies suggest that SSRIs did not increase suicide risk more than older antidepressants. The researchers stated that if SSRIs increase suicide risk in some patients, the number of additional deaths is very small, because ecological studies have generally found that suicide mortality has declined (or at least not increased) as SSRI use has increased. An additional meta-analysis by the FDA in 2006 found an age-related effect of SSRIs. Among adults younger than 25 years, results indicated a higher risk of suicidal behavior. For adults between 25 and 64, the effect on suicidal behavior appears neutral or possibly protective. For adults older than 64, SSRIs seem to reduce the risk of suicidal behavior. In 2016, a study criticized the effects of including the FDA black-box suicide warning in prescribing information. The authors discussed whether suicide rates might also increase as a consequence of the warning. Risk of death A 2017 meta-analysis found that antidepressants including SSRIs were associated with significantly increased risk of death (+33%) and new cardiovascular complications (+14%) in the general population. Conversely, risks were not greater in people with existing cardiovascular disease. Pregnancy and breastfeeding SSRI use in pregnancy has been associated with a variety of risks with varying degrees of proof of causation. As depression is independently associated with negative pregnancy outcomes, determining the extent to which observed associations between antidepressant use and specific adverse outcomes reflect a causative relationship has been difficult in some cases. In other cases, the attribution of adverse outcomes to antidepressant exposure seems fairly clear. SSRI use in pregnancy is associated with an increased risk of spontaneous abortion of about 1.7-fold. Use is also associated with preterm birth. According to some studies, decreased birth weight, intrauterine growth retardation, neonatal adaptation syndrome, and persistent pulmonary hypertension have also been noted. A systematic review of the risk of major birth defects in antidepressant-exposed pregnancies found a small increase (3% to 24%) in the risk of major malformations and a risk of cardiovascular birth defects that did not differ from non-exposed pregnancies. Other studies have found an increased risk of cardiovascular birth defects among depressed mothers not undergoing SSRI treatment, suggesting the possibility of ascertainment bias, e.g. that worried mothers may pursue more aggressive testing of their infants. Another study found no increase in cardiovascular birth defects and a 27% increased risk of major malformations in SSRI-exposed pregnancies. The FDA issued a statement on July 19, 2006, stating that nursing mothers on SSRIs must discuss treatment with their physicians.
However, the medical literature on the safety of SSRIs indicates that some SSRIs, such as sertraline and paroxetine, are considered safe for breastfeeding. Neonatal abstinence syndrome Several studies have documented neonatal abstinence syndrome, a syndrome of neurological, gastrointestinal, autonomic, endocrine and/or respiratory symptoms among a large minority of infants with intrauterine exposure. These syndromes are short-lived, but insufficient long-term data are available to determine whether there are long-term effects. Persistent pulmonary hypertension Persistent pulmonary hypertension (PPHN) is a serious and life-threatening, but very rare, lung condition that occurs soon after birth. Newborn babies with PPHN have high pressure in their lung blood vessels and are not able to get enough oxygen into their bloodstream. About 1 to 2 per 1,000 babies born in the U.S. develop PPHN shortly after birth, and often they need intensive medical care. It is associated with about a 25% risk of significant long-term neurological deficits. A 2014 meta-analysis found no increased risk of persistent pulmonary hypertension associated with exposure to SSRIs in early pregnancy and a slight increase in risk associated with exposure late in pregnancy; "an estimated 286 to 351 women would need to be treated with an SSRI in late pregnancy to result in an average of one additional case of persistent pulmonary hypertension of the newborn". A review published in 2012 reached conclusions very similar to those of the 2014 study. Neuropsychiatric effects in offspring According to a 2015 review, the available data indicated that "some signal exists suggesting that antenatal exposure to SSRIs may increase the risk of ASDs (autism spectrum disorders)", even though a large cohort study published in 2013 and a cohort study using data from Finland's national register for the years 1996–2010, published in 2016, found no significant association between SSRI use and autism in offspring. The 2016 Finland study also found no association with ADHD, but did find an association with increased rates of depression diagnoses in early adolescence. Bipolar switch In adults and children with bipolar disorder, SSRIs may cause a bipolar switch from depression into hypomania/mania, mixed states or rapid cycling. When taken with mood stabilizers, the risk of switching is not increased; however, when SSRIs are taken as monotherapy, the risk of switching may be two to three times the average. The changes are not often easy to detect and require monitoring by family and mental health professionals. This switch might happen even with no prior (hypo)manic episodes and might therefore not be foreseen by the psychiatrist. SSRIs are less likely to cause switching compared to older tricyclic antidepressants. Interactions The following drugs may precipitate serotonin syndrome in people on SSRIs: linezolid; monoamine oxidase inhibitors (MAOIs) including moclobemide, phenelzine, tranylcypromine, selegiline and methylene blue; lithium; sibutramine; MDMA (ecstasy); dextromethorphan; tramadol; 5-HTP; pethidine/meperidine; St. John's wort; yohimbe; tricyclic antidepressants (TCAs); serotonin-norepinephrine reuptake inhibitors (SNRIs); buspirone; triptans; and mirtazapine. Painkillers of the NSAID family may interfere with and reduce the efficacy of SSRIs, and may compound the increased risk of gastrointestinal bleeding caused by SSRI use.
NSAIDs include aspirin, ibuprofen (Advil, Nurofen), and naproxen (Aleve). There are a number of potential pharmacokinetic interactions between the various individual SSRIs and other medications. Most of these arise from the fact that every SSRI has the ability to inhibit certain cytochrome P450 enzymes. Inhibition of individual enzymes is commonly graded on a scale from 0 (no inhibition) through + (mild/weak inhibition) and ++ (moderate inhibition) to +++ (strong/potent inhibition). The CYP2D6 enzyme is entirely responsible for the metabolism of hydrocodone, codeine and dihydrocodeine to their active metabolites (hydromorphone, morphine, and dihydromorphine, respectively), which in turn undergo phase 2 glucuronidation. These opioids (and to a lesser extent oxycodone, tramadol, and methadone) have interaction potential with selective serotonin reuptake inhibitors. The concomitant use of some SSRIs (paroxetine and fluoxetine) with codeine may decrease the plasma concentration of the active metabolite morphine, which may result in reduced analgesic efficacy. Another important interaction of certain SSRIs involves paroxetine, a potent inhibitor of CYP2D6, and tamoxifen, an agent used commonly in the treatment and prevention of breast cancer. Tamoxifen is a prodrug that is metabolised by the hepatic cytochrome P450 enzyme system, especially CYP2D6, to its active metabolites. Concomitant use of paroxetine and tamoxifen in women with breast cancer is associated with a higher risk of death, as much as 91 percent higher in women who used it the longest. Overdose SSRIs appear safer in overdose when compared with traditional antidepressants, such as the tricyclic antidepressants. This relative safety is supported both by case series and by studies of deaths per number of prescriptions. However, case reports of SSRI poisoning have indicated that severe toxicity can occur, and deaths have been reported following massive single ingestions, although this is exceedingly uncommon when compared to the tricyclic antidepressants. Because of the wide therapeutic index of the SSRIs, most patients will have mild or no symptoms following moderate overdoses. The most commonly reported severe effect following SSRI overdose is serotonin syndrome; serotonin toxicity is usually associated with very high overdoses or multiple drug ingestion. Other reported significant effects include coma, seizures, and cardiac toxicity. Poisoning is also known in animals, and some toxicity information is available for veterinary treatment. Discontinuation syndrome Serotonin reuptake inhibitors should not be abruptly discontinued after extended therapy, and whenever possible should be tapered over several weeks to minimize discontinuation-related symptoms, which may include nausea, headache, dizziness, chills, body aches, paresthesias, insomnia, and brain zaps. Paroxetine may produce discontinuation-related symptoms at a greater rate than other SSRIs, though qualitatively similar effects have been reported for all SSRIs. Discontinuation effects appear to be less for fluoxetine, perhaps owing to its long half-life and the natural tapering effect associated with its slow clearance from the body. One strategy for minimizing SSRI discontinuation symptoms is to switch the patient to fluoxetine and then taper and discontinue the fluoxetine. Mechanism of action Serotonin reuptake inhibition In the brain, messages are passed from one nerve cell to another via a chemical synapse, a small gap between the cells. The presynaptic cell that sends the information releases neurotransmitters including serotonin into that gap.
The neurotransmitters are then recognized by receptors on the surface of the recipient postsynaptic cell, which upon this stimulation, in turn, relays the signal. About 10% of the neurotransmitters are lost in this process; the other 90% are released from the receptors and taken up again by monoamine transporters into the sending presynaptic cell, a process called reuptake. SSRIs inhibit the reuptake of serotonin. As a result, the serotonin stays in the synaptic gap longer than it normally would, and may repeatedly stimulate the receptors of the recipient cell. In the short run, this leads to an increase in signaling across synapses in which serotonin serves as the primary neurotransmitter. On chronic dosing, the increased occupancy of post-synaptic serotonin receptors signals the pre-synaptic neuron to synthesize and release less serotonin. Serotonin levels within the synapse drop, then rise again, ultimately leading to downregulation of post-synaptic serotonin receptors. Other, indirect effects may include increased norepinephrine output, increased neuronal cyclic AMP levels, and increased levels of regulatory factors such as BDNF and CREB. Owing to the lack of a widely accepted comprehensive theory of the biology of mood disorders, there is no widely accepted theory of how these changes lead to the mood-elevating and anti-anxiety effects of SSRIs. Their effects on serotonin blood levels, which take weeks to take effect, appear to be largely responsible for their slow-to-appear psychiatric effects. SSRIs mediate their action largely with high occupancy in a total of all serotonin transporters within the brain and through this slow downstream changes of large brain regions at therapeutic concentrations, whereas MDMA leads to an excess serotonin release in a short run. This could explain the absence of a "high" by antidepressants and in addition the contrary ability of SSRIs in expressing neuroprotective actions to the neurotoxic abilities of MDMA. Sigma receptor ligands In addition to their actions as reuptake inhibitors of serotonin, some SSRIs are also, coincidentally, ligands of the sigma receptors. Fluvoxamine is an agonist of the σ1 receptor, while sertraline is an antagonist of the σ1 receptor, and paroxetine does not significantly interact with the σ1 receptor. None of the SSRIs have significant affinity for the σ2 receptor. Fluvoxamine has by far the strongest activity of the SSRIs at the σ1 receptor. High occupancy of the σ1 receptor by clinical dosages of fluvoxamine has been observed in the human brain in positron emission tomography (PET) research. It is thought that agonism of the σ1 receptor by fluvoxamine may have beneficial effects on cognition. In contrast to fluvoxamine, the relevance of the σ1 receptor in the actions of the other SSRIs is uncertain and questionable due to their very low affinity for the receptor relative to the SERT. Anti-inflammatory effects The role of inflammation and the immune system in depression has been extensively studied. The evidence supporting this link has been shown in numerous studies over the past ten years. Nationwide studies and meta-analyses of smaller cohort studies have uncovered a correlation between pre-existing inflammatory conditions such as type 1 diabetes, rheumatoid arthritis (RA), or hepatitis, and an increased risk of depression. Data also shows that using pro-inflammatory agents in the treatment of diseases like melanoma can lead to depression. 
Several meta-analytical studies have found increased levels of proinflammatory cytokines and chemokines in depressed patients. This link has led scientists to investigate the effects of antidepressants on the immune system. SSRIs were originally invented with the goal of increasing levels of available serotonin in the extracellular spaces. However, the delayed response between when patients first begin SSRI treatment to when they see effects has led scientists to believe that other molecules are involved in the efficacy of these drugs. To investigate the apparent anti-inflammatory effects of SSRIs, both Kohler et al. and Więdłocha et al. conducted meta-analyses which have shown that after antidepressant treatment the levels of cytokines associated with inflammation are decreased. A large cohort study conducted by researchers in the Netherlands investigated the association between depressive disorders, symptoms, and antidepressants with inflammation. The study showed decreased levels of interleukin (IL)-6, a cytokine that has proinflammatory effects, in patients taking SSRIs compared to non-medicated patients. Treatment with SSRIs has shown reduced production of inflammatory cytokines such as IL-1β, tumor necrosis factor (TNF)-α, IL-6, and interferon (IFN)-γ, which leads to a decrease in inflammation levels and subsequently a decrease in the activation level of the immune response. These inflammatory cytokines have been shown to activate microglia which are specialized macrophages that reside in the brain. Macrophages are a subset of immune cells responsible for host defense in the innate immune system. Macrophages can release cytokines and other chemicals to cause an inflammatory response. Peripheral inflammation can induce an inflammatory response in microglia and can cause neuroinflammation. SSRIs inhibit proinflammatory cytokine production which leads to less activation of microglia and peripheral macrophages. SSRIs not only inhibit the production of these proinflammatory cytokines, they also have been shown to upregulate anti-inflammatory cytokines such as IL-10. Taken together, this reduces the overall inflammatory immune response. In addition to affecting cytokine production, there is evidence that treatment with SSRIs has effects on the proliferation and viability of immune system cells involved in both innate and adaptive immunity. Evidence shows that SSRIs can inhibit proliferation in T-cells, which are important cells for adaptive immunity and can induce inflammation. SSRIs can also induce apoptosis, programmed cell death, in T-cells. The full mechanism of action for the anti-inflammatory effects of SSRIs is not fully known. However, there is evidence for various pathways to have a hand in the mechanism. One such possible mechanism is the increased levels of cyclic adenosine monophosphate (cAMP) as a result of interference with activation of protein kinase A (PKA), a cAMP dependent protein. Other possible pathways include interference with calcium ion channels, or inducing cell death pathways like MAPK and Notch signaling pathway. The anti-inflammatory effects of SSRIs have prompted studies of the efficacy of SSRIs in the treatment of autoimmune diseases such as multiple sclerosis, RA, inflammatory bowel diseases, and septic shock. These studies have been performed in animal models but have shown consistent immune regulatory effects. Fluoxetine, an SSRI, has also shown efficacy in animal models of graft vs. host disease. 
SSRIs have also been used successfully as pain relievers in patients undergoing oncology treatment. The effectiveness of this has been hypothesized to be at least in part due to the anti-inflammatory effects of SSRIs. Pharmacogenetics Large bodies of research are devoted to using genetic markers to predict whether patients will respond to SSRIs or have side effects that will cause their discontinuation, although these tests are not yet ready for widespread clinical use. Versus TCAs SSRIs are described as 'selective' because they affect only the reuptake pumps responsible for serotonin, as opposed to earlier antidepressants, which affect other monoamine neurotransmitters as well, and as a result, SSRIs have fewer side effects. There appears to be no significant difference in effectiveness between SSRIs and tricyclic antidepressants, which were the most commonly used class of antidepressants before the development of SSRIs. However, SSRIs have the important advantage that their toxic dose is high, and, therefore, they are much more difficult to use as a means to commit suicide. Further, they have fewer and milder side effects. Tricyclic antidepressants also have a higher risk of serious cardiovascular side effects, which SSRIs lack. SSRIs act on signal pathways such as cyclic adenosine monophosphate (cAMP) on the postsynaptic neuronal cell, which leads to the release of brain-derived neurotrophic factor (BDNF). BDNF enhances the growth and survival of cortical neurons and synapses. Pharmacokinetics SSRIs vary in their pharmacokinetic properties. List of SSRIs Marketed Antidepressants Citalopram (Celexa) Escitalopram (Lexapro) Fluoxetine (Prozac) Fluvoxamine (Luvox) Paroxetine (Paxil) Sertraline (Zoloft) Others Dapoxetine (Priligy) Discontinued Antidepressants Indalpine (Upstène) Zimelidine (Zelmid) Never marketed Antidepressants Alaproclate (GEA-654) Centpropazine Cericlamine (JO-1017) Femoxetine (Malexil; FG-4963) Ifoxetine (CGP-15210) Omiloxetine Panuramine (WY-26002) Pirandamine (AY-23713) Seproxetine ((S)-norfluoxetine) Related drugs Although described as SNRIs, duloxetine (Cymbalta), venlafaxine (Effexor), and desvenlafaxine (Pristiq) are in fact relatively selective as serotonin reuptake inhibitors (SRIs). They are about at least 10-fold selective for inhibition of serotonin reuptake over norepinephrine reuptake. The selectivity ratios are approximately 1:30 for venlafaxine, 1:10 for duloxetine, and 1:14 for desvenlafaxine. At low doses, these SNRIs act mostly as SSRIs; only at higher doses do they also prominently inhibit norepinephrine reuptake. Milnacipran (Ixel, Savella) and its stereoisomer levomilnacipran (Fetzima) are the only widely marketed SNRIs that inhibit serotonin and norepinephrine to similar degrees, both with ratios close to 1:1. Vilazodone (Viibryd) and vortioxetine (Trintellix) are SRIs that also act as modulators of serotonin receptors and are described as serotonin modulators and stimulators (SMS). Vilazodone is a 5-HT1A receptor partial agonist while vortioxetine is a 5-HT1A receptor agonist and 5-HT3 and 5-HT7 receptor antagonist. Litoxetine (SL 81–0385) and lubazodone (YM-992, YM-35995) are similar drugs that were never marketed. They are SRIs and litoxetine is also a 5-HT3 receptor antagonist while lubazodone is also a 5-HT2A receptor antagonist. History Zimelidine was introduced in 1982 and was the first SSRI to be sold. 
Despite its efficacy, a statistically significant increase in cases of Guillain–Barré syndrome among treated patients led to its withdrawal in 1983. Fluoxetine, introduced in 1987, is commonly thought to be the first SSRI to be marketed. Controversy A study examining publication of results from FDA-evaluated antidepressants concluded that those with favorable results were much more likely to be published than those with negative results. Furthermore, an investigation of 185 meta-analyses on antidepressants found that 79% of them had authors affiliated in some way with pharmaceutical companies and that they were reluctant to report caveats for antidepressants. David Healy has argued that warning signs were available for many years prior to regulatory authorities moving to put warnings on antidepressant labels that they might cause suicidal thoughts. At the time these warnings were added, others argued that the evidence for harm remained unpersuasive, and some continued to do so after the warnings were added. In other organisms SSRIs are commonly found as environmental contaminants near human settlements. Veterinary use An SSRI (fluoxetine) has been approved for veterinary use in the treatment of canine separation anxiety.
Biology and health sciences
Psychiatric drugs
Health
32039834
https://en.wikipedia.org/wiki/Dental%20avulsion
Dental avulsion
Dental avulsion is the complete displacement of a tooth from its socket in alveolar bone owing to trauma, such as can be caused by a fall, road traffic accident, assault, sports, or occupational injury. Typically, a tooth is held in place by the periodontal ligament, which becomes torn when the tooth is knocked out. Avulsions of primary teeth are more common in young children as they learn to move independently (walk and run), and can also result from child abuse. Avulsed deciduous (primary) teeth should not be replanted because of the risk of damaging the developing permanent tooth germ. Pulp necrosis with draining fistula, crown discoloration and external root resorption are reported consequences of primary tooth replantation. Tooth dilaceration, impaction and deviation from the proper eruption path have been reported to have occurred in permanent teeth as a result of reimplantation of primary teeth. Avulsed permanent teeth, however, may be replanted, i.e., returned to the socket. Immediate replantation is considered ideal, but this may not be possible if the patient suffered other serious injuries. If properly preserved, teeth may be replanted up to one hour after avulsion. The success of delayed replantation depends on the survival of the cells remaining on the root surface. Storage in an environment similar to the tooth socket can protect these cells until replantation can be attempted. Prevention Contact sports carry a significant risk of dental injury, which can be reduced by wearing a mouthguard or helmet. Mouthguards are often less effective if not fitted properly to the teeth. Despite their wide availability, the use of mouthguards is relatively uncommon. Many people do not use them even in situations that carry a high risk of dental injury, or when their use is mandated. In addition, mouthguards may be dislodged from the wearer's mouth, leaving the teeth unprotected. Certain occlusal characteristics, such as class II malocclusions with increased overjet, are associated with a higher risk of dental trauma. These conditions can be corrected by an orthodontist, reducing the risk of injury during sports-related activities. Risk factors Risk factors for dental avulsion include post-normal occlusion, an over-jet exceeding 4 mm, a short upper lip, incompetent lips, and mouth breathing. Management Dental avulsion is a true dental emergency in which prompt management affects the prognosis of the tooth. Replantation of the tooth within 15 minutes is associated with the best prognosis, as periodontal ligament (PDL) cells are still viable. Total extra-oral dry time of more than 60 minutes, regardless of storage media, has a poor prognosis. The avulsed permanent tooth should be gently but well rinsed with saline, with care taken not to damage the surface of the root, which may have living periodontal fibers and cells. Once the tooth and mouth are clean, an attempt can be made to re-plant the tooth in its original socket within the alveolar bone, and it should be splinted (stabilized) by a dentist for several weeks. Failure to re-plant the avulsed tooth within the first 40 minutes after the injury may result in a less favorable prognosis for the tooth. If the tooth cannot be immediately replaced in its socket, follow the directions for any knocked-out (avulsed) teeth kit, or place it in cold milk or saliva and take it to an emergency room or a dentist. If the mouth is sore or injured, cleansing of the wound may be necessary, along with stitches, local anesthesia, and an update of tetanus immunization if the mouth was contaminated with soil.
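The timing thresholds described above (roughly 15 minutes for the best prognosis and more than 60 minutes of dry time for a poor one) amount to a simple triage rule. The following Python sketch is an illustration of that rule only; the function name and the coarse prognosis labels are assumptions made for the example, and it is not clinical guidance.

```python
# Illustrative sketch of the timing-based triage described above; the 15- and
# 60-minute thresholds come from the text, the labels are simplified.

def avulsion_prognosis(dry_minutes: float, in_storage_medium: bool) -> str:
    """Rough prognosis class for an avulsed permanent tooth."""
    if dry_minutes > 60:
        return "poor: PDL cells likely non-viable regardless of storage medium"
    if dry_minutes <= 15:
        return "best: PDL cells likely viable; replant immediately"
    if in_storage_medium:
        return "guarded: cells compromised but may be preserved by milk or saliva"
    return "guarded: replant or place the tooth in a storage medium as soon as possible"

print(avulsion_prognosis(dry_minutes=10, in_storage_medium=False))
print(avulsion_prognosis(dry_minutes=90, in_storage_medium=True))
```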
Management of injured primary teeth differs from management of permanent teeth; an avulsed primary tooth should not be re-planted (to avoid damage to the permanent dental crypt). Although dentists advise that the best treatment for an avulsed tooth is immediate replantation, for a variety of reasons this can be difficult for the layperson. The teeth are often covered with debris. This debris must be washed off with a physiological solution and not scrubbed. Often multiple teeth are knocked out, and the person will not know to which tooth socket an individual tooth belongs. The injured victim may have other more serious injuries that require more immediate attention, or injuries such as a severely lacerated, bleeding lip or gum that prevent easy visualization of the socket. Pain may be severe, and the person may resist replantation of the teeth. People may, in light of infectious diseases (e.g., HIV), fear handling the teeth or touching the blood associated with them. If immediate replantation is not possible, the teeth should be placed in an appropriate storage solution and brought to a dentist who can then replant them. The dentist will clean the socket, wash the teeth if necessary, and replant them into their sockets, then splint them to other unaffected teeth for a maximum of two weeks. Properly handled, even replantation of periodontally compromised permanent teeth in older patients under good maintenance has been reported, with splinting extended for over 4 weeks because periodontal disease reduces the support structure for the root. Dental pulp of the avulsed teeth should be removed within 2 weeks of replantation and the teeth should receive root canal therapy. In addition, as recommended in all cases of dental trauma, good oral hygiene with 0.12% chlorhexidine gluconate mouthwash, a soft and cold diet, and avoidance of smoking for several days may provide a favorable condition for periodontal ligament regeneration. Initial assessment When a patient arrives at the dentist they should be seen immediately. If the tooth has not been placed in a suitable storage medium, the dentist will do this first. A thorough extra-oral and intra-oral examination should be performed. The clinician should consider the age of the patient, the history of the injury, the status of the tooth root apex, and whether the history is consistent with the clinical findings. It is advisable to check the patient's tetanus status. If there is concern about non-accidental injury, then child protection procedures should be followed.[5] Re-implantation Prior to the beginning of the procedure, a local anesthetic should be administered to the palatal/lingual tissues to minimize discomfort. Gentle irrigation with a saline solution should be performed, as this removes any clots within the socket that could prevent proper re-positioning of the tooth into its original position. The tooth should always be handled via the enamel on the crown, not the root. Wash the root surface with saline, being careful not to scrub it, as this may crush the delicate cells. Any stubborn debris can be removed by agitating it in the storage medium or by rinsing under a stream of saline.[5] Stabilize the tooth for 2 weeks using a passive and flexible wire (0.016″ or 0.4 mm). Alternatively, composite and nylon fishing line can be used to create a flexible splint. If associated with an alveolar fracture, a more rigid splint may be placed for up to 4 weeks. Systemic antibiotic therapy may be recommended.
The patient should be asked to avoid contact sports, eat a soft diet, brush their teeth with a soft toothbrush after each meal, and use Chlorhexidine (0.12%) mouth rinse twice a day for 2 weeks. [5] Biologic basis for success of replantation following avulsion Every tooth is connected to its surrounding bone by the periodontal ligament. The tooth receives its nourishment through this ligament. When a tooth is knocked out, this ligament is stretched and torn. If the torn periodontal ligament can be kept alive, the tooth can be replanted, and the ligament will reattach, and the tooth can be maintained in its socket. The torn ligament that stays on the socket wall, since it remains connected to the bone and blood supply, is naturally kept alive. However, the ligament cells that remain on the tooth root lose their blood and nutrition supply and must be artificially maintained. They must be protected from two potentially destructive processes: cell crushing and loss of normal cell metabolism. All treatment between the time of the accident and the ultimate replantation must be focused on preventing these two possibilities. Prevention of cell crushing When teeth are knocked out, they end up on an artificial surface: the floor, the ground or material such as carpeting. If the surface is hard, the tooth root cells will be traumatized. Since the cells remaining on the tooth root are very delicate, additional trauma to the PDL cells must be avoided so as to avoid more cell crushing. This damage can occur while picking the tooth up and/or during transportation to the dentist. When a tooth is picked up, it should always be grasped by the enamel on the crown. Finger pressure on the tooth root cells will cause cell crushing. Any attempt to clean off any debris should be avoided. Debris should always be washed off gently with, at the very least, a physiologic saline. Even with the use of a physiologic saline, the "scrubbing" of the tooth root to remove debris must be avoided. When placed in a physiologic solution, the tooth should be gently agitated to permit the cleansing of the tooth root. At the same time that this agitation occurs, the bumping of the tooth root against a hard surface such as glass, plastic or even cardboard must also be avoided. For the same reasons, the method in which the knocked-out teeth are transported must be carefully selected. Placing the knocked-out teeth by transporting in tissues and handkerchiefs can be damaging and transporting them in glass or cardboard containers can also be potentially damaging to the cells. In addition to the potential damage that the hard surface can cause, glass containers have the added possibility of breakage or leakage of the physiologic storage fluid. If the glass container does not have a tightly fitting top, then during the transportation, the physiologic storage solution can spill out and the teeth can fall, once again, on the floor and, at the same time, be out of a physiologic environment. Maintenance of normal cell metabolism Normally metabolizing tooth root cells have an internal cell pressure (osmolality) of 280–300 mOs and a pH of 7.2. When there is an uninterrupted blood supply, all of the metabolites (calcium, phosphate, potassium) and glucose that the cells require are provided. When the tooth is knocked out, this normal blood supply is cut off and within 15 minutes most of the stored metabolites have been depleted and the cells will begin to die. 
Within one to two hours, enough cells will die that rejection of the tooth by the body at a later time is the usual outcome. The method by which the body rejects the replanted tooth is a process called "replacement root resorption". During this process, the tooth root cells become necrotic (dead) and will activate the immunologic mechanism of the body to attempt to remove this necrotic layer and literally eats away the tooth root. This is called "root resorption". It is a slow, but non-painful, process that is sometimes not observed by x-rays for years. Once this process starts, it is irreversible, and the tooth will eventually fall out. In growing children, this can cause bone development problems because the replacement resorption (also termed ankylosis) attaches the tooth firmly to the jawbone and stops normal tooth eruption and impedes normal jaw growth. Research has shown that the critical factor for reduction of the death of the tooth root cells and the subsequent root replacement resorption following reimplantation of knocked-out teeth is maintenance of normal cell physiology and metabolism of the cells left on the tooth root while the tooth is out of the socket. In order to maintain this normalcy, the environment in which the teeth are stored must supply the optimum internal cell pressure, cell nutrients and pH. Storage media Immediate replantation, where the tooth is quickly reinserted into its socket, is the best course of action to preserve the tooth's viability and function. However, due to various factors such as the condition of the avulsed tooth, patient circumstances, or delay in accessing dental care, immediate replantation might not always be possible. In cases where immediate replantation is not feasible, selecting an appropriate storage medium to preserve the viability of the periodontal ligament (PDL) cells becomes paramount. These cells are essential for the successful reintegration of the tooth into its socket, aiding the healing process and preventing resorption. Storage media serve the critical role of maintaining cell viability by providing an environment with suitable pH, osmolality, and nutrient content, thereby sustaining cell health until the tooth can be properly replanted. The International Association of Dental Traumatology (IADT) guidelines stress the importance of minimizing the tooth's dry time and choosing an effective storage medium to enhance replantation success. Universally considered the most preferred storage medium for avulsed teeth, milk's effectiveness is attributed to its pH level and osmolality, which closely resemble the natural conditions necessary for sustaining PDL cell viability. Milk's widespread availability, combined with its nutritional content, provides an optimal environment that supports the survival of PDL cells during the critical period before replantation. Research indicates that the type of milk (e.g., whole, skimmed, or low-fat) can play a role in the preservation efficacy, with whole milk often recommended for its balanced nutrient composition. However, any readily available milk can serve as an effective temporary storage medium, making it a practical choice in emergency situations. Hank's Balanced Salt Solution (HBSS) is a medically formulated solution containing essential nutrients designed to preserve avulsed teeth until they can be replanted. HBSS is distinguished by its balanced pH and osmolality, closely simulating the natural conditions necessary for the survival of periodontal ligament (PDL) cells. 
The solution has demonstrated effectiveness in maintaining PDL cell viability for up to 48 hours. Despite its effectiveness, HBSS is not as commonly available for immediate use as household items like milk, which poses a challenge in emergency dental care situations. However, it remains highly recommended in dental trauma care, especially in commercial preparations tailored for dental emergencies. These preparations are specifically designed to replenish lost metabolites, providing an optimal environment for the temporary storage of avulsed teeth and significantly enhancing the prospect of successful replantation. Recent evidence suggests oral rehydration solutions, propolis, rice water, and even cling film might also be beneficial for preserving cell viability, though further validation is needed. Saline solution and pure water are discouraged due to their lack of essential nutrients and hypotonic nature, respectively, which can lead to decreased viability of PDL cells. Other alternatives like coconut water, egg white, and various probiotic solutions have shown mixed effectiveness. However, ongoing research continues to explore the viability of other natural and synthetic substances as potential storage media. The exploration into these alternatives aims to identify solutions that might offer practical benefits similar to or better than those provided by milk, especially in scenarios where milk may not be immediately available. Prognosis Despite the treatment provided, dental avulsion carries one of the poorest outcomes with 73–96% of the replanted teeth eventually being lost. There are three main factors which significantly influence the prognosis of the tooth. These include: The extent of damage to the periodontal ligament (PDL) at the time of injury The storage conditions of the avulsed tooth The duration of time the tooth was out of its socket prior to replantation Additionally, the choice of treatment is closely related to the maturity of the root (open or closed apex) and the condition of the PDL cells, which is dependent on the time out of the mouth and the storage medium used. Minimizing the dry time is crucial for the survival of the PDL cells, with viability sharply declining after an extra-alveolar dry time of 30 minutes. From a clinical perspective, assessing the condition of the PDL cells is vital, classifying the avulsed tooth into one of three groups before treatment. These include: PDL cells are most likely viable, replanted immediately or within a short time; PDL cells may be viable but compromised, stored in a medium like milk or HBSS with a dry time of less than 60 minutes; PDL cells are likely non-viable, with a dry time of more than 60 minutes, regardless of storage medium. This classification guides dentists in prognosis and treatment decisions, though exceptions occur. PDL healing is the primary outcome measure when assessing interventions for tooth avulsion. When the healing of the PDL is unfavorable it means that there is no longer protection for the root from the surrounding alveolar bone. The bone that surrounds the tooth is continually undergoing physiological remodeling. Over time, the root is gradually replaced by bone, which leads to tooth loss. The results of replanting permanent incisor teeth can be divided into short, medium and long-term survival of the tooth. If the tooth is replanted it acts in the short term to maintain space, maintain bone and provide good to excellent aesthetics. 
If unfavorable healing has occurred, the tooth can last into the medium term for 2-10+ years depending on the speed of bone turnover. Long-term survival of the tooth only happens when favorable healing of the periodontal ligament has occurred. If this happens, the tooth can be expected to survive as long as any other tooth. Epidemiology Research has shown that more than five million teeth are knocked out each year in the United States. Dental avulsion is a type of dental trauma, and the prevalence of dental trauma is estimated at 17.5% and varies with geographical area. Although the occurrence of dental avulsion is relatively low, it is the fourth most prevalent type of dental trauma. Dental avulsion is more prevalent in males, who are three times more likely to suffer from it than females. Up to 25% of school-aged children and military trainees experience some kind of dental trauma each year. The occurrence of dental avulsion in school-aged children ranges from 0.5 to 16% of all dental trauma. Many of these teeth are knocked out during school activities or sporting events such as contact sports, football, basketball, and hockey. It is important that laypersons who care for children, work with them, or attend sporting events be educated on this subject. Being informed, and spreading awareness of dental avulsion, its treatment, and its prevention, could help minimize injuries that would do further harm to the victim. History The first reported cases of knocked-out teeth being replanted were by Paré in 1593. In 1706, Pierre Fauchard also reported replanting knocked-out teeth. Wigoper in 1933 used a cast gold splint to hold reimplanted teeth in place. In 1959, Lenstrup and Skieller declared that replantation of knocked-out teeth should be considered a temporary procedure because the success rate, at less than 10%, was so poor. In 1966, in a retrospective study, Andreasen theorized that 90% of avulsed teeth could be successfully retained if they were replanted within the first 30 minutes of the accident. In 1974, Cvek showed that removal of the dental pulp following reimplantation was necessary to prevent resorption of the tooth root, and that storage of knocked-out teeth in saline could improve the success of replantation. In 1977, Lindskog et al. showed that the key to retention of knocked-out teeth was to maintain the vitality of the periodontal ligament. In 1980, Blomlof showed that storing the periodontal ligament cells in a biocompatible medium could extend the extra-oral time to four hours or more. He found that the best storage medium was a medical research fluid called Hank's Balanced Salt Solution. In this study, it was serendipitously discovered that milk could also maintain cell viability for two hours. In 1981, Andreasen showed that crushing of cells on the tooth root could cause death of the cells and lead to resorption and a reduction in prognosis. In 1983, Matsson et al. showed that soaking in Hank's Balanced Salt Solution for thirty minutes prior to reimplantation could revitalize extracted dogs' teeth that had been dry for 60 minutes. In 1989, a systematic storage device was developed to optimally store and preserve knocked-out teeth. In 1992, Trope et al. showed that extracted dogs' teeth could be stored in Hank's Balanced Salt Solution for up to 96 hours and still maintain significant vitality. In this study, milk was only able to maintain vitality for two hours.
Archaeology In ancient times, ritual dental avulsion was widespread among different cultures around the world. For example, it was common during the Early Holocene (from around 11,500 BP up to 5,000 BP) in North Africa and was occasionally observed in the Natufian culture (14,000 to 11,500 BP). Such tooth avulsion was the intentional removal of one or more teeth, which was done for ritual or aesthetic reasons. It was also used to denote group affiliation. Typically, the maxillary incisors were the teeth most often selected for removal. This practice is still common in parts of Africa.
Biology and health sciences
Types
Health
24997037
https://en.wikipedia.org/wiki/Cannabaceae
Cannabaceae
Cannabaceae is a small family of flowering plants, known as the hemp family. As now circumscribed, the family includes about 170 species grouped in about 11 genera, including Cannabis (hemp), Humulus (hops) and Celtis (hackberries). Celtis is by far the largest genus, containing about 100 species. Cannabaceae is a member of the Rosales. Members of the family are erect or climbing plants with petalless flowers and dry, one-seeded fruits. Hemp (Cannabis) and hop (Humulus) are the most economically important species. Other than a shared evolutionary origin, members of the family have few common characteristics; some are trees (e.g. Celtis), others are herbaceous plants (e.g. Cannabis). Description Members of this family can be trees (e.g. Celtis), erect herbs (e.g. Cannabis), or twining herbs (e.g. Humulus). Leaves are often more or less palmately lobed or palmately compound and always bear stipules. Cystoliths are always present and some members of this family possess laticifers. Cannabaceae are often dioecious (distinct male and female plants). The flowers are actinomorphic (radially symmetrical) and not showy, as these plants are pollinated by the wind. As an adaptation to this kind of pollination, the calyx and corolla are radically reduced to only vestigial remnants found as an adherent perianth coating the seed. A reduced and monophyllous cuplike perigonal bract, properly known as the bracteole, immediately surrounds and protects the seed and is often misnamed as a "calyx". Flowers are grouped to form cymes. In the dioecious plants the male inflorescences are long and look like panicles, while the female ones are shorter and bear fewer flowers. The pistil is made of two connate carpels; the usually superior ovary is unilocular, and there is no fixed number of stamens. The fruit can be an achene or a drupe. Taxonomy Classification Classification systems developed prior to the 1990s, such as those of Cronquist (1981) and Dahlgren (1989), typically recognized the order Urticales, which included the families Cannabaceae, Cecropiaceae, Celtidaceae, Moraceae, Ulmaceae and Urticaceae, as then circumscribed. Molecular data from the 1990s onwards showed that these families were actually embedded within the order Rosales, so that from the first classification by the Angiosperm Phylogeny Group in 1998, they were placed in an expanded Rosales, forming a group which has been called 'urticalean rosids'. Cannabaceae comprises the following genera: Aphananthe Planch. (5 spp.); Cannabis (hemp, 3 spp.); Celtis L. (73–109 spp.); Chaetachme Planch. (1 sp.); Gironniera Gaudich. (6 spp.); Humulus L. (hop, 3 spp.); Lozanella Greenm. (2 spp.); Parasponia Miq. (5–10 spp.); Pteroceltis Maxim. (1 sp.); and Trema Lour. (12–42 spp.). Phylogeny Cannabaceae likely originated in East Asia during the Late Cretaceous. The oldest known pollen typical of members of Cannabaceae is from the Late Cretaceous (Turonian ~94–90 million years ago) of Sarawak, Borneo. Fossils show Cannabaceae were widely distributed in the Northern Hemisphere during the early Cenozoic, though their distribution shifted towards tropical regions in the later Cenozoic due to changing climates. Modern molecular phylogenetic analyses have been used to resolve the relationships among these genera. Uses Carbon dating has revealed that these plants may have been used for ritual/medicinal purposes in Xinjiang, China as early as 494 B.C. Humulus lupulus, the common hop, has been the predominant bittering agent of beer for hundreds of years.
The flowers' resins are responsible for beer's bitterness and, owing to their antimicrobial qualities, for extending its shelf life. The young shoots can be used as a vegetable. Some plants in the genus Cannabis are cultivated as hemp for the production of fiber, as a source of cheap oil, for their nutritious seeds, or for their edible leaves. Others are cultivated for medical or recreational use as dried flowers, extracts, or infused food products. Induced parthenocarpy in pistillate flowers and selective breeding are used to produce either higher or lower yields of tetrahydrocannabinol (THC), other cannabinoids, and terpenes with desired flavors or aromas, such as blueberry, strawberry, or even citrus. Many trees in the genus Celtis are grown for landscaping and ornamental purposes, and the bark of Pteroceltis is used to produce high-end Chinese rice paper.
Biology and health sciences
Rosales
Plants
24998247
https://en.wikipedia.org/wiki/Vitamin%20D
Vitamin D
Vitamin D is a group of structurally related, fat-soluble compounds responsible for increasing intestinal absorption of calcium, magnesium, and phosphate, along with numerous other biological functions. In humans, the most important compounds within this group are vitamin D3 (cholecalciferol) and vitamin D2 (ergocalciferol). Unlike the other twelve vitamins, vitamin D is only conditionally essential: in preindustrial societies, people had adequate exposure to sunlight and the vitamin effectively functioned as a hormone, as the primary natural source of vitamin D was the synthesis of cholecalciferol in the lower layers of the skin’s epidermis, triggered by a photochemical reaction with ultraviolet B (UVB) radiation from sunlight. Cholecalciferol and ergocalciferol can also be obtained through diet and dietary supplements. Foods such as the flesh of fatty fish are good natural sources of vitamin D; there are few other foods where it naturally appears in significant amounts. In the U.S. and other countries, cow's milk and plant-based milk substitutes are fortified with vitamin D3, as are many breakfast cereals. Government dietary recommendations typically assume that all of a person's vitamin D is taken by mouth, given the potential for insufficient sunlight exposure due to urban living, cultural choices about the amount of clothing worn when outdoors, and the use of sunscreen because of concerns about safe levels of sunlight exposure, including the risk of skin cancer. In reality, for most people skin synthesis contributes more than dietary sources. Cholecalciferol is converted in the liver to calcifediol (also known as calcidiol or 25-hydroxycholecalciferol), while ergocalciferol is converted to ercalcidiol (25-hydroxyergocalciferol). These two vitamin D metabolites, collectively referred to as 25-hydroxyvitamin D or 25(OH)D, are measured in serum to assess a person's vitamin D status. Calcifediol is further hydroxylated by the kidneys and certain immune cells to form calcitriol (1,25-dihydroxycholecalciferol; 1,25(OH)2D), the biologically active form of vitamin D. Calcitriol attaches to vitamin D receptors, which are nuclear receptors found in various tissues throughout the body. The discovery of the vitamin in 1922 was due to efforts to identify the dietary deficiency responsible for rickets in children. Adolf Windaus received the Nobel Prize in Chemistry in 1928 for his work on the constitution of sterols and their connection with vitamins. Today, government food fortification programs in some countries, and recommendations to consume vitamin D supplements, are intended to prevent or treat rickets and osteomalacia caused by vitamin D deficiency. There are many other health conditions linked to vitamin D deficiency. However, evidence for health benefits of vitamin D supplementation in individuals who are already vitamin D sufficient is lacking. Types Several forms (vitamers) of vitamin D exist, with the two major forms being vitamin D2 or ergocalciferol, and vitamin D3 or cholecalciferol. The common-use term "vitamin D" refers to both D2 and D3, which were chemically characterized, respectively, in 1931 and 1935. Vitamin D3 was shown to result from the ultraviolet irradiation of 7-dehydrocholesterol. Although a chemical nomenclature for vitamin D forms was recommended in 1981, alternative names remain commonly used. Chemically, the various forms of vitamin D are secosteroids, meaning that one of the bonds in the steroid rings is broken.
The structural difference between vitamin D2 and vitamin D3 lies in the side chain: vitamin D2 has a double bond between carbons 22 and 23, and a methyl group on carbon 24. Vitamin D analogues have also been synthesized. Biology The active vitamin D metabolite, calcitriol, exerts its biological effects by binding to the vitamin D receptor (VDR), which is primarily located in the nuclei of target cells. When calcitriol binds to the VDR, it enables the receptor to act as a transcription factor, modulating the gene expression of transport proteins involved in calcium absorption in the intestine, such as TRPV6 and calbindin. The VDR is part of the nuclear receptor superfamily of steroid hormone receptors, which are hormone-dependent regulators of gene expression. These receptors are expressed in cells across most organs. VDR expression decreases with age. Activation of VDR in the intestine, bone, kidney, and parathyroid gland cells plays a crucial role in maintaining calcium and phosphorus levels in the blood, a process that is assisted by parathyroid hormone and calcitonin, thereby supporting bone health. VDR also regulates cell proliferation and differentiation. Additionally, vitamin D influences the immune system, with VDRs being expressed in several types of white blood cells, including monocytes and activated T and B cells. Deficiency Worldwide, more than one billion people - infants, children, adults and elderly - can be considered vitamin D deficient, with reported percentages dependent on what measurement is used to define "deficient." Deficiency is common in the Middle East, Asia, Africa and South America, but also exists in North America and Europe. Dark-skinned populations in North America, Europe and Australia have a higher percentage of deficiency compared to light-skinned populations that had their origins in Europe. Serum 25(OH)D concentration is used as a biomarker for vitamin D deficiency. Units of measurement are either ng/mL or nmol/L, with one ng/mL equal to 2.5 nmol/L. There is no consensus on defining vitamin D deficiency, insufficiency, sufficiency, or optimal levels for all aspects of health. According to the US Institute of Medicine Dietary Reference Intake Committee, a concentration below 30 nmol/L significantly increases the risk of rickets caused by vitamin D deficiency in infants and young children, and reduces absorption of dietary calcium from the normal range of 60–80% to as low as 15%, whereas above 40 nmol/L is needed to prevent osteomalacia bone loss in the elderly, and above 50 nmol/L is considered sufficient for all health needs. Other sources have defined deficiency as less than 25 nmol/L, insufficiency as 30–50 nmol/L and optimal as greater than 75 nmol/L. Part of the controversy arises because studies have reported differences in serum levels of 25(OH)D between ethnic groups, with studies pointing to genetic as well as environmental reasons behind these variations. African-American populations have lower serum 25(OH)D than age-matched white populations, but at all ages have superior calcium absorption efficiency, a higher bone mineral density and, as elderly, a lower risk of osteoporosis and fractures. Supplementation in this population to achieve proposed 'standard' concentrations could in theory cause harmful vascular calcification. Using the 25(OH)D assay to screen the generally healthy population and identify and treat individuals is considered less cost-effective than a government-mandated fortification program.
Instead, there is a recommendation that testing should be limited to those showing symptoms of vitamin D deficiency or who have health conditions known to cause vitamin D deficiency. Causes Causes of insufficient vitamin D synthesis in the skin include insufficient exposure to UVB light from sunlight due to living at high latitudes (farther from the equator, with resultant shorter daylight hours in winter). Serum concentration by the end of winter can be one-third to one-half lower than at the end of summer. The prevalence of vitamin D deficiency increases with age due to a decrease in 7-dehydrocholesterol synthesis in the skin and a decline in kidney capacity to convert calcidiol to calcitriol, the latter seen to a greater degree in people with chronic kidney disease. Despite these age effects, elderly people can still synthesize sufficient calcitriol if enough skin is exposed to UVB light. Absent that, a dietary supplement is recommended. Other causes of insufficient synthesis are sunlight being blocked by air pollution, urban/indoor living, long-term hospitalizations and stays in extended care facilities, cultural or religious lifestyle choices that favor sun-blocking clothing, recommendations to use sun-blocking clothing or sunscreen to reduce the risk of skin cancer, and lastly, the UVB-blocking nature of dark skin. Consumption of foods that naturally contain vitamin D is rarely sufficient to maintain the recommended serum concentration of 25(OH)D in the absence of the contribution of skin synthesis. Fractional contributions are roughly 20% diet and 80% sunlight. Vegans have lower dietary intake of vitamin D and lower serum 25(OH)D compared to omnivores, with lacto-ovo-vegetarians falling in between due to the vitamin content of egg yolks and fortified dairy products. Governments have established mandated or voluntary food fortification programs to bridge the difference in, respectively, 15 and 10 countries. The United States is one of the countries with mandated fortification. The original fortification practices, circa early 1930s, were limited to cow's milk, which had a large effect on reducing infant and child rickets. In July 2016 the US Food and Drug Administration approved the addition of vitamin D to plant milk beverages intended as milk alternatives, such as beverages made from soy, almond, coconut and oats. At an individual level, people may choose to consume a multi-vitamin/mineral product or else a vitamin-D-only product. There are many disease states, medical treatments and medications that put people at risk for vitamin D deficiency. Chronic diseases that increase risk include kidney and liver failure, Crohn's disease, inflammatory bowel disease and malabsorption syndromes such as cystic fibrosis, and hyper- or hypo-parathyroidism. Obesity sequesters vitamin D in fat tissues, thereby lowering serum levels, while bariatric surgery to treat obesity interferes with dietary vitamin D absorption, also causing deficiency. Medications include antiretrovirals, anti-seizure drugs, glucocorticoids, systemic antifungals such as ketoconazole, cholestyramine and rifampicin. Organ transplant recipients receive immunosuppressive therapy that is associated with an increased risk of developing skin cancer, so they are advised to avoid sunlight exposure and to take a vitamin D supplement.
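The unit conversion and serum thresholds discussed above can be summarized in a short sketch, shown here in Python. It is illustrative only, not clinical guidance: the cut-offs are the IOM figures quoted earlier, and the category wording is shorthand invented for this example.

NG_PER_ML_TO_NMOL_PER_L = 2.5  # 1 ng/mL = 2.5 nmol/L, per the conversion above

def to_nmol_per_l(value, unit="nmol/L"):
    """Return a serum 25(OH)D concentration expressed in nmol/L."""
    if unit == "ng/mL":
        return value * NG_PER_ML_TO_NMOL_PER_L
    return value

def iom_category(value, unit="nmol/L"):
    """Compare a 25(OH)D value with the IOM cut-offs cited in the text (labels are illustrative)."""
    nmol = to_nmol_per_l(value, unit)
    if nmol < 30:
        return "deficient: increased risk of rickets and reduced calcium absorption"
    if nmol < 40:
        return "below the level needed to prevent osteomalacia bone loss"
    if nmol < 50:
        return "below the level considered sufficient for all health needs"
    return "sufficient per the IOM committee"

print(iom_category(16, "ng/mL"))  # 16 ng/mL = 40 nmol/L
print(iom_category(55))           # already in nmol/L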
Treatment Daily dose regimens are preferred to administration of large doses at weekly or monthly schedules, and D3 may be preferred over D2, but there is a lack of consensus as to optimal type, dose, duration or what to measure to deem success. Daily regimens on the order of 4,000 IU/day (for other than infants) have a greater effect on 25(OH)D recovery from deficiency and a lower risk of side effects compared to weekly or monthly bolus doses, with the latter as high as 100,000 IU. The only advantage of bolus dosing could be better compliance, as bolus dosing is usually administered by a healthcare professional rather than self-administered. While some studies have found that vitamin D3 raises 25(OH)D blood levels faster and remains active in the body longer, others contend that vitamin D2 sources are equally bioavailable and effective for raising and sustaining 25(OH)D. If digestive disorders compromise absorption, then intramuscular injection of up to 100,000 IU of vitamin D3 is therapeutic. Dark skin as deficiency risk Melanin, specifically the sub-type eumelanin, is a biomolecule consisting of linked molecules of the oxidized amino acid tyrosine. It is produced by cells called melanocytes in a process called melanogenesis. In skin, melanin is located in the bottom layer (the stratum basale) of the skin's epidermis. Melanin can be permanently incorporated into skin, resulting in dark skin, or else have its synthesis initiated by exposure to UV radiation, causing the skin to darken as a temporary sun tan. Eumelanin is an effective absorbent of light; the pigment is able to dissipate over 99.9% of absorbed UV radiation. Because of this property, eumelanin is thought to protect skin cells from sunlight's ultraviolet A (UVA) and ultraviolet B (UVB) radiation damage, reducing the risk of skin tissue folate depletion, preventing premature skin aging and reducing the risks of sunburn and skin cancer. Melanin inhibits UVB-powered vitamin D synthesis in the skin. In areas of the world not distant from the equator, abundant, year-round exposure to sunlight means that even dark-skinned populations have adequate skin synthesis. However, when dark-skinned people cover much of their bodies with clothing for cultural or climate reasons, or are living a primarily indoor life in urban conditions, or live at higher latitudes which provide less sunlight in winter, they are at risk for vitamin D deficiency. The last cause has been described as a "latitude-skin color mismatch." To use one country as an example, in the United States, vitamin D deficiency is particularly common among non-white Hispanic and African-American populations. However, despite having on-average serum 25(OH)D concentrations below the 50 nmol/L amount considered sufficient, African Americans have higher bone mineral density and lower fracture risk when compared to European-origin people. Possible mechanisms may include higher calcium retention, lower calcium excretion and greater bone resistance to parathyroid hormone, as well as genetically lower serum vitamin D-binding protein, which would result in adequate bioavailable 25(OH)D despite lower total serum 25(OH)D. The bone density and fracture risk paradox does not necessarily carry over to non-skeletal health conditions such as arterial calcification, cancer, diabetes or all-cause mortality.
There is conflicting evidence that in the African American population, 'deficiency' as currently defined increases the risk of non-skeletal health conditions, and some evidence that supplementation increases risk, including for harmful vascular calcification. African Americans, and by extension other dark-skinned populations, may need different definitions of vitamin D deficiency, insufficiency and adequacy. Infant deficiency Comparative studies carried out in lactating mothers indicate a mean value of vitamin D content in breast milk of 45 IU/liter. This vitamin D content is clearly too low to meet the vitamin D requirement of 400 IU/day recommended by several government organizations ("...as breast milk is not a meaningful source of vitamin D"). The same government organizations recommend that lactating women consume 600 IU/day, but this is insufficient to raise breast milk content enough to deliver the recommended intake. There is evidence that breast milk content can be increased, but because the transfer of the vitamin from the lactating mother's serum to milk is inefficient, this requires that she consume a dietary supplement in excess of the government-set safe upper limit of 4,000 IU/day. Given the shortfall, there are recommendations that breast-fed infants be fed a vitamin D dietary supplement of 400 IU/day during the first year of life. For infants who are not breast-fed, infant formulas are designed to deliver 400 IU/day to an infant consuming a liter of formula per day - a normal volume for a full-term infant after the first month. Excess Vitamin D toxicity, or hypervitaminosis D, is the toxic state of an excess of vitamin D. It is rare, having occurred historically during a time of unregulated fortification of foods, especially those provided to infants, or, more recently, with consumption of high-dose vitamin D dietary supplements following inappropriate prescribing, non-prescribed consumption of high-dose, over-the-counter preparations, or manufacturing errors resulting in content far in excess of what is on the label. Ultraviolet light alone - sunlight or tanning beds - can raise serum 25(OH)D concentration to a bit higher than 100 nmol/L, but not to a level that causes hypervitaminosis D, the reasons being that there is a limiting amount of the precursor 7-dehydrocholesterol synthesized in the skin and a negative feedback in the kidney wherein the presence of calcitriol induces diversion to metabolically inactive 24,25-dihydroxyvitamin D rather than metabolically active calcitriol (1,25-dihydroxyvitamin D). Further metabolism yields calcitroic acid, an inactive water-soluble compound that is excreted in bile. There is no general agreement about the intake levels at which vitamin D may cause harm. From the IOM review, "Doses below 10,000 IU/day are not usually associated with toxicity, whereas doses equal to or above 50,000 IU/day for several weeks or months are frequently associated with toxic side effects including documented hypercalcemia." The normal range for blood concentration of 25-hydroxyvitamin D in adults is 20 to 50 nanograms per milliliter (ng/mL; equivalent to 50 to 125 nmol/L). Blood levels necessary to cause adverse effects in adults are thought to be greater than about 150 ng/mL. An excess of vitamin D causes hypercalcaemia (abnormally high blood concentrations of calcium), which can cause overcalcification of the bones and soft tissues including arteries, heart, and kidneys. Untreated, this can lead to irreversible kidney failure.
Symptoms of vitamin D toxicity may include the following: increased thirst, increased urination, nausea, vomiting, diarrhea, decreased appetite, irritability, constipation, fatigue, muscle weakness, and insomnia. In 2011, the U.S. National Academy of Medicine revised tolerable upper intake levels (UL) to protect against vitamin D toxicity. Before the revision the UL for ages 9+ years was 50 μg/d (2000 IU/d). Per the revision, the UL is defined as "the highest average daily intake of a nutrient that is likely to pose no risk of adverse health effects for nearly all persons in the general population." The U.S. ULs in micrograms (mcg or μg) and International Units (IU) for both males and females, by age, are:
0–6 months: 25 μg/d (1000 IU/d)
7–12 months: 38 μg/d (1500 IU/d)
1–3 years: 63 μg/d (2500 IU/d)
4–8 years: 75 μg/d (3000 IU/d)
9+ years: 100 μg/d (4000 IU/d)
Pregnant and lactating: 100 μg/d (4000 IU/d)
As shown in the Dietary intake section, different government organizations have set different ULs for age groups, but there is accord on the adult maximum of 100 μg/d (4000 IU/d). In contrast, some non-government authors have proposed a safe upper intake level of 250 μg (10,000 IU) per day in healthy adults. In part, this is based on the observation that endogenous skin production with full body exposure to sunlight or use of tanning beds is comparable to taking an oral dose between 250 μg and 625 μg (10,000 IU and 25,000 IU) per day and maintaining blood concentrations on the order of 100 ng/mL. Although in the U.S. the adult UL is set at 4,000 IU/day, over-the-counter products are available at 5,000, 10,000 and even 50,000 IU (the last with directions to take once a week). The percentage of the U.S. population taking over 4,000 IU/day has increased since 1999. Treatment In almost every case, stopping the vitamin D supplementation combined with a low-calcium diet and corticosteroid drugs will allow for a full recovery within a month. Special cases Idiopathic infantile hypercalcemia is caused by a mutation of the CYP24A1 gene, leading to a reduction in the degradation of vitamin D. Infants who have such a mutation have an increased sensitivity to vitamin D and, in case of additional intake, a risk of hypercalcaemia. The disorder can continue into adulthood. Health effects Supplementation with vitamin D is a reliable method for preventing or treating rickets. On the other hand, the effects of vitamin D supplementation on non-skeletal health are uncertain. A review did not find any effect from supplementation on the rates of non-skeletal disease, other than a tentative decrease in mortality in the elderly. Vitamin D supplements do not alter the outcomes for myocardial infarction, stroke or cerebrovascular disease, cancer, bone fractures or knee osteoarthritis. A US Institute of Medicine (IOM) report states: "Outcomes related to cancer, cardiovascular disease and hypertension, and diabetes and metabolic syndrome, falls and physical performance, immune functioning and autoimmune disorders, infections, neuropsychological functioning, and preeclampsia could not be linked reliably with intake of either calcium or vitamin D, and were often conflicting." Evidence for and against each disease state is provided in detail. Some researchers claim the IOM was too definitive in its recommendations and made a mathematical mistake when calculating the blood level of vitamin D associated with bone health.
Members of the IOM panel maintain that they used a "standard procedure for dietary recommendations" and that the report is solidly based on the data. Mortality, all-causes Vitamin D3 supplementation has been tentatively found to lead to a reduced risk of death in the elderly, but the effect has not been deemed pronounced or certain enough to make taking supplements recommendable. Other forms (vitamin D2, alfacalcidol, and calcitriol) do not appear to have any beneficial effects concerning the risk of death. High blood levels appear to be associated with a lower risk of death, but it is unclear if supplementation can result in this benefit. Both an excess and a deficiency in vitamin D appear to cause abnormal functioning and premature aging. The relationship between serum calcifediol concentrations and all-cause mortality is "U-shaped": mortality is elevated at high and low calcifediol levels, relative to moderate levels. Harm from elevated calcifediol appears to occur at a lower level in dark-skinned Canadian and United States populations than in light-skinned populations. Bone health Rickets Rickets, a childhood disease, is characterized by impeded growth and soft, weak, deformed long bones that bend and bow under their weight as children start to walk. Rickets typically appears between 3 and 18 months of age. This condition can be caused by vitamin D, calcium or phosphorus deficiency. Vitamin D deficiency remains the main cause of rickets among young infants in most countries because breast milk is low in vitamin D, and darker skin, social customs, and climatic conditions can contribute to inadequate sun exposure. A post-weaning Western omnivore diet characterized by high intakes of meat, fish, eggs and vitamin D fortified milk is protective, whereas low intakes of those foods and high cereal/grain intake contribute to risk. For young children with rickets, supplementation with vitamin D plus calcium was superior to the vitamin alone for bone healing. Osteomalacia and osteoporosis Characteristics of osteomalacia are softening of the bones, leading to bending of the spine, bone fragility, and increased risk for fractures. Osteomalacia is usually present when 25-hydroxyvitamin D levels are less than about 10 ng/mL. Osteomalacia can progress to osteoporosis, a condition of reduced bone mineral density with increased bone fragility and risk of bone fractures. Osteoporosis can be a long-term effect of calcium and/or vitamin D insufficiency, the latter contributing by reducing calcium absorption. In the absence of confirmed vitamin D deficiency, there is no evidence that vitamin D supplementation without concomitant calcium slows or stops the progression of osteomalacia to osteoporosis. For older people with osteoporosis, taking vitamin D with calcium may help prevent hip fractures, but it also slightly increases the risk of stomach and kidney problems. The reduced risk of fractures is not seen in healthier, community-dwelling elderly. Low serum vitamin D levels have been associated with falls, but taking extra vitamin D does not appear to reduce that risk. Athletes who are vitamin D deficient are at an increased risk of stress fractures and/or major breaks, particularly those engaging in contact sports. Incremental decreases in risk are observed with rising serum 25(OH)D concentrations, plateauing at 50 ng/mL, with no additional benefits seen at levels beyond this point.
Cancer While low serum 25-hydroxyvitamin D status has been associated with a higher risk of cancer in observational studies, the general conclusion is that there is insufficient evidence for an effect of vitamin D supplementation on the risk of cancer, although there is some evidence for a reduction in cancer mortality. Cardiovascular disease Vitamin D supplementation is not associated with a reduced risk of stroke, cerebrovascular disease, myocardial infarction, or ischemic heart disease. Supplementation does not lower blood pressure in the general population. One meta-analysis found a small increase in the risk of stroke when calcium and vitamin D supplements were taken together. Immune system Vitamin D receptors are found in cell types involved in immunity, although their functions there are not fully understood. Some autoimmune and infectious diseases are associated with vitamin D deficiency, but for most there is either no evidence that supplementation has a benefit, or evidence indicating that it has none. Autoimmune diseases Low plasma vitamin D concentrations have been reported for autoimmune thyroid diseases, lupus, myasthenia gravis, rheumatoid arthritis, and multiple sclerosis. For multiple sclerosis and rheumatoid arthritis, intervention trials using vitamin D supplementation did not demonstrate therapeutic effects. Infectious diseases In general, vitamin D functions to activate the innate and dampen the adaptive immune systems with antibacterial, antiviral and anti-inflammatory effects. Low serum levels of vitamin D appear to be a risk factor for tuberculosis. However, supplementation trials showed no benefit. Vitamin D supplementation at low doses may slightly decrease the overall risk of acute respiratory tract infections. The benefits were found in children and adolescents, and were not confirmed with higher doses. Inflammatory bowel disease Vitamin D deficiency has been linked to the severity of inflammatory bowel disease (IBD). However, whether vitamin D deficiency causes IBD or is a consequence of the disease is not clear. Supplementation leads to improvements in scores for clinical inflammatory bowel disease activity and biochemical markers, and less frequent relapse of symptoms in IBD. Asthma Vitamin D supplementation does not help prevent asthma attacks or alleviate symptoms. COVID-19 In July 2020, the US National Institutes of Health stated "There is insufficient evidence to recommend for or against using vitamin D supplementation for the prevention or treatment of COVID-19." In the same year, the position of the UK National Institute for Health and Care Excellence (NICE) was not to recommend offering a vitamin D supplement to people solely to prevent or treat COVID-19. NICE updated its position in 2022 to "Do not use vitamin D to treat COVID-19 except as part of a clinical trial." Both organizations included recommendations to continue the previously established recommendations on vitamin D supplementation for other reasons, such as bone and muscle health, as applicable. Both organizations noted that more people may require supplementation due to lower amounts of sun exposure during the pandemic. Vitamin D deficiency and insufficiency have been associated with adverse outcomes in COVID-19. Supplementation trials, mostly of a large, single, oral dose given upon hospital admission, reported lower subsequent transfers to intensive care and lower all-cause mortality.
Other diseases and conditions Chronic obstructive pulmonary disease Vitamin D supplementation substantially reduced the rate of moderate or severe exacerbations of chronic obstructive pulmonary disease (COPD). Diabetes A meta-analysis reported that vitamin D supplementation significantly reduced the risk of type 2 diabetes for non-obese people with prediabetes. Another meta-analysis reported that vitamin D supplementation significantly improved glycemic control [homeostatic model assessment-insulin resistance (HOMA-IR)], hemoglobin A1C (HbA1C), and fasting blood glucose (FBG) in individuals with type 2 diabetes. In prospective studies, high versus low levels of vitamin D were respectively associated with a significant decrease in risk of type 2 diabetes, combined type 2 diabetes and prediabetes, and prediabetes. A systematic review included one clinical trial that showed vitamin D supplementation together with insulin maintained levels of fasting C-peptide after 12 months better than insulin alone. Attention deficit hyperactivity disorder (ADHD) A meta-analysis of observational studies showed that children with ADHD have lower vitamin D levels, and that there was a small association between low vitamin D levels at the time of birth and later development of ADHD. Several small, randomized controlled trials of vitamin D supplementation indicated improved ADHD symptoms such as impulsivity and hyperactivity. Depression Clinical trials of vitamin D supplementation for depressive symptoms have generally been of low quality and show no overall effect, although subgroup analysis showed supplementation for participants with clinically significant depressive symptoms or depressive disorder had a moderate effect. Cognition and dementia A systematic review of clinical studies found an association of low vitamin D levels with cognitive impairment and a higher risk of developing Alzheimer's disease. However, lower vitamin D concentrations are also associated with poor nutrition and spending less time outdoors. Therefore, alternative explanations for the increase in cognitive impairment exist, and hence a direct causal relationship between vitamin D levels and cognition could not be established. Schizophrenia People diagnosed with schizophrenia tend to have lower serum vitamin D concentrations compared to those without the condition. This may be a consequence of the disease rather than a cause, due, for example, to low dietary vitamin D and less time spent exposed to sunlight. Results from supplementation trials have been inconclusive. Sexual dysfunction Erectile dysfunction can be a consequence of vitamin D deficiency. Mechanisms may include regulation of vascular stiffness, the production of vasodilating nitric oxide, and the regulation of vessel permeability. However, the clinical trial literature does not yet contain sufficient evidence that supplementation treats the problem. Part of the complexity is that vitamin D deficiency is also linked to morbidities that are associated with erectile dysfunction, such as obesity, hypertension, diabetes mellitus, hypercholesterolemia, chronic kidney disease and hypogonadism. In women, vitamin D receptors are expressed in the superficial layers of the urogenital organs. There is an association between vitamin D deficiency and a decline in sexual functions, including sexual desire, orgasm, and satisfaction in women, with symptom severity correlated with vitamin D serum concentration.
The clinical trial literature does not yet contain sufficient evidence that supplementation reverses these dysfunctions or improves other aspects of vaginal or urogenital health. Pregnancy Pregnant women often do not take the recommended amount of vitamin D. Low levels of vitamin D in pregnancy are associated with gestational diabetes, pre-eclampsia, and small for gestational age infants. Although taking vitamin D supplements during pregnancy raises blood levels of vitamin D in the mother at term, the full extent of benefits for the mother or baby is unclear. Obesity Obesity increases the risk of having low serum vitamin D. Supplementation does not lead to weight loss, but weight loss increases serum vitamin D. The theory is that fatty tissue sequesters vitamin D. Bariatric surgery as a treatment for obesity can lead to vitamin deficiencies. Long-term follow-up reported deficiencies for vitamins D, E, A, K and B12, with D the most common at 36%. Uterine fibroids There is evidence that the pathogenesis of uterine fibroids is associated with low serum vitamin D and that supplementation reduces the size of fibroids. Allowed health claims Governmental regulatory agencies stipulate, for the food and dietary supplement industries, certain health claims that are allowable as statements on packaging.
Europe: European Food Safety Authority (EFSA) - normal function of the immune system; normal inflammatory response; normal muscle function; reduced risk of falling in people over age 60
US: Food and Drug Administration (FDA) - "Adequate calcium and vitamin D, as part of a well-balanced diet, along with physical activity, may reduce the risk of osteoporosis."
Canada: Health Canada - "Adequate calcium and regular exercise may help to achieve strong bones in children and adolescents and may reduce the risk of osteoporosis in older adults. An adequate intake of vitamin D is also necessary."
Japan: Foods with Nutrient Function Claims (FNFC) - "Vitamin D is a nutrient which promotes the absorption of calcium in the gut intestine and aids in the development of bone."
Dietary intake Recommended levels Various government institutions have proposed different recommendations for the amount of daily intake of vitamin D. These vary according to age, pregnancy or lactation, and the extent to which assumptions are made regarding skin synthesis. Older recommendations were lower. For example, the US Adequate Intake recommendations from 1997 were 200 IU/day for infants, children, adults to age 50 and women during pregnancy or lactation, 400 IU/day for ages 51–70 and 600 IU/day for 71 and older. Conversion: 1 μg (microgram) = 40 IU (international unit). For dietary recommendation and food labeling purposes, government agencies consider vitamin D3 and D2 bioequivalent. United Kingdom The UK National Health Service (NHS) recommends that people at risk of vitamin D deficiency, breast-fed babies, formula-fed babies taking less than 500 ml/day, and children aged 6 months to 4 years, should take daily vitamin D supplements throughout the year to ensure sufficient intake. This includes people with limited skin synthesis of vitamin D, who are not often outdoors, are frail, housebound, living in a care home, or usually wearing clothes that cover up most of the skin, or with dark skin, such as having an African, African-Caribbean or south Asian background. Other people may be able to make adequate vitamin D from sunlight exposure from April to September.
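Because intakes in this and the following sections are quoted both in micrograms and International Units, a minimal sketch of the conversion stated above (1 μg = 40 IU) follows; the example values are taken from the surrounding text.

IU_PER_MICROGRAM = 40  # 1 microgram of vitamin D = 40 IU, per the conversion above

def micrograms_to_iu(micrograms):
    return micrograms * IU_PER_MICROGRAM

def iu_to_micrograms(iu):
    return iu / IU_PER_MICROGRAM

print(micrograms_to_iu(10))   # 400 IU, the UK daily supplement recommendation
print(micrograms_to_iu(100))  # 4000 IU, the widely agreed adult tolerable upper level
print(iu_to_micrograms(800))  # 20 micrograms, the revised US daily value for labeling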
The NHS and Public Health England recommend that everyone, including those who are pregnant and breastfeeding, consider taking a daily supplement containing 10 μg (400 IU) of vitamin D during autumn and winter because of inadequate sunlight for vitamin D synthesis. United States The dietary reference intake for vitamin D issued in 2011 by the Institute of Medicine (IoM) (renamed National Academy of Medicine in 2015) superseded previous recommendations, which were expressed in terms of adequate intake. The recommendations were formed assuming the individual has no skin synthesis of vitamin D because of inadequate sun exposure. The reference intake for vitamin D refers to total intake from food, beverages, and supplements, and assumes that calcium requirements are being met. The tolerable upper intake level (UL) is defined as "the highest average daily intake of a nutrient that is likely to pose no risk of adverse health effects for nearly all persons in the general population." Although ULs are believed to be safe, information on the long-term effects is incomplete and these levels of intake are not recommended for long-term consumption. For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For vitamin D labeling purposes, 100% of the daily value was 400 IU (10 μg), but in May 2016, it was revised to 800 IU (20 μg) to bring it into agreement with the recommended dietary allowance (RDA). A table of the old and new adult daily values is provided at Reference Daily Intake. Canada Health Canada published dietary reference intakes (DRIs) and tolerable upper intake levels (ULs) for vitamin D. Australia and New Zealand Australia and New Zealand published nutrient reference values, including guidelines for dietary vitamin D intake, in 2006. About a third of Australians have vitamin D deficiency. European Union The European Food Safety Authority (EFSA) in 2016 reviewed the current evidence, finding the relationship between serum 25(OH)D concentration and musculoskeletal health outcomes is widely variable. They considered that average requirements and population reference intake values for vitamin D cannot be derived and that a serum 25(OH)D concentration of 50 nmol/L was a suitable target value. For all people over the age of 1, including women who are pregnant or lactating, they set an adequate intake of 15 μg/day (600 IU). On the other hand, for nutrition labelling of foodstuffs, the EU Commission set the recommended daily allowance (RDA) for vitamin D at 5 µg/day (200 IU) as 100%. The EFSA reviewed safe levels of intake in 2012, setting the tolerable upper limit for adults at 100 μg/day (4000 IU), a conclusion similar to that of the IOM. The Swedish National Food Agency recommends a daily intake of 10 μg (400 IU) of vitamin D3 for children and adults up to 75 years, and 20 μg (800 IU) for adults 75 and older. Non-government organisations in Europe have made their own recommendations. The German Society for Nutrition recommends 20 μg per day. The European Menopause and Andropause Society recommends that postmenopausal women consume 15 μg (600 IU) until age 70, and 20 μg (800 IU) from age 71. This dose should be increased to 100 μg (4,000 IU) in some patients with very low vitamin D status or in case of co-morbid conditions. Food sources Few foods naturally contain vitamin D. Cod liver oil as a dietary supplement contains 450 IU/teaspoon. Fatty fish (but not lean fish such as tuna) are the best natural food sources of vitamin D3.
Beef liver, eggs and cheese have modest amounts. Mushrooms provide variable amounts of vitamin D2, as mushrooms can be treated with UV light to greatly increase their content. In certain countries, breakfast cereals, dairy milk and plant milk products are fortified. Infant formulas are fortified with 400 to 1000 IU per liter, a liter being a normal daily volume for a full-term infant after the first month. Cooking only minimally decreases vitamin content. Fortification In the early 1930s, the United States and countries in northern Europe began to fortify milk with vitamin D in an effort to eradicate rickets. This, plus medical advice to expose infants to sunlight, effectively ended the high prevalence of rickets. The proven health benefit of vitamin D led to fortification of many foods, even foods as inappropriate as hot dogs and beer. In the 1950s, due to some highly publicized cases of hypercalcemia and birth defects, vitamin D fortification became regulated, and in some countries discontinued. As of 2024, governments have established mandated or voluntary food fortification programs to combat deficiency in, respectively, 15 and 10 countries. Depending on the country, manufactured foods fortified with either vitamin D2 or D3 may include dairy milk and other dairy foods, fruit juices and fruit juice drinks, meal replacement food bars, soy protein-based beverages, wheat flour or corn meal products, infant formulas, breakfast cereals and 'plant milks', the last described as beverages made from soy, almond, rice, oats and other plant sources intended as alternatives to dairy milk. Biosynthesis Synthesis of vitamin D in nature is dependent on the presence of UV radiation and subsequent activation in the liver and in the kidneys. Many animals synthesize vitamin D3 from 7-dehydrocholesterol, and many fungi synthesize vitamin D2 from ergosterol. Vitamin D3 is produced photochemically from 7-dehydrocholesterol in the skin of most vertebrate animals, including humans. The skin consists of two primary layers: the inner layer called the dermis, and the outer, thinner epidermis. Vitamin D is produced in the keratinocytes of the two innermost strata of the epidermis, the stratum basale and stratum spinosum, which also are able to produce calcitriol and express the vitamin D receptor. The 7-dehydrocholesterol reacts with UVB light at wavelengths of 290–315 nm. These wavelengths are present in sunlight, as well as in the light emitted by the UV lamps in tanning beds (which produce ultraviolet primarily in the UVA spectrum, but typically produce 4% to 10% of the total UV emissions as UVB). Exposure to light through windows is insufficient because glass almost completely blocks UVB light. In skin, either permanently in dark skin or temporarily due to tanning, melanin is located in the stratum basale, where it blocks UVB light and thus inhibits vitamin D synthesis. The transformation in the skin that converts 7-dehydrocholesterol to vitamin D3 occurs in two steps. First, 7-dehydrocholesterol is photolyzed by ultraviolet light in a 6-electron conrotatory ring-opening electrocyclic reaction; the product is previtamin D3. Second, previtamin D3 spontaneously isomerizes to vitamin D3 (cholecalciferol) via a [1,7]-sigmatropic hydrogen shift. In fungi, the conversion from ergosterol to vitamin D2 follows a similar procedure, forming previtamin D2 by UVB photolysis, which isomerizes to vitamin D2 (ergocalciferol).
Evolution For at least 1.2 billion years, eukaryotes - a classification of life forms that includes single-cell species, fungi, plants and animals, but not bacteria - have been able to synthesize 7-dehydrocholesterol. When this molecule is exposed to UVB light from the sun, it absorbs the energy in the process of being converted to vitamin D. The function was to prevent DNA damage, the vitamin molecule at this time being an end product without function. Today, phytoplankton in the ocean photosynthesize vitamin D without any calcium management function, as do some species of algae, lichens, fungi and plants. Only circa 500 million years ago, when animals began to leave the oceans for land, did the vitamin molecule take on a hormone function as a promoter of calcium regulation. This function required development of a nuclear vitamin D receptor (VDR) that binds the biologically active vitamin D metabolite 1α,25-dihydroxyvitamin D3, plasma transport proteins and vitamin D metabolizing CYP450 enzymes regulated by calciotropic hormones. The triumvirate of receptor protein, transport proteins and metabolizing enzymes is found only in vertebrates. The initial function evolved for control of metabolic genes supporting innate and adaptive immunity. Only later did the VDR system start to function as an important regulator of calcium supply for a calcified skeleton in land-based vertebrates. From amphibians onward, bone management is biodynamic, with bone functioning as an internal calcium reservoir under the control of osteoclasts via the combined action of parathyroid hormone and 1α,25-dihydroxyvitamin D3. Thus, the vitamin D story started with an inert molecule that gained an essential role in calcium and bone homeostasis in terrestrial animals coping with the challenges of higher gravity and a calcium-poor environment. Most herbivores produce vitamin D in response to sunlight. Llamas and alpacas removed from their natural high-altitude, intense solar radiation environments are susceptible to vitamin D deficiency at low altitudes. Domestic dogs and cats are practically incapable of vitamin D synthesis due to high activity of 7-dehydrocholesterol reductase, which converts any 7-dehydrocholesterol in the skin to cholesterol before it can be modified by UVB light; they instead get vitamin D from the diet. Human evolution During the long period between one and three million years ago, hominids, including ancestors of Homo sapiens, underwent several evolutionary changes. A long-term climate shift toward drier conditions promoted a change from sedentary forest-dwelling with a primarily plant-based diet toward upright walking and running on open terrain and more meat consumption. One consequence of the shift to a culture that included more physically active hunting was a need for evaporative cooling from sweat, which, to be functional, meant an evolutionary shift toward less body hair, as evaporation from sweat-wet hair would have cooled the hair but not the skin underneath. A second consequence was darker skin. The early humans who evolved in the regions of the globe near the equator had permanently large quantities of the skin pigment melanin in their skin, resulting in brown or black skin tones. For people with a light skin tone, exposure to UV radiation induces synthesis of melanin, causing the skin to darken, i.e., sun tanning. Either way, the pigment is able to provide protection by dissipating up to 99.9% of absorbed UV radiation.
In this way, melanin protects skin cells from UVA and UVB radiation damage that causes photoaging and the risk of malignant melanoma, a cancer of melanin-producing cells. Melanin also protects against photodegradation of the vitamin folate in skin tissue, and in the eyes, preserves eye health. The dark-skinned humans who had evolved in Africa populated the rest of the world through migration some 50,000 to 80,000 years ago. Following settlement in northern regions of Asia and Europe, which seasonally get less sunlight, the selective pressure for radiation-protective skin tone decreased while the need for efficient vitamin D synthesis in skin increased, resulting in low-melanin, lighter skin tones in the rest of the prehistoric world. For people with low skin melanin, moderate sun exposure to the face, arms and lower legs several times a week is sufficient. However, recent cultural changes such as indoor living and working, use of UV-blocking skin products to reduce the risk of sunburn, and emigration of dark-skinned people to countries far from the equator have all contributed to an increased incidence of vitamin D insufficiency and deficiency that needs to be addressed by food fortification and vitamin D dietary supplements. Industrial synthesis Vitamin D3 (cholecalciferol) is produced industrially by exposing 7-dehydrocholesterol to UVB and UVC light, followed by purification. The 7-dehydrocholesterol is sourced as an extraction from lanolin, a waxy skin secretion in sheep's wool. Vitamin D2 (ergocalciferol) is produced in a similar way using ergosterol from yeast as a starting material. Metabolism Activation Whether synthesized in the skin or ingested, vitamin D is hydroxylated in the liver at position 25 (upper right of the molecule) to form the prohormone calcifediol, also referred to as 25(OH)D. This reaction is catalyzed by the microsomal enzyme vitamin D 25-hydroxylase, the product of the CYP2R1 human gene. Once made, the product is released into the blood where it is bound to vitamin D-binding protein. Calcifediol is transported to the proximal tubules of the kidneys, where it is hydroxylated at the 1-α position (lower right of the molecule) to form calcitriol (1,25-dihydroxycholecalciferol, also referred to as 1,25(OH)2D). The conversion of calcifediol to calcitriol is catalyzed by the enzyme 25-hydroxyvitamin D3 1-alpha-hydroxylase, which is the product of the CYP27B1 human gene. The activity of CYP27B1 is increased by parathyroid hormone and also by low plasma calcium or phosphate. Following the final converting step in the kidney, calcitriol is released into the circulation. By binding to vitamin D-binding protein, calcitriol is transported throughout the body. In addition to the kidneys, calcitriol is also synthesized by certain other cells, including monocyte-macrophages in the immune system. When synthesized by monocyte-macrophages, calcitriol acts locally as a cytokine, modulating body defenses against microbial invaders by stimulating the innate immune system. Deactivation The bioactivity of calcitriol is terminated by hydroxylation at position 24 by vitamin D3 24-hydroxylase, coded for by the gene CYP24A1, forming calcitetrol. Further metabolism yields calcitroic acid, an inactive water-soluble compound that is excreted in bile. Vitamin D2 (ergocalciferol) and vitamin D3 (cholecalciferol) share a similar but not identical mechanism of action.
Metabolites produced by vitamin D2 are named with an er- or ergo- prefix to differentiate them from the D3-based counterparts (sometimes with a chole- prefix). Metabolites produced from vitamin D2 tend to bind less well to the vitamin D-binding protein. Vitamin D3 can alternatively be hydroxylated to calcifediol by sterol 27-hydroxylase, an enzyme coded for by the gene CYP27A1, but vitamin D2 cannot. Ergocalciferol can be directly hydroxylated at position 24 by the enzyme coded for by CYP27A1. This hydroxylation also leads to a greater degree of inactivation: the activity of calcitriol decreases to 60% of original after 24-hydroxylation, whereas ercalcitriol undergoes a 10-fold decrease in activity on conversion to ercalcitetrol. Mechanism of action Calcitriol exerts its effects primarily by binding to the vitamin D receptor (VDR), which leads to the upregulation of gene transcription. In the absence of calcitriol, the VDR is mainly located in the cytoplasm of cells. Calcitriol enters cells and binds to the VDR, which forms a complex with its coreceptor RXR; the activated VDR/RXR complex is then translocated into the nucleus. The VDR/RXR complex subsequently binds to vitamin D response elements (VDREs), which are specific DNA sequences adjacent to genes, with their number estimated to be in the thousands. The VDR/RXR/DNA complex recruits other proteins that transcribe the downstream gene into mRNA, which in turn is translated into protein, causing a change in cell function. Some effects of vitamin D occur too rapidly to be explained by its influence on gene transcription. For example, calcitriol triggers rapid calcium uptake (within 1–10 minutes) by a variety of cells. These non-genomic actions may involve membrane-bound receptors like PDIA3. Genes regulated by the vitamin D receptor influence a wide range of physiological processes beyond just calcium homeostasis and bone metabolism. They play a significant role in immune function, cellular signaling, and even blood coagulation, demonstrating the broad impact of vitamin D-regulated genes on human physiology. Examples of these genes are outlined below. Vitamin D receptor-regulated genes involved in vitamin D metabolism are CYP27B1, which encodes the enzyme that produces active vitamin D, and CYP24A1, which encodes the enzyme responsible for degrading active vitamin D. In the area of calcium homeostasis and bone metabolism, several genes are regulated by vitamin D. These include TNFSF11 (RANKL), crucial for bone metabolism; SPP1 (Osteopontin), which is important for bone metabolism; and BGLAP (Osteocalcin), which is involved in bone mineralization. Additional genes include TRPV6, a calcium channel critical for intestinal calcium absorption; S100G (Calbindin-D9k), a calcium-binding protein that facilitates calcium translocation in enterocytes; ATP2B1 (PMCA1b), a plasma membrane calcium ATPase involved in calcium extrusion from the cell; and the S100A family of genes, which encode calcium-binding proteins involved in various cellular processes. Finally, vitamin D is integral to parathyroid hormone (PTH) regulation, exerting control over the PTH gene in a negative feedback loop to maintain calcium balance. Vitamin D also plays a role in immune function, influencing genes such as CAMP (Cathelicidin Antimicrobial Peptide), which is involved in innate immune responses; CD14, which participates in innate immune responses; and HLA class II genes, which are important for adaptive immune function.
Cytokines such as IL2 and IL12, crucial for T cell responses, are also regulated by vitamin D. In the domain of blood coagulation, vitamin D regulates the expression of THBD (Thrombomodulin), a key gene involved in the coagulation process. Vitamin D also affects genes involved in cell differentiation and proliferation, including p21 and p27, which regulate the cell cycle, as well as transcription factors such as c-fos and c-myc, which are involved in cell proliferation. History In northern European countries, cod liver oil had a long history of folklore medical uses, including being applied to the skin and taken orally as a treatment for rheumatism and gout. There were several extraction processes. Fresh livers cut to pieces and suspended on screens over pans of boiling water would drip oil that could be skimmed off the water, yielding a pale oil with a mild fish odor and flavor. For industrial purposes such as a lubricant, cod livers were placed in barrels to rot, with the oil skimmed off over months. The resulting oil was light to dark brown, and exceedingly foul smelling and tasting. In the 1800s, cod liver oil became popular as a bottled medicinal product for oral consumption - a teaspoon a day - with both pale and brown oils used. The trigger for the surge in oral use was the observation made in several European countries in the 1820s that young children fed cod liver oil did not develop rickets. Thus, the concept that a food could prevent a disease predated by 100 years the identification of a substance in the food that was responsible. In northern Europe and the United States, the practice of giving children cod liver oil to prevent rickets persisted well into the 1950s. This overlapped with the fortification of cow's milk with vitamin D, which began in the early 1930s. Vitamin D was identified and named in 1922. In 1914, American researchers Elmer McCollum and Marguerite Davis discovered a substance in cod liver oil which later was named "vitamin A". Edward Mellanby, a British researcher, observed that dogs that were fed cod liver oil did not develop rickets, and (wrongly) concluded that vitamin A could prevent the disease. In 1922, McCollum tested modified cod liver oil in which the vitamin A had been destroyed. The modified oil cured the sick dogs, so McCollum concluded that the factor in cod liver oil which cured rickets was distinct from vitamin A. He called it vitamin D because it was the fourth vitamin to be named. In 1925, it was established that when 7-dehydrocholesterol is irradiated with light, a form of a fat-soluble substance is produced, now known as vitamin D3. Adolf Windaus, at the University of Göttingen in Germany, received the Nobel Prize in Chemistry in 1928 "...for the services rendered through his research into the constitution of the sterols and their connection with the vitamins." Alfred Fabian Hess, his research associate, stated: "Light equals vitamin D." In 1932, Otto Rosenheim and Harold King published a paper putting forward structures for sterols and bile acids, and soon thereafter collaborated with Kenneth Callow and others on the isolation and characterization of vitamin D. Windaus further clarified the chemical structure of vitamin D. In 1969, a specific binding protein for vitamin D called the vitamin D receptor was identified. Shortly thereafter, the conversion of vitamin D to calcifediol and then to calcitriol, the biologically active form, was confirmed.
The photosynthesis of vitamin D3 in skin via previtamin D3 and its subsequent metabolism was described in 1980.
Biology and health sciences
Vitamins
Health
24998792
https://en.wikipedia.org/wiki/Debugging
Debugging
In engineering, debugging is the process of finding the root cause, workarounds and possible fixes for bugs. For software, debugging tactics can involve interactive debugging, control flow analysis, log file analysis, monitoring at the application or system level, memory dumps, and profiling. Many programming languages and software development tools also offer programs to aid in debugging, known as debuggers. Etymology The term bug, in the sense of defect, dates back at least to 1878, when Thomas Edison described "little faults and difficulties" in his inventions as "Bugs". A popular story from the 1940s concerns Admiral Grace Hopper. While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay that impeded operation and wrote in a log book "First actual case of a bug being found". Although probably a joke conflating the two meanings of bug (biological and defect), the story indicates that the term was used in the computer field at that time. Similarly, the term debugging was used in aeronautics before entering the world of computers. J. Robert Oppenheimer, director of the WWII atomic bomb Manhattan Project at Los Alamos, used the term in a letter to Dr. Ernest Lawrence at UC Berkeley, dated October 27, 1944, regarding the recruitment of additional technical staff. The Oxford English Dictionary entry for debug cites a use of the term debugging in reference to airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society. An article in "Airforce" (June 1945 p. 50) refers to debugging aircraft cameras. The seminal article by Gill in 1951 is the earliest in-depth discussion of programming errors, but it does not use the term bug or debugging. In the ACM's digital library, the term debugging is first used in three papers from 1952 ACM National Meetings. Two of the three use the term in quotation marks. By 1963 debugging was a common-enough term to be mentioned in passing without explanation on page 1 of the CTSS manual. Scope As software and electronic systems have become generally more complex, the various common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system. The words "anomaly" and "discrepancy" can be used, as being more neutral terms, to avoid the words "error" and "defect" or "bug" where there might be an implication that all so-called errors, defects or bugs must be fixed (at all costs). Instead, an impact assessment can be made to determine whether changes to remove an anomaly (or discrepancy) would be cost-effective for the system, or perhaps a scheduled new release might render the change unnecessary. Not all issues are safety-critical or mission-critical in a system. Also, it is important to avoid the situation where a change might be more upsetting to users, long-term, than living with the known problem (where the "cure would be worse than the disease"). Basing decisions on the acceptability of some anomalies can avoid a culture of a "zero-defects" mandate, where people might be tempted to deny the existence of problems so that the result would appear as zero defects. When collateral issues such as the cost-versus-benefit impact assessment are considered, broader debugging techniques expand to determine the frequency of anomalies (how often the same "bugs" occur) to help assess their impact on the overall system.
Tools Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection, analysis, and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the complexity of the system, and also depends, to some extent, on the programming language(s) used and the available tools, such as debuggers. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory. The term debugger can also refer to the person who is doing the debugging. Generally, high-level programming languages, such as Java, make debugging easier, because they have features such as exception handling and type checking that make real sources of erratic behaviour easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is often difficult to see where the initial problem happened. In those cases, memory debugger tools may be needed. In certain situations, general-purpose software tools that are language-specific in nature can be very useful. These take the form of static code analysis tools. These tools look for a very specific set of known problems, some common and some rare, within the source code, concentrating more on the semantics (e.g. data flow) than on the syntax, as compilers and interpreters do. Both commercial and free tools exist for various languages; some claim to be able to detect hundreds of different problems. These tools can be extremely useful when checking very large source trees, where it is impractical to do code walk-throughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value. As another example, some such tools perform strong type checking when the language does not require it. Thus, they are better at locating likely errors in code that is syntactically correct. But these tools have a reputation for false positives, where correct code is flagged as dubious. The old Unix lint program is an early example. For debugging electronic hardware (e.g., computer hardware) as well as low-level software (e.g., BIOSes, device drivers) and firmware, instruments such as oscilloscopes, logic analyzers, or in-circuit emulators (ICEs) are often used, alone or in combination. An ICE may perform many of the typical software debugger's tasks on low-level software and firmware. Debugging process The debugging process normally begins with identifying the steps to reproduce the problem. This can be a non-trivial task, particularly with parallel processes and some Heisenbugs, for example. The specific user environment and usage history can also make it difficult to reproduce the problem. After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, a bug in a compiler can make it crash when parsing a large source file. However, after simplification of the test case, only a few lines from the original source file may be sufficient to reproduce the same crash. Simplification may be done manually using a divide-and-conquer approach, in which the programmer attempts to remove some parts of the original test case and then checks if the problem still occurs, as in the sketch below.
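A rough illustration of that divide-and-conquer idea (not any particular tool's algorithm) is sketched below in Python; reproduces_crash is a hypothetical predicate that runs the program under test on a candidate input and reports whether the failure still occurs.

# Minimal sketch of divide-and-conquer test case simplification.
# `reproduces_crash` is a hypothetical user-supplied predicate: it runs the
# program under test on the given lines and returns True if the bug still
# occurs. This is an illustration, not a full delta-debugging (ddmin) algorithm.
def simplify(lines, reproduces_crash):
    """Return a smaller list of lines that still reproduces the crash."""
    changed = True
    while changed and len(lines) > 1:
        changed = False
        half = len(lines) // 2
        for candidate in (lines[:half], lines[half:]):
            if reproduces_crash(candidate):
                lines = candidate  # keep the smaller failing input and retry
                changed = True
                break
    return lines

# Toy usage: the "crash" happens whenever the line "BUG" is present.
source = ["a", "b", "BUG", "c", "d", "e", "f", "g"]
print(simplify(source, lambda ls: "BUG" in ls))  # -> ['BUG']

A full delta debugging implementation also tries removing smaller chunks and their complements, but the halving loop above captures the basic shrink-and-recheck cycle described here.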
When debugging in a GUI, the programmer can try skipping some user interaction from the original problem description to check if the remaining actions are sufficient for causing the bug to occur. After the test case is sufficiently simplified, a programmer can use a debugger tool to examine program states (values of variables, plus the call stack) and track down the origin of the problem. Alternatively, tracing can be used. In simple cases, tracing is just a few print statements which output the values of variables at particular points during the execution of the program. Techniques Interactive debugging uses debugger tools which allow a program's execution to be processed one step at a time and to be paused to inspect or alter its state. Subroutines or function calls may typically be executed at full speed and paused again upon return to their caller, or themselves single stepped, or any mixture of these options. Breakpoints may be installed which permit full-speed execution of code that is not suspected to be faulty, and then stop at a point that is. Putting a breakpoint immediately after the end of a program loop is a convenient way to evaluate repeating code. Watchpoints are commonly available, where execution can proceed until a particular variable changes, and catchpoints, which cause the debugger to stop for certain kinds of program events, such as exceptions or the loading of a shared library. Print debugging or tracing is the act of watching (live or recorded) trace statements, or print statements, that indicate the flow of execution of a process and the data progression. Tracing can be done with specialized tools (like with GDB's trace) or by insertion of trace statements into the source code. The latter is sometimes called printf debugging, due to the use of the printf function in C. This kind of debugging was turned on by the command TRON in the original versions of the novice-oriented BASIC programming language. TRON stood for "Trace On". TRON caused the line numbers of each BASIC command line to print as the program ran. Activity tracing is like tracing (above), but rather than following program execution one instruction or function at a time, it follows program activity based on the overall amount of time spent by the processor/CPU executing particular segments of code. This is typically presented as a fraction of the program's execution time spent processing instructions within defined memory addresses (machine code programs) or certain program modules (high-level language or compiled programs). If the program being debugged is shown to be spending an inordinate fraction of its execution time within traced areas, this could indicate misallocation of processor time caused by faulty program logic, or at least inefficient allocation of processor time that could benefit from optimization efforts. Remote debugging is the process of debugging a program running on a system different from the debugger. To start remote debugging, a debugger connects to a remote system over a communications link such as a local area network. The debugger can then control the execution of the program on the remote system and retrieve information about its state. Post-mortem debugging is debugging of the program after it has already crashed. Related techniques often include various tracing techniques like examining log files, outputting a call stack on the crash, and analysis of a memory dump (or core dump) of the crashed process. The dump of the process could be obtained automatically by the system (for example, when the process has terminated due to an unhandled exception), or by a programmer-inserted instruction, or manually by the interactive user.
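As a concrete illustration of the print-style tracing and interactive breakpoints described above, the short Python sketch below wraps a function so that every call and return is logged. Python is used here only for brevity; the traced decorator and the divide function are invented for this example, and the comment notes where Python's built-in breakpoint() call would instead drop into the interactive pdb debugger.

    import functools

    def traced(func):
        """Print-style tracing: log the arguments and result of every call."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print(f"TRACE enter {func.__name__} args={args} kwargs={kwargs}")
            result = func(*args, **kwargs)
            print(f"TRACE leave {func.__name__} result={result!r}")
            return result
        return wrapper

    @traced
    def divide(a, b):
        # Calling breakpoint() here would pause execution and open the
        # interactive pdb debugger, allowing variables to be inspected
        # and the code to be single-stepped.
        return a / b

    divide(10, 2)
    divide(1, 3)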
"Wolf fence" algorithm: Edward Gauss described this simple but very useful and now famous algorithm in a 1982 article for Communications of the ACM as follows: "There's one wolf in Alaska; how do you find it? First build a fence down the middle of the state, wait for the wolf to howl, determine which side of the fence it is on. Repeat process on that side only, until you get to the point where you can see the wolf." This is implemented e.g. in the Git version control system as the command git bisect, which uses the above algorithm to determine which commit introduced a particular bug. Record and replay debugging is the technique of creating a program execution recording (e.g. using Mozilla's free rr debugging tool, enabling reversible debugging/execution), which can be replayed and interactively debugged. It is useful for remote debugging and for debugging intermittent, non-deterministic, and other hard-to-reproduce defects. Time travel debugging is the process of stepping back in time through source code (e.g. using Undo LiveRecorder) to understand what is happening during execution of a computer program, to allow users to interact with the program, to change the history if desired, and to watch how the program responds. Delta Debugging is a technique for automating test case simplification. The Saff Squeeze is a technique for isolating a failure within a test by progressively inlining parts of the failing test. Causality tracking: there are techniques to track the cause-effect chains in the computation. Those techniques can be tailored for specific bugs, such as null pointer dereferences.
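The halving strategy behind the "wolf fence" algorithm and git bisect can be expressed as an ordinary binary search over an ordered list of revisions. The Python sketch below is an illustration only, not the implementation used by Git; is_bad stands in for a hypothetical command that builds a given revision and tests for the bug, and the search assumes the oldest revision is good, the newest is bad, and the bug never disappears once introduced.

    def first_bad_revision(revisions, is_bad):
        """Binary search for the earliest revision in which the bug appears."""
        low, high = 0, len(revisions) - 1      # known good .. known bad
        while low < high:
            mid = (low + high) // 2            # build the "fence" in the middle
            if is_bad(revisions[mid]):
                high = mid                     # bug already present: search the older half
            else:
                low = mid + 1                  # still good here: search the newer half
        return revisions[low]

Each test halves the number of candidate revisions, so even a history of a few thousand commits needs only about a dozen builds.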
Automatic bug fixing Debugging for embedded systems In contrast to the general-purpose computer software design environment, a primary characteristic of embedded environments is the sheer number of different platforms available to the developers (CPU architectures, vendors, operating systems, and their variants). Embedded systems are, by definition, not general-purpose designs: they are typically developed for a single task (or small range of tasks), and the platform is chosen specifically to optimize that application. Not only does this fact make life tough for embedded system developers, it also makes debugging and testing of these systems harder, since different debugging tools are needed for different platforms. Despite the challenge of heterogeneity mentioned above, some debuggers have been developed commercially, as have research prototypes. Examples of commercial solutions come from Green Hills Software, Lauterbach GmbH and Microchip's MPLAB-ICD (for in-circuit debugger). Two examples of research prototype tools are Aveksha and Flocklab. They all leverage functionality available on low-cost embedded processors, an On-Chip Debug Module (OCDM), whose signals are exposed through a standard JTAG interface. They are benchmarked based on how much change to the application is needed and the rate of events that they can keep up with. In addition to the typical task of identifying bugs in the system, embedded system debugging also seeks to collect information about the operating states of the system that may then be used to analyze the system: to find ways to boost its performance or to optimize other important characteristics (e.g. energy consumption, reliability, real-time response, etc.). Anti-debugging Anti-debugging is "the implementation of one or more techniques within computer code that hinders attempts at reverse engineering or debugging a target process". It is actively used by recognized publishers in copy-protection schemes, but is also used by malware to complicate its detection and elimination. Techniques used in anti-debugging include: API-based: check for the existence of a debugger using system information. Exception-based: check to see if exceptions are interfered with. Process and thread blocks: check whether process and thread blocks have been manipulated. Modified code: check for code modifications made by a debugger handling software breakpoints. Hardware- and register-based: check for hardware breakpoints and CPU registers. Timing and latency: check the time taken for the execution of instructions. Detecting and penalizing the debugger. An early example of anti-debugging existed in early versions of Microsoft Word which, if a debugger was detected, produced a message that said, "The tree of evil bears bitter fruit. Now trashing program disk.", after which it caused the floppy disk drive to emit alarming noises with the intent of scaring the user away from attempting to debug it again.
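The API-based and timing checks listed above can be illustrated with a toy example. The Python sketch below is purely illustrative and far weaker than real anti-debugging code, which typically works against operating-system APIs, hardware breakpoints and CPU registers; the loop size and the 0.5-second threshold are arbitrary assumptions made for the example.

    import sys
    import time

    def debugger_suspected():
        """Toy versions of an API-based check and a timing check."""
        # API-based: Python debuggers such as pdb install a trace function,
        # which sys.gettrace() then reports.
        if sys.gettrace() is not None:
            return True
        # Timing and latency: single-stepping or heavy instrumentation makes
        # a trivial loop take far longer than it normally would.
        start = time.perf_counter()
        total = 0
        for i in range(100_000):
            total += i
        return time.perf_counter() - start > 0.5   # arbitrary threshold

    if debugger_suspected():
        print("Debugger suspected; a protected program might now change its behaviour.")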
Technology
Software development: General
null
24999087
https://en.wikipedia.org/wiki/Heat%20stroke
Heat stroke
Heat stroke or heatstroke, also known as sun-stroke, is a severe heat illness that results in a body temperature greater than , along with red skin, headache, dizziness, and confusion. Sweating is generally present in exertional heatstroke, but not in classic heatstroke. The start of heat stroke can be sudden or gradual. Heatstroke is a life-threatening condition due to the potential for multi-organ dysfunction, with typical complications including seizures, rhabdomyolysis, or kidney failure. Heat stroke occurs because of high external temperatures and/or physical exertion. It usually occurs under preventable prolonged exposure to extreme environmental or exertional heat. However, certain health conditions can increase the risk of heat stroke, and patients, especially children, with certain genetic predispositions are vulnerable to heatstroke under relatively mild conditions. Preventive measures include drinking sufficient fluids and avoiding excessive heat. Treatment is by rapid physical cooling of the body and supportive care. Recommended methods include spraying the person with water and using a fan, putting the person in ice water, or giving cold intravenous fluids. Adding ice packs around a person is beneficial but does not by itself achieve the fastest possible cooling. Heat stroke results in more than 600 deaths a year in the United States. Rates increased between 1995 and 2015. Purely exercise-induced heat stroke, though a medical emergency, tends to be self-limiting (the patient stops exercising from cramp or exhaustion) and fewer than 5% of cases are fatal. Non-exertional heatstroke is a much greater danger: even the healthiest person, if left in a heatstroke-inducing environment without medical attention, will continue to deteriorate to the point of death, and 65% of the most severe cases are fatal even with treatment. Signs and symptoms Heat stroke generally presents with a hyperthermia of greater than in combination with disorientation. There is generally a lack of sweating in classic heatstroke, while sweating is generally present in exertional heatstroke. Early symptoms of heat stroke include behavioral changes, confusion, delirium, dizziness, weakness, agitation, combativeness, slurred speech, nausea, and vomiting. In some individuals with exertional heatstroke, seizures and sphincter incontinence have also been reported. Additionally, in exertional heat stroke, the affected person may sweat excessively. Rhabdomyolysis, which is characterized by skeletal muscle breakdown with the products of muscle breakdown entering the bloodstream and causing organ dysfunction, is seen with exertional heatstroke. If treatment is delayed, patients could develop vital organ damage, unconsciousness and even organ failure. In the absence of prompt and adequate treatment, heatstroke can be fatal. Causes Heat stroke occurs when thermoregulation is overwhelmed by a combination of excessive metabolic production of heat (exertion), excessive heat in the physical environment, and insufficient or impaired heat loss, resulting in an abnormally high body temperature. Substances that inhibit cooling and cause dehydration such as alcohol, stimulants, medications, and age-related physiological changes predispose to so-called "classic" or non-exertional heat stroke (NEHS), most often in elderly and infirm individuals in summer situations with insufficient ventilation. 
Young children have age specific physiologic differences that make them more susceptible to heat stroke including an increased surface area to mass ratio (leading to increased environmental heat absorption), an underdeveloped thermoregulatory system, a decreased sweating rate and a decreased blood volume to body size ratio (leading to decreased compensatory heat dissipation by redirecting blood to the skin). Exertional heat stroke Exertional heat stroke (EHS) can happen in young people without health problems or medications most often in athletes, outdoor laborers, or military personnel engaged in strenuous hot-weather activity or in first responders wearing heavy personal protective equipment. In environments that are not only hot but also humid, it is important to recognize that humidity reduces the degree to which the body can cool itself by perspiration and evaporation. For humans and other warm-blooded animals, excessive body temperature can disrupt enzymes regulating biochemical reactions that are essential for cellular respiration and the functioning of major organs. Cars When the outside temperature is , the temperature inside a car parked in direct sunlight can quickly exceed . Young children or elderly adults left alone in a vehicle are at particular risk of succumbing to heat stroke. "Heat stroke in children and in the elderly can occur within minutes, even if a car window is opened slightly." As these groups of individuals may not be able to open car doors or to express discomfort verbally (or audibly, inside a closed car), their plight may not be immediately noticed by others in the vicinity. In 2018, 51 children in the United States died in hot cars, more than the previous high of 49 in 2010. Dogs are even more susceptible than humans to heat stroke in cars, as they cannot produce whole-body sweat to cool themselves. Leaving the dog at home with plenty of water on hot days is recommended instead, or, if a dog must be brought along, it can be tied up in the shade outside the destination and provided with a full water bowl. Pathophysiology The pathophysiology of heat stroke involves an intense heat overload followed by a failure of the body's thermoregulatory mechanisms. More specifically, heat stroke leads to inflammatory and coagulation responses that can damage the vascular endothelium and result in numerous platelet complications, including decreased platelet counts, platelet clumping, and suppressed platelet release from bone marrow. Growing evidence also suggests the existence of a second pathway underlying heat stroke that involves heat and exercise-driven endotoxemia. Although its exact mechanism is not yet fully understood, this model theorizes that extreme exercise and heat disrupt the intestinal barrier by making it more permeable and allowing lipopolysaccharides (LPS) from gram-negative bacteria within the gut to move into the circulatory system. High blood LPS levels can then trigger a systemic inflammatory response and eventually lead to sepsis and related consequences like blood coagulation, multi-organ failure, necrosis, and central nervous system dysfunction. Diagnosis Heat stroke is a clinical diagnosis, based on signs and symptoms. It is diagnosed based on an elevated core body temperature (usually above 40 degrees Celsius), a history of heat exposure or physical exertion, and neurologic dysfunction. 
However, high body temperature does not necessarily indicate that heat stroke is present, such as with people in high-performance endurance sports or with people experiencing fevers. In others with heatstroke, the core body temperature is not always above 40 degrees Celsius. Therefore, heat stroke is more accurately diagnosed based on a constellation of symptoms rather than just a specific temperature threshold. Tachycardia (or a rapid heart rate), tachypnea (rapid breathing) and hypotension (low blood pressure) are common clinical findings. Those with classic heat stroke usually have dry skin, whereas those with exertional heat stroke usually have wet or sweaty skin. A core body temperature (such as a rectal temperature) is the preferred method for monitoring body temperature in the diagnosis and management of heat stroke as it is more accurate than peripheral body temperatures (such as oral or axillary temperatures). Other conditions which may present similarly to heat stroke include meningitis, encephalitis, epilepsy, drug toxicity, severe dehydration, and certain metabolic syndromes such as serotonin syndrome, neuroleptic malignant syndrome, malignant hyperthermia and thyroid storm. Prevention The risk of heat stroke can be reduced by observing precautions to avoid overheating and dehydration. Light, loose-fitting clothes will allow perspiration to evaporate and cool the body. Wide-brimmed hats in light colors help prevent the sun from warming the head and neck. Vents on a hat will help cool the head, as will sweatbands wetted with cool water. Strenuous exercise should be avoided during hot weather, especially during peak sun hours. Strenuous exercise should also be avoided if a person is ill, and exercise intensity should match one's fitness level. Confined spaces (such as automobiles) without air-conditioning or adequate ventilation should be avoided. During heat waves and hot seasons, further measures that can be taken to avoid classic heat stroke include staying in air-conditioned areas, using fans, taking frequent cold showers, and increasing social contact and well-being checks (especially for elderly or disabled persons). In hot weather, people need to drink plenty of cool liquids and mineral salts to replace fluids lost from sweating. Thirst is not a reliable sign that a person needs fluids. A better indicator is the color of urine. A dark yellow color may indicate dehydration. Some measures that can help protect workers from heat stress include: Know signs/symptoms of heat-related illnesses. Block out direct sun and other heat sources. Drink fluids often, and before you are thirsty. Wear lightweight, light-colored, loose-fitting clothes. Avoid beverages containing alcohol or caffeine. Treatment Treatment of heat stroke involves rapid mechanical cooling along with standard resuscitation measures. The body temperature must be lowered quickly via conduction, convection, or evaporation. During cooling, the body temperature should be lowered to less than 39 degrees Celsius, ideally less than 38-38.5 degrees Celsius. In the field, the person should be moved to a cool area, such as indoors or to a shaded area. Clothing should be removed to promote heat loss through passive cooling. Conductive cooling methods such as ice-water immersion should also be used, if possible. Evaporative and convective cooling by a combination of cool water spray or cold compresses with constant air flow over the body, such as with a fan or air-conditioning unit, is also an effective alternative.
In hospital, mechanical cooling methods include ice-water immersion, infusion of cold intravenous fluids, placing ice packs or wet gauze around the person, and fanning. Aggressive ice-water immersion remains the gold standard for exertional heat stroke and may also be used for classic heat stroke. This method may require the effort of several people, and the person should be monitored carefully during the treatment process. Immersion should be avoided for an unconscious person but, if there is no alternative, it can be applied with the person's head above water. Rapid and effective cooling usually reverses concomitant organ dysfunction. Immersion in very cold water was once thought to be counterproductive by reducing blood flow to the skin and thereby preventing heat from escaping the body core. However, research has shown that this mechanism does not play a dominant role in the decrease in core body temperature brought on by cold water. Dantrolene, a muscle relaxant used to treat other forms of hyperthermia, is not an effective treatment for heat stroke. Antipyretics such as aspirin and acetaminophen are also not recommended as a means to lower body temperature in the treatment of heat stroke, and their use may lead to worsening liver damage. Cardiopulmonary resuscitation (CPR) may be necessary if the person goes into cardiac arrest. The person's condition should be reassessed and stabilized by trained medical personnel, and the person's heart rate and breathing should be monitored. IV fluid resuscitation is usually needed for circulatory failure and organ dysfunction and is also indicated if rhabdomyolysis is present. In severe cases, hemodialysis and ventilator support may be needed. Prognosis In elderly people who experience classic heat stroke, the mortality exceeds 50%. The mortality rate in exertional heat stroke is less than 5%. It was long believed that heat stroke only rarely leads to permanent deficits and that convalescence is almost complete. However, following the 1995 Chicago heat wave, researchers from the University of Chicago Medical Center studied all 58 patients with heat stroke severe enough to require intensive care at 12 area hospitals between July 12 and 20, 1995, ranging in age from 25 to 95 years. Nearly half of these patients died within a year: 21 percent before and 28 percent after release from the hospital. Many of the survivors had permanent loss of independent function; one-third had severe functional impairment at discharge, and none of them had improved after one year. The study also recognized that because of overcrowded conditions in all the participating hospitals during the crisis, the immediate care, which is critical, was not as comprehensive as it should have been. In rare cases, brain damage has been reported as a permanent sequela of severe heat stroke, most commonly cerebellar atrophy.
Society and culture In Slavic mythology, there is a personification of sunstroke, Poludnitsa (Lady Midday), a feminine demon clad in white that causes impairment or death to people working in the fields at midday. There was a traditional short break in harvest work at noon, to avoid attack by the demon. Antonín Dvořák's symphonic poem, The Noon Witch, was inspired by this tradition. Other animals Heatstroke can affect livestock, especially in hot, humid weather, or if the horse, cow, sheep or other animal is unfit, overweight, has a dense coat, is overworked, or is left in a horsebox in full sun. Symptoms include drooling, panting, high temperature, sweating, and rapid pulse. The animal should be moved to shade, drenched in cold water and offered water or electrolytes to drink.
Biology and health sciences
Types
Health
25003876
https://en.wikipedia.org/wiki/Pungency
Pungency
Pungency () refers to the taste of food commonly referred to as spiciness, hotness or heat, found in foods such as chili peppers. Highly pungent tastes may be experienced as unpleasant. The term piquancy () is sometimes applied to foods with a lower degree of pungency that are "agreeably stimulating to the palate". Examples of piquant food include mustard and curry. The primary substances responsible for pungent taste are capsaicin, piperine (in peppers) and allyl isothiocyanate (in radish, mustard and wasabi). Terminology In colloquial speech, the term "pungency" can refer to any strong, sharp smell or flavor. However, in scientific speech, it refers specifically to the "hot" or "spicy" quality of chili peppers. It is the preferred term by scientists as it eliminates the ambiguity arising from use of "hot", which can also refer to temperature, and "spicy", which can also refer to spices. For instance, a pumpkin pie can be both hot (out of the oven) and spicy (due to the common inclusion of spices such as cinnamon, nutmeg, allspice, mace, and cloves), but it is not pungent. Conversely, pure capsaicin is pungent, yet it is not naturally accompanied by a hot temperature or spices. As the Oxford, Collins, and Merriam-Webster dictionaries explain, "piquancy" can refer to mild pungency, that is, flavors and spices that are much less strong than chilli peppers, including, for example, the strong flavor of some tomatoes. In other words, pungency always refers to a very strong taste whereas piquancy refers to any spices and foods that are "agreeably stimulating to the palate", in other words to food that is spicy in the general sense of "well-spiced". Mildly pungent or sour foods may be referred to as tangy. Uses Pungency is often quantified in scales that range from mild to hot. The Scoville scale measures the pungency of chili peppers, as defined by the amount of capsaicin they contain. Pungency is not considered a taste in the technical sense because it is carried to the brain by a different set of nerves. While taste nerves are activated when consuming foods like chili peppers, the sensation commonly interpreted as "hot" results from the stimulation of somatosensory fibers in the mouth. Many parts of the body with exposed membranes that lack taste receptors (such as the nasal cavity, genitals, or a wound) produce a similar sensation of heat when exposed to pungent agents. The pungent sensation provided by chili peppers, black pepper and other spices like ginger and horseradish plays an important role in a diverse range of cuisines across the world. Pungent substances, like capsaicin, are used in topical analgesics and pepper sprays. Mechanism Pungency is sensed via chemesthesis, the sensitivity of the skin and mucous membranes to chemical substances. Substances such as piperine, capsaicin, and thiosulfinates can cause a burning or tingling sensation by inducing a trigeminal nerve stimulation together with normal taste reception. The pungent feeling caused by allyl isothiocyanate, capsaicin, piperine, and allicin is caused by activation of the heat thermo- and chemosensitive TRP ion channels including TRPV1 and TRPA1 nociceptors. The pungency of chilies may be an adaptive response to microbial pathogens. 
Favoring by humans Capsaicin evolved in peppers to deter seed-eating rodents in particular, which destroy seeds by grinding them and thwart their germination, while at the same time allowing birds to eat the fruit and disperse the seeds over much greater distances via defecation, thus also preventing the new seedlings from competing for natural resources with their parent plant (in birds, pepper seeds are not destroyed by consumption and digestion). It was found that birds do not feel pungency due to a lack of TRP channels, but mammals, including rodents and humans, do have them. Unlike most other mammals, however, many humans favor pungent and spicy food (including traditionally spicy regional cuisines). Multiple reasons for that have been proposed. The thrill-seeking theory suggests that some people are attracted to spicy taste due to intense sensations or thrills. According to the antimicrobial theory, spices have been added to foods in hot climates because of the antimicrobial properties of the substances involved. The only other mammal known to consume pungent food is the northern treeshrew (Tupaia belangeri).
Biology and health sciences
Sensory nervous system
Biology
39152786
https://en.wikipedia.org/wiki/Lionfish
Lionfish
Pterois is a genus of venomous marine fish, commonly known as the lionfish, native to the Indo-Pacific. It is characterized by conspicuous warning coloration with red or black bands and ostentatious dorsal fins tipped with venomous spines. Pterois radiata, Pterois volitans, and Pterois miles are the most commonly studied species in the genus. Pterois species are popular aquarium fish. P. volitans and P. miles are recent and significant invasive species in the west Atlantic, Caribbean Sea and Mediterranean Sea. Taxonomy Pterois was described as a genus in 1817 by German naturalist, botanist, biologist, and ornithologist Lorenz Oken. In 1856 the French naturalist Eugène Anselme Sébastien Léon Desmarest designated Scorpaena volitans, which had been named by Bloch in 1787 and which was the same as Linnaeus's 1758 Gasterosteus volitans, as the type species of the genus. This genus is classified within the tribe Pteroini of the subfamily Scorpaeninae within the family Scorpaenidae. The genus name Pterois is based on Georges Cuvier's 1816 French name, "Les Pterois", meaning "fins" which is an allusion to the high dorsal and long pectoral fins. Species Currently, 12 recognized species are in this genus: Molecular studies and morphological data have indicated that P. lunulata is a junior synonym of P. russelii, and that P. volitans may be a hybrid between P. miles and P. russelii sensu lato. Description According to the National Oceanic and Atmospheric Administration, "lionfish have distinctive brown or maroon, and white stripes or bands covering the head and body. They have fleshy tentacles above their eyes and below the mouth; fan-like pectoral fins; long, separated dorsal spines; 13 dorsal spines; 10–11 dorsal soft rays; 3 anal spines; and 6–7 anal soft rays. An adult lionfish can grow as large as 18 inches." Juvenile lionfish have a unique tentacle located above their eye sockets that varies in phenotype between species. The evolution of this tentacle is suggested to serve to continually attract new prey; studies also suggest it plays a role in sexual selection. Ecology and behavior Pterois species can live from 5 to 15 years and have complex courtship and mating behaviors. Females frequently release two mucus-filled egg clusters, which can contain as many as 15,000 eggs. In total, they can lay up about 2 million eggs per year. All species are aposematic; they have conspicuous coloration with boldly contrasting stripes and wide fans of projecting spines, advertising their ability to defend themselves. Prey Pterois prey mostly on small fish, invertebrates, and mollusks, with up to six different species of prey found in the gastrointestinal tracts of some specimens. Lionfish feed most actively in the morning. Lionfish are skilled hunters, using specialized swim bladder muscles to provide precise control of their location in the water column, allowing them to alter their center of gravity the better to attack prey. They blow jets of water while approaching prey, which serves to confuse them and alter the orientation of the prey so that the smaller fish is facing the lionfish. This results in a higher degree of predatory efficiency as head-first capture is easier for the lionfish. The lionfish then spreads its large pectoral fins and swallows its prey in a single motion. 
Predators and parasites Aside from instances of larger lionfish individuals engaging in cannibalism on smaller individuals, adult lionfish have no known natural predators, likely due to the effectiveness of their venomous spines: when threatened, a lionfish will orient its body to keep its dorsal fin pointed at the predator, even if this means swimming upsidedown. This does not always save it, however: Moray eels, bluespotted cornetfish, barracuda and large groupers have been observed preying on lionfish. Sharks are also believed to be capable of preying on lionfish with no ill effects from their spines. Park officials of the Roatan Marine Park in Honduras have attempted to train sharks to feed on lionfish to control the invasive populations in the Caribbean. The Bobbit worm, an ambush predator, has been filmed preying upon lionfish in Indonesia. Predators of larvae and juvenile lionfish remain unknown, but may prove to be the primary limiting factor of lionfish populations in their native range. Parasites of lionfish have rarely been observed, and are assumed to be infrequent. They include isopods and leeches. Interaction with humans Lionfish are known for their venomous fin rays, which makes them hazardous to other marine animals, as well as humans. Pterois venom produced negative inotropic and chronotropic effects when tested in both frog and clam hearts and has a depressive effect on rabbit blood pressure. These results are thought to be due to nitric oxide release. In humans, Pterois venom can cause systemic effects such as pain, nausea, vomiting, fever, headache, numbness, paresthesia, diarrhea, sweating, temporary paralysis of the limbs, respiratory insufficiency, heart failure, convulsions, and even death. Fatalities are more common in very young children, the elderly, or those who are allergic to the venom. The venom is rarely fatal to healthy adults, but some species have enough venom to produce extreme discomfort for a period of several days. Moreover, Pterois venom poses a danger to allergic victims as they may experience anaphylaxis, a serious and often life-threatening condition that requires immediate emergency medical treatment. Severe allergic reactions to Pterois venom include chest pain, severe breathing difficulties, a drop in blood pressure, swelling of the tongue, sweating, or slurred speech. Such reactions can be fatal if not treated. Lionfish are edible if prepared correctly. Native range and habitat The lionfish is native to the Indian Ocean and Western Pacific Ocean. They can be found around the seaward edge of shallow coral reefs, lagoons, rocky substrates, and on mesophotic reefs, and can live in areas of varying salinity, temperature, and depth. They are also frequently found in turbid inshore areas and harbors, and have a generally hostile attitude and are territorial toward other reef fish. They are commonly found from shallow waters down to past depth, and have in several locations been recorded to 300 m depth. Many universities in the Indo-Pacific have documented reports of Pterois aggression toward divers and researchers. P. volitans and P. miles are native to subtropical and tropical regions from southern Japan and southern Korea to the east coast of Australia, Indonesia, Micronesia, French Polynesia, and the South Pacific Ocean. P. miles is also found in the Indian Ocean, from Sumatra to Sri Lanka and the Red Sea. Invasive introduction and range Western tropical Atlantic Two of the 12 species of Pterois, the red lionfish (P. 
volitans) and the common lionfish (P. miles), have established themselves as significant invasive species off the East Coast of the United States and in the Caribbean. About 93% of the invasive population in the Western Atlantic is P. volitans. The red lionfish is found off the East Coast and Gulf Coast of the United States and in the Caribbean Sea, and was likely first introduced off the Florida coast by the early to mid-1980s. This introduction may have occurred in 1992 when Hurricane Andrew destroyed an aquarium in southern Florida, releasing six lionfish into Biscayne Bay. A lionfish was discovered off the coast of Dania Beach, south Florida, as early as 1985, before Hurricane Andrew. The lionfish resemble those of the Philippines, implicating the aquarium trade, suggesting individuals may have been purposely discarded by dissatisfied aquarium enthusiasts. This is in part because lionfish require an experienced aquarist, but are often sold to novices who find their care too difficult. In 2001, the National Oceanic and Atmospheric Administration (NOAA) documented several sightings of lionfish off the coast of Florida, Georgia, South Carolina, North Carolina, Bermuda, and Delaware. In August 2014, when the Gulf Stream was discharging into the mouth of the Delaware Bay, two lionfish were caught by a surf fisherman off the ocean side shore of Cape Henlopen State Park: a red lionfish that weighed and a common lionfish that weighed . Three days later, a red lionfish was caught off the shore of Broadkill Beach which is in the Delaware Bay approximately north of Cape Henlopen State Park. Lionfish were first detected in the Bahamas in 2004. In June 2013 lionfish were discovered as far east as Barbados, and as far south as the Los Roques Archipelago and many Venezuelan continental beaches. Lionfish were first sighted in Brazilian waters in late 2014. Genetic testing on a single captured individual revealed that it was related to the populations found in the Caribbean, suggesting larval dispersal rather than an intentional release. Adult lionfish specimens are now found along the United States East Coast from Cape Hatteras, North Carolina, to Florida, and along the Gulf Coast to Texas. They are also found off Bermuda, the Bahamas, and throughout the Caribbean, including the Turks and Caicos, Haiti, Cuba, the Dominican Republic, the Cayman Islands, Aruba, Curacao, Trinidad and Tobago, Bonaire, Puerto Rico, St. Croix, Belize, Honduras, Colombia and Mexico. Population densities continue to increase in the invaded areas, resulting in a population boom of up to 700% in some areas between 2004 and 2008. Pterois species are known for devouring many other aquarium fishes, unusual in that they are among the few fish species to successfully establish populations in open marine systems. Pelagic larval dispersion is assumed to occur through oceanic currents, including the Gulf Stream and the Caribbean Current. Ballast water can also contribute to the dispersal. Extreme temperatures present geographical constraints in the distribution of aquatic species, indicating temperature tolerance plays a role in the lionfish's survival, reproduction, and range of distribution. The abrupt differences in water temperatures north and south of Cape Hatteras directly correlate with the abundance and distribution of Pterois. 
Pterois expanded along the southeastern coast of the United States and occupied thermal-appropriate zones within 10 years, and the shoreward expansion of this thermally appropriate habitat is expected in coming decades as winter water temperatures warm in response to anthropogenic climate change. Although the timeline of observations points to the east coast of Florida as the initial source of the western Atlantic invasion, the relationship of the United States East Coast and Bahamian lionfish invasion is uncertain. Lionfish can tolerate a minimum salinity of 5 ppt (0.5%) and even withstand pulses of fresh water, which means they can also be found in estuaries of freshwater rivers. The lionfish invasion is considered to be one of the most serious recent threats to Caribbean and Florida coral reef ecosystems. To help address the pervasive problem, in 2015, the NOAA partnered with the Gulf and Caribbean Fisheries Institute to set up a lionfish portal to provide scientifically accurate information on the invasion and its impacts. The lionfish web portal is aimed at all those involved and affected, including coastal managers, educators, and the public, and the portal was designed as a source of training videos, fact sheets, examples of management plans, and guidelines for monitoring. The web portal draws on the expertise of NOAA's own scientists, as well as that of other scientists and policy makers from academia or NGOs, and managers. Mediterranean Lionfish have also established themselves in parts of the Mediterranean - with records down to 110 m depth. Lionfish have been found in Maltese waters and waters of other Mediterranean countries, as well as Croatia. Warming sea temperatures may be allowing lionfish to further expand their range in the Mediterranean. Long-term effects of invasion Lionfish have successfully pioneered the coastal waters of the Atlantic in less than a decade, and pose a major threat to reef ecological systems in these areas. A study comparing their abundance from Florida to North Carolina with several species of groupers found they were second only to the native scamp grouper and equally abundant to the graysby, gag, and rock hind. This could be due to a surplus of resource availability resulting from the overfishing of lionfish predators like grouper. Although the lionfish has not expanded to a population size currently causing major ecological problems, their invasion in the United States coastal waters could lead to serious problems in the future. One likely ecological impact caused by Pterois could be their impact on prey population numbers by directly affecting food web relationships. This could ultimately lead to reef deterioration and could negatively influence Atlantic trophic cascade. Lionfish have already been shown to overpopulate reef areas and display aggressive tendencies, forcing native species to move to waters where conditions might be less than favorable. Lionfish could be reducing Atlantic reef diversity by up to 80%. In July 2011, lionfish were reported for the first time in the Flower Garden Banks National Marine Sanctuary off the coast of Louisiana. Sanctuary officials said they believe the species will be a permanent fixture, but hope to monitor and possibly limit their presence. Since lionfish thrive so well in the Atlantic and the Caribbean due to nutrient-rich waters and lack of predators, the species has spread tremendously. A single lionfish, located on a reef, reduced young juvenile reef fish populations by 79%. 
Control and eradication efforts Red lionfish are an invasive species, yet relatively little is known about them. NOAA research foci include investigating biotechnical solutions for control of the population, and understanding how the larvae are dispersed. Another important area of study is what controls the population in its native area. Researchers hope to discover what moderates lionfish populations in the Indo-Pacific and apply this information to control the invasive populations, without introducing additional invasive species. Two new trap designs have been introduced to help with deep-water control of the lionfish. The traps are low and vertical and remain open the entire time of deployment. The vertical relief of the trap attracts lionfish, which makes catching them easier. These new traps are good for catching lionfish without affecting the native species that are ecologically, recreationally, and commercially important to the surrounding areas. These traps are more beneficial than older traps because they limit the potential of catching noninvasive creatures, they have bait that is only appealing to lionfish, they guarantee a catch, and they are easy to transport. Remotely Operated Vehicles (ROVs) are being developed to help hunt the lionfish. The Reefsweeper ROV uses a harpoon gun to snag its target. The vehicle is able to hunt fish that may not otherwise be obtainable through human intervention alone. Rigorous and repeated removal of lionfish from invaded waters could potentially control the exponential expansion of the lionfish in invaded waters. A 2010 study showed effective maintenance would require the monthly harvest of at least 27% of the adult population. Because lionfish are able to reproduce monthly, this effort must be maintained throughout the entire year. To accomplish even these numbers seems unlikely, but as populations of lionfish continue to grow throughout the Caribbean and Western Atlantic, actions are being taken to attempt to control the quickly growing numbers. In November 2010, for the first time the Florida Keys National Marine Sanctuary began licensing divers to kill lionfish inside the sanctuary in an attempt to eradicate the fish. Conservation groups and community organizations in the Eastern United States have organized hunting expeditions for Pterois such as the Environment Education Foundation's 'lionfish derby' held annually in Florida. Divemasters from Cozumel to the Honduran Bay Islands and at Reef Conservation International which operates in the Sapodilla Cayes Marine Reserve off Punta Gorda, Belize, now routinely spear them during dives. While diver culling removes lionfish from shallow reefs reducing their densities, lionfish have widely been reported on mesophotic coral ecosystems (reefs from 30 to 150 m) in the western Atlantic and even in deep-sea habitats (greater than 200 m depth). Recent studies have suggested that the effects of culling are likely to be depth-specific, and so have limited impacts on these deeper reef populations. Therefore, other approaches such as trapping are advocated for removing lionfish from deeper reef habitats. Long-term culling has also been recorded to cause behavior changes in lionfish populations. For example, in the Bahamas, lionfish on heavily culled reefs have become more wary of divers and hide more within the reef structure during the day when culling occurs. 
Similar lionfish responses to divers have been observed when comparing culled sites and sites without culling in Honduras, including altered lionfish behaviour on reefs too deep for regular culling but adjacent to heavily culled sites, potentially implying movement of individuals between depths. While culling by marine protection agencies and volunteer divers is an important element of control efforts, development of market-based approaches, which create commercial incentives for removals, has been seen as a means to sustain control efforts. The foremost of these market approaches is the promotion of lionfish as a food item. Another is the use of lionfish spines, fins, and tails for jewelry and other decorative items. Lionfish jewelry production initiatives are underway in Belize, the Bahamas, and St. Vincent and the Grenadines. In 2014 at Jardines de la Reina National Marine Park in Cuba, a diver experimented with spearing and feeding lionfish to sharks in an effort to teach them to seek out the fish as prey. By 2016, Cuba was finding it more effective to fish for lionfish as food. "Lionfish as Food" campaign In 2010, NOAA (which also encourages people to report lionfish sightings, to help track lionfish population dispersal) began a campaign to encourage the consumption of the fish. The "Lionfish as Food" campaign encourages human hunting of the fish as the only form of control known to date. Increasing the catch of lionfish could not only help maintain a reasonable population density, but also provide an alternative fishing source to overfished populations, such as grouper and snapper. The taste is described as "buttery and tender". To promote the campaign, the Roman Catholic Church in Colombia agreed to have their clergy's sermons suggest that their parishioners (84% of the population) eat lionfish on Fridays and during Lent and Easter, which proved highly successful in decreasing the invasive fish problem. When properly filleted, the naturally venomous fish is safe to eat. Some concern exists about the risk of ciguatera food poisoning (CFP) from the consumption of lionfish, and the FDA included lionfish on the list of species at risk for CFP when they are harvested in some areas that have tested positive for ciguatera. No cases of CFP from the consumption of lionfish have been verified, and published research has found that the toxins in lionfish venom may be causing false positives in tests for the presence of ciguatera. The Reef Environmental Education Foundation provides advice to restaurant chefs on how they can incorporate the fish into their menus. The NOAA calls the lionfish a "delicious, delicately flavored fish" similar in texture to grouper. Cooking techniques and preparations for lionfish include deep-frying, ceviche, jerky, grilling, and sashimi. Another initiative is centered on the production of leather from lionfish hides. It seeks to establish a production chain and market for high-quality leather produced from the hides. The goal is to control invasive lionfish populations while providing economic benefits to local fishing communities.
Biology and health sciences
Acanthomorpha
Animals
35245121
https://en.wikipedia.org/wiki/Zanthoxylum%20piperitum
Zanthoxylum piperitum
Zanthoxylum piperitum, also known as Japanese pepper or Japanese prickly-ash, is a deciduous aromatic spiny shrub or small tree of the citrus and rue family Rutaceae, native to Japan and Korea. It is called () in Japan and () in Korea. Both the leaves and fruits (peppercorns) are used as aromatics and flavorings in these countries. It is closely related to the Chinese Sichuan pepper, which comes from plants of the same genus. Names "Japanese pepper" Z. piperitum is called in Japan, but the corresponding cognate term in Korean, (), refers to a different species, Z. schinifolium, known as or in Japan. In Korea, Z. piperitum is called (). However, in several regional dialects, notably Gyeongsang dialect, it is also called () or (). "Japanese prickly-ash" has been used as the standard American common name. Varieties The variety Z. piperitum var. inerme Makino, known in Japan as , is thornless, or nearly so, and has been widely cultivated for commercial harvesting. The forma Z. piperitum f. pubescens (Nakai) W. T. Lee is called () in Korea, and is assigned the English name "hairy chopi". Range Its natural range spans from Hokkaido to Kyushu in Japan, southern parts of the Korean peninsula, and the Chinese mainland. Description The plant belongs to the citrus and rue family, Rutaceae. The tree blooms in April to May, forming axillary flower clusters, about 5 mm across and yellow-green in color. It is dioecious, and the flowers of the male plant can be consumed as hana-sanshō, while the female flowers yield berries or peppercorns of about 5 mm. In autumn, these berries ripen, turn scarlet and burst, scattering the black seeds within. The branch grows pairs of sharp thorns and has odd-pinnately compound leaves, alternately arranged, with 5–9 pairs of ovate leaflets having crenate (slightly serrated) margins. It is a host plant for the Japanese indigenous swallowtail butterfly species, the citrus butterfly Papilio xuthus, which has also spread to Hawaii. Chemical analysis has revealed that the seeds contain remarkably high concentrations of sugar-modified derivatives (glucosides) of N-methylserotonin and N,N-dimethylserotonin, also known as bufotenin. Cultivation In Japan, Wakayama Prefecture boasts 80% of domestic production. Aridagawa, Wakayama, produces a specialty variety called (), which bears large fruits and clusters, rather like a bunch of grapes. The thornless variety, , derives its name from its place of origin, the Asakura district in the now defunct , integrated into Yabu, Hyōgo. Uses Culinary The Japanese pepper is closely related to the Sichuan pepper of China, and they are in the same genus. Japanese cuisine The pulverized mature fruits ("peppercorns" or "berries"), known as "Japanese pepper" or (), are the standard spice for sprinkling on the kabayaki-unagi (broiled eel) dish. It is also one of the seven main ingredients of the blended spice called shichimi, which also contains red chili peppers. Finely ground Japanese pepper, , is nowadays usually sold in sealed packets, and individual serving sizes are included inside heat-and-serve broiled eel packages. Young leaves and shoots, pronounced or (, ), herald the spring season, and often garnish grilled fish and soups. They have a distinctive flavor which is not to the liking of everyone. It is a customary ritual to put a leaf between cupped hands, and clap the hands with a popping sound, this supposedly serving to bring out the aroma.
The young leaves are crushed and blended with miso using a suribachi (mortar) to make a paste, a pesto sauce of sorts, and then used to make various aemono (tossed salads). The stereotypical main ingredient for the resultant kinome-ae is the fresh harvest of bamboo shoots, but the sauce may be tossed (or delicately "folded") into sashimi, clams, squid, or other vegetables such as (angelica-tree shoots). The immature green berries are called (), and these may be blanched and salted, or simmered in soy sauce into dark-brown tsukudani, which is eaten as a condiment. The berries are also available as , which is just steeped in soy sauce. The berries are also cooked with small fry (young fish) and flavored with soy sauce (), a specialty item of Kyoto, since its Mount Kurama outskirts are a renowned growing area of the plant. There is also a dessert named , a rice-cake dessert flavored with ground Japanese pepper. It is a specialty in the north. In central and northeastern Japan, there is also a non-sticky rice-cake type confection called goheimochi, which is basted with miso-based paste and grilled, sometimes using the Japanese pepper as a flavor additive to the miso. Also being marketed are sansho-flavored arare (rice crackers), snack foods, and sweet sansho-mochi. Korean cuisine Both the plant itself and its fruit (or peppercorn), known as (), are called by many names including (), (), (), and () in different dialects used in southern parts of Korea, where the plant is extensively cultivated and consumed. Before the introduction of chili peppers from the New World, which led to the creation of the chili paste gochujang, the Koreans used a paste spiced with and black peppers. In Southern Korean cuisine, dried and ground chopi fruit is used as a condiment served with varieties of food, such as (loach soup), (spicy fish stew), and hoe (raw fish). Young leaves of the plant, called (), are used as a culinary herb or a vegetable in Southern Korean cuisine. The leaves are also eaten pickled as , pan-fried to make (pancake), or deep-fried as fritters such as and . Sometimes, chopi leaves are added to an anchovy-salt mixture to make herbed fish sauce, called . Craftwork In Japan, the thick wood of the tree is traditionally made into a gnarled and rough-hewn wooden pestle (), to use with a suribachi. While sansho-wood pestles are less common today, they impart a subtle flavor to foods ground with them. Folk medicine Japan In Japanese pharmaceuticals, the mature husks with seeds removed are considered the crude medicine form of . It is an ingredient in , and the wine served ceremonially. The pungent taste derives from sanshool and sanshoamide. It also contains the aromatic oils geraniol, dipentene, citral, etc. Fishing In southern parts of Korea, the fruit is traditionally used in fishing. Being poisonous to small fish, a few fruits dropped in a pond make the fish float to the surface shortly afterward.
Biology and health sciences
Herbs and spices
Plants