id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
3513378 | https://en.wikipedia.org/wiki/Velvet%20crab | Velvet crab | The velvet crab (Necora puber), also known as the velvet swimming crab, devil crab, fighter crab, or lady crab, is a species of crab from the North-East Atlantic and the Mediterranean. It is the largest of the swimming crab family (Portunidae) found in British coastal waters. It has a carapace width of up to , and is the only species in the genus Necora. Its body is coated with short hairs, giving the animal a velvety texture, hence the common name. It is one of the major crab species for United Kingdom fisheries, in spite of its relatively small size.
The velvet crab lives from southern Norway to Western Sahara, in the North Sea and the North Atlantic as well as in the western parts of the Mediterranean Sea, on rocky bottoms from the shoreline to a depth of about . The last pair of pereiopods is flattened to facilitate swimming.
| Biology and health sciences | Crabs and hermit crabs | Animals |
6182210 | https://en.wikipedia.org/wiki/Populus%20tremuloides | Populus tremuloides | Populus tremuloides is a deciduous tree native to cooler areas of North America, one of several species referred to by the common name aspen. It is commonly called quaking aspen, trembling aspen, American aspen, mountain or golden aspen, trembling poplar, white poplar, and popple, as well as others. The trees have tall trunks, up to tall, with smooth pale bark, scarred with black. The glossy green leaves, dull beneath, become golden to yellow, rarely red, in autumn. The species often propagates through its roots to form large clonal groves originating from a shared root system. These roots are not rhizomes, as new growth develops from adventitious buds on the parent root system (the ortet).
Populus tremuloides is the most widely distributed tree in North America, being found from Canada to central Mexico. It is the defining species of the aspen parkland biome in the Prairie Provinces of Canada and extreme northwest Minnesota.
Description
The quaking aspen is a tall, fast-growing tree, usually at maturity, with a trunk in diameter; records are in height and in diameter. The bark is relatively smooth, whitish (light green when young), and is marked by thick black horizontal scars and prominent black knots. Parallel vertical scars are tell-tale signs of elk, which strip off aspen bark with their front teeth.
The leaves on mature trees are nearly round, in diameter with small rounded teeth, and a long flattened petiole. The leaves are green above and gray below. Young trees and root sprouts have much larger ( long), nearly triangular leaves. (Some species of Populus have petioles flattened partially along their length, while the aspens and some other poplars have them flattened from side to side along the entire length of the petiole.)
Aspens are dioecious, with separate male and female clones. The flowers are catkins long, produced in early spring before the leaves. The fruit is a pendulous string of capsules, each capsule containing about ten minute seeds embedded in cottony fluff, which aids wind dispersal of the seeds when they are mature in early summer. Trees as young as 2–3 years old may begin seed production, but significant output starts at 10 years of age. Best seed production is obtained between the ages of 50 and 70 years.
Quaking aspen grows more slowly in the dry conditions of western North America than it does in the more humid east and also lives longer—ages of 80–100 years are typical, with some individuals living 200 years; the root system can live much longer. In the east, stands decay faster, sometimes in 60 years or less depending on the region.
Name
The quaking or trembling of the leaves that is referred to in the common names is due to the flexible flattened petioles. The specific epithet, tremuloides, evokes this trembling behavior and can be literally translated as "like (Populus) tremula", the European trembling aspen.
Distribution
Quaking aspen occurs across Canada in all provinces and territories, with the possible exception of regions of Nunavut north of the James Bay islands. In the United States, it can be found as far north as the northern foothills of the Brooks Range in Alaska, where road margins and gravel pads provide islands of well-drained habitat in a region where soils are often waterlogged due to underlying permafrost. It occurs at low elevations as far south as northern Nebraska and central Indiana. In the Western United States, this tree rarely survives at elevations lower than due to hot summers experienced below that elevation, and is generally found at .
It grows at high altitudes as far south as Guanajuato, Mexico. It grows in isolated areas in northeastern Mexico as well as Baja California, Jalisco, the State of Mexico, Michoacán, Sinaloa, Sonora, and Veracruz.
Quaking aspen grows in a wide variety of climatic conditions. January and July average temperatures range from and in the Alaska Interior to and in Fort Wayne, Indiana. Average annual precipitation ranges from in Gander, Newfoundland and Labrador to as little as in the Alaska Interior. The southern limit of the species' range roughly follows the mean July isotherm.
In the sagebrush steppe, aspens occur with chokecherry, serviceberry, and hawthorn, forming a habitable haven for animal life. Shrub-like dwarf clones exist in marginal environments too cold and dry to be hospitable to full-size trees, for example at the species' upper elevation limits in the White Mountains.
Ecology
Quaking aspen propagates itself primarily through root sprouts, and extensive clonal colonies are common. Each colony is its own clone, and all trees in the clone have identical characteristics and share a single root structure. A clone may turn color earlier or later in the fall than its neighboring aspen clones. Fall colors are usually bright tones of yellow; in some areas, red blushes may be occasionally seen. As all trees in a given clonal colony are considered part of the same organism, one clonal colony, named Pando, is considered the heaviest and oldest living organism on the planet. Pando spans 43 hectares and weighs six million kilograms; most scientists agree that it became established sometime between 8,000 and 12,000 years ago, when the climate in the region shifted at the end of the last ice age. Aspens do produce seeds, but seldom grow from them. Pollination is inhibited by the fact that aspens are either male or female, and large stands are usually all clones of the same sex. Even if pollinated, the small seeds (three million per pound) are only viable a short time, as they lack a stored food source or a protective coating.
The buds and bark supply food for snowshoe hares, moose, black bears, cottontail rabbits, porcupines, deer, grouse, and mountain beavers. The shoots are eaten by sheep, goats, and cattle. Sheep and goats also browse the foliage, as do game animals including elk. Grouse and quail especially eat the buds in winter. Mammals such as beavers and rabbits eat the bark, foliage, and buds. Beavers also store aspen logs for winter food. Other animals nest in aspen groves. The leaves of the quaking aspen and other species in the genus Populus serve as food for caterpillars of various moths and butterflies. Quaking aspen trees also serve as hosts to certain damaging insects such as the large aspen tortrix.
Dieback
Increased mortality in trembling aspen stands has been reported since the early 1990s across North America. As this accelerated in 2004, a debate over causes began. This increased dieback has been linked to multiple stressors, such as defoliation by the forest tent caterpillar (Malacosoma disstria), wood-boring beetles such as the poplar borer (Saperda calcarata) and the bronze poplar borer (Agrilus liragus), and fungal pathogens such as Cytospora canker (Valsa sordida).
Many areas of the Western US have experienced increased dieback, which is often attributed to ungulate grazing and wildfire suppression. At high altitudes where grasses can be rare, ungulates can browse young aspen sprouts and prevent those young trees from reaching maturity. As a result, some aspen groves close to cattle or other grazing animals, such as deer or elk, have very few young trees and can be invaded by conifers, which are not typically browsed. Another possible deterrent to aspen regeneration is widespread wildfire suppression. Aspens are vigorous resprouters, and even though the above-ground portion of the organism may die in a wildfire, the roots, which are often protected from lethal temperatures during a fire, will sprout new trees soon after a fire. Disturbances such as fires seem to be a necessary ecological event for aspens to compete with conifers, which tend to replace aspens over long, disturbance-free intervals. The current dieback in the American West may have roots in the strict fire suppression policy in the United States. On the other hand, the widespread decimation of conifer forests by the mountain pine beetle may provide increased opportunities for aspen groves to proliferate under the right conditions.
Increased mortality has also been linked in turn to climate change. Thaw-freeze events and light snowfall in late winter, both results of increased temperatures, have led to increased dieback in Southern and Western Canada. Furthermore, climate records show that historically, most periods of aspen decline have coincided with periods of severe drought, which has worsened in recent years due to a changing climate. Many stands of aspen that have been affected by climate change in recent years have poor regeneration potential, leading to concerns of widespread loss of aspen cover in the future.
Because aspen regenerates vegetatively, so that an entire group of trees is essentially a single clone, there is concern that a stressor or pathogen affecting one tree could eventually kill all of the trees, presuming they share the same vulnerability. A conference was held in Utah in September 2006 to share notes and consider investigative methodology.
Uses
Aspen bark contains a substance that was extracted by indigenous North Americans and European settlers of the western U.S. as a quinine substitute.
Like other poplars, aspens make poor fuel wood, as they dry slowly, rot quickly, and do not give off much heat. Yet they are still widely burned in campgrounds because they are cheap, plentiful, and little used as building lumber. Pioneers in the North American west used them to build log cabins and dugouts, though they were not the preferred species.
Aspen wood is used for pulp products (its main application in Canada) such as books, newsprint, and fine printing paper. It is especially good for panel products such as oriented strand board and waferboard. It is light in weight and is used for furniture, boxes and crates, core stock in plywood, and wall panels.
Culture
The quaking aspen is the state tree of Utah.
| Biology and health sciences | Malpighiales | Plants |
6186026 | https://en.wikipedia.org/wiki/Alloy%20steel | Alloy steel | Alloy steel is steel that is alloyed with a variety of elements in amounts between 1.0% and 50% by weight, typically to improve its mechanical properties.
Types
Alloy steels divide into two groups: low and high alloy. The boundary between the two is disputed. Smith and Hashemi define the difference at 4.0%, while Degarmo, et al., define it at 8.0%. Most alloy steels are low-alloy.
The simplest steels are iron (Fe) alloyed with (0.1% to 1%) carbon (C) and nothing else (excepting slight impurities); these are called carbon steels. However, alloy steel encompasses steels with additional (metal) alloying elements. Common alloyants include manganese (Mn) (the most common), nickel (Ni), chromium (Cr), molybdenum (Mo), vanadium (V), silicon (Si), and boron (B). Less common alloyants include aluminum (Al), cobalt (Co), copper (Cu), cerium (Ce), niobium (Nb), titanium (Ti), tungsten (W), tin (Sn), zinc (Zn), lead (Pb), and zirconium (Zr).
Properties
Alloy steels variously improve strength, hardness, toughness, wear resistance, corrosion resistance, hardenability, and hot hardness. To achieve these improved properties the metal may require specific heat treating, combined with strict cooling protocols.
Although alloy steels have been made for centuries, their metallurgy was not well understood until the advancing chemical science of the nineteenth century revealed their compositions. Alloy steels from earlier times were expensive luxuries made on the model of "secret recipes" and forged into tools such as knives and swords. Machine age alloy steels were tool steels and stainless steels.
Because of iron's ferromagnetic properties, some alloys find important applications where their responses to magnetism are valued, including in electric motors and in transformers.
Low-alloy steels
Material science
Alloying elements enable specific properties. As a guideline, alloying elements are added in lower percentages (less than 5%) to increase strength or hardenability, or in larger percentages (over 5%) to improve corrosion resistance or temperature stability.
The alloying elements tend to form either solid solutions, compounds or carbides.
Nickel is highly soluble in ferrite; it does not form carbides, and the compounds it does form are usually Ni3Al.
Aluminum dissolves in ferrite and forms Al2O3 and AlN. Silicon is also soluble and usually forms SiO2•MxOy.
Manganese mostly dissolves in ferrite; it also forms the compounds MnS and MnO•SiO2, as well as the carbide (Fe,Mn)3C.
Chromium partitions between the ferrite and carbide phases in steel, forming (Fe,Cr)3C, Cr7C3, and Cr23C6. The type of carbide that chromium forms depends on the amount of carbon and other alloying elements present.
Tungsten and molybdenum form carbides if there is enough carbon and an absence of stronger carbide-forming elements (i.e., titanium and niobium); in that case they form the carbides W2C and Mo2C, respectively.
Vanadium, titanium, and niobium are strong carbide-forming elements, forming vanadium carbide, titanium carbide, and niobium carbide, respectively.
Eutectoid temperature
Alloying elements can have an effect on the eutectoid temperature.
Manganese and nickel lower the eutectoid temperature and are known as austenite stabilizing elements. With enough of these elements the austenitic structure may form at room temperature.
Carbide-forming elements raise the eutectoid temperature and stabilize ferrite.
Microstructure
The properties of steel depend on its microstructure: the arrangement of different phases, some harder, some with greater ductility. At the atomic level, the four phases of automotive steel include martensite (the hardest yet most brittle), bainite (less hard), ferrite (more ductile), and austenite (the most ductile). Steelmakers arrange these phases by manipulating the intervals (sometimes by seconds only) and temperatures of the heating and cooling process.
Transformation-induced plasticity
TRIP steels transform from relatively ductile to relatively hard under deformation such as in a car crash. Deformation transforms austenitic microstructure to martensitic microstructure. TRIP steels use relatively high carbon content to create the austenitic microstructure. Relatively high silicon/aluminum content suppresses carbide precipitation in the bainite region and helps accelerate ferrite/bainite formation. This helps retain carbon to support austenite at room temperature. A specific cooling process reduces the austenite/martensite transformation during forming. TRIP steels typically require an isothermal hold at an intermediate temperature during cooling, which produces some bainite. The additional silicon/carbon requires weld cycle modification, such as the use of pulsating welding or dilution welding.
In one approach steel is heated to a high temperature, cooled somewhat, held stable for an interval and then quenched. This produces islands of austenite surrounded by a matrix of softer ferrite, with regions of harder bainite and martensite. The resulting product can absorb energy without fracturing, making it useful for auto parts such as bumpers and pillars.
Three generations of advanced, high-strength steel are available. The first was created in the 1990s, increasing strength and ductility. A second generation used new alloys to further increase ductility, but these were expensive and difficult to manufacture. The third generation is emerging: refined heating and cooling patterns increase strength at some cost in ductility (relative to the second generation). These steels are claimed to approach nearly ten times the strength of earlier steels and are much cheaper to manufacture.
Intermetallics
Researchers created an alloy with the strength of steel and the lightness of a titanium alloy. It combined iron, aluminum, carbon, manganese, and nickel. The other ingredient was uniformly distributed nanometer-sized particles of a B2 intermetallic (a compound of two metals with equal numbers of atoms). The team's use of nickel avoided problems encountered in earlier attempts to use B2, while increasing ductility.
| Physical sciences | Iron alloys | Chemistry |
1848556 | https://en.wikipedia.org/wiki/Giraffe%20weevil | Giraffe weevil | The giraffe weevil (Trachelophorus giraffa) is a species of small weevil endemic to Madagascar. They are black-bodied and have bright red elytra covering their wings. Giraffe weevils are known for their elongated necks, with males having necks 2 to 3 times as long as those of females. There are several advantages to their elongated necks, including using them for combat, attracting mates, building nests, and acquiring resources. In the field of coleopterology, giraffe weevils are of interest because they exhibit sexual dimorphism. There are other beetle species that share the common name giraffe weevil, such as the New Zealand giraffe weevil Lasiorhynchus barbicornis.
Diet and lifestyle
Giraffe weevils spend the entirety of their lives on trees in the Madagascar forests. As such their diet mainly consists of the leaves of the trees they dwell in.
Predators
Research has not identified any predators that specifically target giraffe weevils. Common predators in the Madagascar forest that prey upon beetles and their larvae in general are birds and small mammals like lemurs and fossa.
Reproduction
In order to attract a mating partner, male giraffe weevils have been known to perform elaborate displays involving the swaying of their necks, showcasing their vibrant colors. The female giraffe weevil then evaluates the display, and if she approves, the male has the opportunity to mate.
While there has not been any quantitative research regarding the reproductive habits of giraffe weevils, certain observations have shed light on some unique behaviors. Females may roll up a leaf and lay a single egg inside the leaf tube, snipping it off to fall onto the forest floor. The leaf then provides the larva with food in its first days of life.
Sexual dimorphism
Giraffe weevils exhibit sexual dimorphism. Male giraffe weevils have elongated necks that females lack.
Culture
Many cultures use beetles as a form of art and expression. In Madagascar, giraffe weevils are sold and used as decor or jewelry. The price for a giraffe weevil in Madagascar in 2021 was around 10 USD.
| Biology and health sciences | Beetles (Coleoptera) | Animals |
1850694 | https://en.wikipedia.org/wiki/Drawing%20%28manufacturing%29 | Drawing (manufacturing) | Drawing is a manufacturing process that uses tensile forces to elongate metal, glass, or plastic. As the material is drawn (pulled), it stretches and becomes thinner, achieving a desired shape and thickness. Drawing is classified into two types: sheet metal drawing and wire, bar, and tube drawing. Sheet metal drawing is defined as a plastic deformation over a curved axis. For wire, bar, and tube drawing, the starting stock is drawn through a die to reduce its diameter and increase its length. Drawing is usually performed at room temperature, thus classified as a cold working process; however, drawing may also be performed at higher temperatures to hot work large wires, rods, or hollow tubes in order to reduce forces.
Drawing differs from rolling in that pressure is not applied by the turning action of a mill but instead depends on force applied locally near the area of compression. This means the maximal drawing force is limited by the tensile strength of the material, a fact particularly evident when drawing thin wires.
The starting point of cold drawing is hot-rolled stock of a suitable size.
Metal
Successful drawing depends on the flow and stretch of the material. Steels, copper alloys, and aluminium alloys are commonly drawn metals.
In sheet metal drawing, as a die forms a shape from a flat sheet of metal (the "blank"), the material is forced to move and conform to the die. The flow of material is controlled through pressure applied to the blank and lubrication applied to the die or the blank. If the form moves too easily, wrinkles will occur in the part. To correct this, more pressure or less lubrication is applied to the blank to limit the flow of material and cause the material to stretch or set thin. If too much pressure is applied, the part will become too thin and break. Drawing metal requires finding the correct balance between wrinkles and breaking to achieve a successful part.
Sheet metal drawing becomes deep drawing when the workpiece is longer than its diameter. It is common that the workpiece is also processed using other forming processes, such as piercing, ironing, necking, rolling, and beading. In shallow drawing, the depth of drawing is less than the smallest dimension of the hole.
Bar, tube, and wire drawing all work upon the same principle: the starting stock is drawn through a die to reduce its diameter and increase its length. Usually, the die is mounted on a draw bench. The starting end of the workpiece is narrowed or pointed to get the end through the die. The end is then placed in grips which pull the rest of the workpiece through the die.
Drawing can also be used to cold form a shaped cross-section. Cold drawn cross-sections are more precise and have a better surface finish than hot extruded parts. Inexpensive materials can be used instead of expensive alloys for strength requirements, due to work hardening. Bars or rods that are drawn cannot be coiled; therefore, straight-pull draw benches are used. Chain drives are used to draw workpieces up to . Hydraulic cylinders are used for shorter length workpieces. The reduction in area is usually restricted to between 20% and 50%, because greater reductions would exceed the tensile strength of the material, depending on its ductility. To achieve a certain size or shape, multiple passes through progressively smaller dies and intermediate anneals may be required. Tube drawing is very similar to bar drawing, except the beginning stock is a tube. It is used to decrease the diameter, improve surface finish, and improve dimensional accuracy. A mandrel may or may not be used depending on the specific process used. A floating plug may also be inserted into the inside diameter of the tube to control the wall thickness. Wire drawing has long been used to produce flexible metal wire by drawing the material through a series of dies of decreasing size. These dies are manufactured from a number of materials, the most common being tungsten carbide and diamond.
The cold drawing process for steel bars and wire is as follows:
Lubrication: The surface of the bar or tube is coated with a drawing lubricant such as phosphate or oil to aid cold drawing.
Push pointing: Several inches of the lead end of the bar or tube are reduced in size by swaging or extruding so that the reduced end can pass freely through the drawing die. This is done because the die opening is always smaller than the original bar or coil section.
Cold drawing, process drawing: In this process, the material is drawn at room temperature. The reduced end of the bar or coil, which is smaller than the die opening, is passed through the die where it enters a gripping device of the drawing machine. The drawing machine pulls ("draws") the remaining unreduced section of the bar or coil through the die. The die reduces the cross section of the bar or coil, shapes its profile, and increases its length.
Finished product: The drawn product, which is referred to as "cold drawn" or "cold finished", exhibits a bright or polished finish, increased mechanical properties, improved machining characteristics, and precise and uniform dimensional tolerances.
Multi-pass drawing: The cold drawing of complex shapes or profiles may involve the workpiece being drawn multiple times through progressively smaller die openings in order to produce the desired shape and tolerances. Material is generally annealed between each drawing pass to increase its ductility and remove internal stresses produced during the cold working.
Annealing: This is a thermal treatment generally used to soften the material being drawn; to modify the microstructure, the mechanical properties, and the machining characteristics of the steel; and to remove internal stresses in the product. Depending on the material and desired final characteristics, annealing may be used before, during (between passes), or after the cold drawing operation.
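As a rough illustration of the sizing arithmetic behind the multi-pass step above, the following sketch estimates how many passes are needed to reach a target diameter when each pass is limited to a maximum reduction in area; the 30% per-pass limit and the example diameters are illustrative assumptions, not values taken from this article.

```python
import math

def passes_to_target(d_start_mm: float, d_final_mm: float, max_area_reduction: float = 0.30) -> int:
    """Estimate the number of drawing passes needed to reach a target diameter.

    Reduction in area per pass is r = (A0 - A1) / A0, so one pass at the
    maximum reduction changes the diameter by a factor of sqrt(1 - r).
    The 30% default is an illustrative assumption within the 20-50% range
    quoted above.
    """
    passes = 0
    d = d_start_mm
    while d > d_final_mm:
        d *= math.sqrt(1.0 - max_area_reduction)  # area shrinks by r, diameter by sqrt(1 - r)
        passes += 1
    return passes

# Hypothetical example: drawing an 8.0 mm rod down to 3.0 mm wire
print(passes_to_target(8.0, 3.0))  # -> 6 passes at 30% area reduction per pass
```

In practice, an intermediate anneal would be scheduled between some of these passes, as described in the annealing step above.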
Glass
Similar drawing processes are applied in glassblowing and in making glass optical fiber.
Plastics
Plastic drawing, sometimes referred to as cold drawing, is the same process as used on metal bars, applied to plastics. Plastic drawing is primarily used in manufacturing plastic fibers. The process was discovered by Julian W. Hill in 1930 while trying to make fibers from an early polyester.
It is performed after the material has been "spun" into filaments; by extruding the polymer melt through pores of a spinneret. During this process, the individual polymer chains tend to somewhat align because of viscous flow. These filaments still have an amorphous structure, so they are drawn to align the fibers further, thus increasing crystallinity, tensile strength, and stiffness. This is done on a draw twister machine. For nylon, the fiber is stretched to four times its spun length. The crystals formed during drawing are held together by hydrogen bonds between the amide hydrogens of one chain and the carbonyl oxygens of another chain. Polyethylene terephthalate (PET) sheet is drawn in two dimensions to make BoPET (biaxially-oriented polyethylene terephthalate) with improved mechanical properties.
| Technology | Metallurgy | null |
1851173 | https://en.wikipedia.org/wiki/Hercules%20beetle | Hercules beetle | The Hercules beetle (Dynastes hercules) is a species of rhinoceros beetle native to the rainforests of southern Mexico, Central America, South America, and the Lesser Antilles. It is the longest extant species of beetle in the world, and is also one of the largest flying insects in the world.
Etymology
Dynastes hercules is known for its tremendous strength and is named after Hercules, a hero of classical mythology who is famed for his great strength.
Taxonomy
D. hercules has a complex taxonomic history and has been known by several synonyms. It is in the subfamily Dynastinae (rhinoceros beetles) in the larger family Scarabaeidae (commonly known as scarab beetles). Not counting subspecies of D. hercules, seven other species are recognized in the genus Dynastes.
Subspecies
Several subspecies of D. hercules have been named, though still some uncertainty exists as to the validity of the named taxa.
Dynastes hercules ecuatorianus Ohaus, 1913
Dynastes hercules hercules (Linnaeus, 1758)
Dynastes hercules lichyi Lachaume, 1985
Dynastes hercules morishimai Nagai, 2002
Dynastes hercules occidentalis Lachaume, 1985
Dynastes hercules paschoali Grossi & Arnaud, 1993
Dynastes hercules reidi Chalumeau, 1977 (= baudrii Pinchon, 1976)
Dynastes hercules septentrionalis Lachaume, 1985 (= tuxtlaensis Moron, 1993)
Dynastes hercules takakuwai Nagai, 2002
Dynastes hercules trinidadensis Chalumeau & Reid, 1995 (= bleuzeni Silvestre and Dechambre, 1995)
Description
Adult body sizes (not including the thoracic horn) vary between in length and in width. Male Hercules beetles may reach up to in length (including the horn), making them the longest species of beetle in the world, if jaws and/or horns are included in the measurement. The size of the horn is naturally variable, more so than any variation of the size of legs, wings, or overall body size in the species. This variability results from developmental mechanisms that coincide with genetic predisposition in relation to nutrition, stress, exposure to parasites, and/or physiological conditions.
Dynastes hercules is highly sexually dimorphic, with only males exhibiting the characteristic horns (one on the head, and a much larger one on the prothorax). The body of males is black with the exception of the elytra, which can have shades of olive-green. They have a black suture with sparsely distributed black spots elsewhere on the elytra. They have a slightly iridescent coloration to their elytra, which varies in color between specimens and may be affected by the humidity of the local environment in which they develop. At low humidity the elytra are olive-green or yellow in color, but darken to black at higher humidity due to their hygrochromic properties.
Females of D. hercules have punctured elytra which are usually entirely black, but sometimes have the last quarter of the elytra colored in the same way as the males.
Distribution and habitat
Populations of D. hercules may be found from southern Mexico to Bolivia in mountainous and lowland rain forests. Known populations include the Lesser Antilles, Trinidad and Tobago, Brazil, Ecuador, Colombia, and Peru. Chromosomal analysis has shown that the genus Dynastes originated from South America.
Life cycle
Not much is known about the life cycle in the wild, but much evidence has been gained through observations of captive-bred populations. The mating season for adults typically occurs during the rainy season (July to December). Females have an average gestation period of 30 days from copulation to egg-laying, and may lay up to 100 eggs on the ground or on dead wood. The eggs have an incubation period of approximately 27.7 days before they hatch. Once hatched, the larval stage of the Hercules beetle may last up to two years, during which the larva passes through three developmental stages, known as instars. The larvae have a yellow body with a black head. The larvae can grow up to in length and weigh more than 100 grams. In laboratory conditions at 25 ± 1°C, the first instar stage lasts an average of 50 days, the second stage an average of 56 days, and the third an average of 450 days. After the third instar stage, the pupal stage lasts about 32 days, during which the beetle transitions into an adult. Adult beetles can live for three to six months in captivity.
Diet and behavior
Diet
The larvae of the Hercules beetle are saproxylophagous, meaning that they feed on rotting wood; they reside within this wood during their two-year developmental stage. The adult Hercules beetle feeds on fresh and rotting fruit, along with tree sap. Adults carve bark with their synchronous mandibles to easily access the sap of trees. When these mandibles are closed, a narrow opening is formed which can act like a straw to allow consumption of tree sap. They have been observed feeding on peaches, pears, apples, grapes, bananas, and mangoes in captivity.
Behavior
Within their native rain forest habitats, the adult beetles, which are nocturnal, forage for fruit at night and hide or burrow within the leaf litter during the day. The adult D. hercules beetles are capable of creating a 'huffing' sound, generated by stridulating their abdomen against their elytra to serve as a warning to predators. Like most insects, communication within the species is a mix of chemoreception, sight, and mechanical perception. Experiments on D. hercules have shown that a male placed in the vicinity of a female will immediately orient towards her and seek her out, suggesting chemical communication through strong sexual pheromones.
Combat behavior between males
It has been observed in wild habitat and in captivity that male D. hercules will engage in combat to win possession and mating rights to a female. Male Hercules beetles typically use their large horns to settle mating disputes; these fights can cause significant physical damage to the combatants but may also include possible damage to the female in the process. During fights, the males attempt to grab and pin their rival between the cephalic and thoracic horns to lift and throw them. The successful male wins mating rights with the female, though the beetles remain polygynandrous.
Physical strength
Reports suggest the Hercules beetle is able to carry up to 850 times its body mass. Actual measurements on a much smaller (and relatively stronger: see square-cube law) species of rhinoceros beetle show a carrying capacity only up to 100 times their body mass, at which point they can barely move.
Relationship to humans
Dynastes hercules does not negatively affect human activities, either as an agricultural pest or disease vector. The beetles can be kept as pets. Larval excrement has been shown to contain β-mannanase, a bacterially synthesized enzyme that hydrolyzes hemicellulose and can be used in enzyme-based cleaning products. β-mannanase has been successfully extracted and cloned from larval fecal matter, suggesting that production of bio-additive cleaning products may be feasible.
Relationship to the environment
Dynastes hercules is a beneficial contributor to the rain forest ecosystem, primarily during their larval stage where they are saproxylophagous. Feeding on rotting wood assists with biodegradation and cycling nutrients in the environment.
| Biology and health sciences | Beetles (Coleoptera) | Animals |
252814 | https://en.wikipedia.org/wiki/HSL%20and%20HSV | HSL and HSV | HSL and HSV are the two most common cylindrical-coordinate representations of points in an RGB color model. The two representations rearrange the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the cartesian (cube) representation. Developed in the 1970s for computer graphics applications, HSL and HSV are used today in color pickers, in image editing software, and less commonly in image analysis and computer vision.
HSL stands for hue, saturation, and lightness, and is often also called HLS. HSV stands for hue, saturation, and value, and is also often called HSB (B for brightness). A third model, common in computer vision applications, is HSI, for hue, saturation, and intensity. However, while typically consistent, these definitions are not standardized, and any of these abbreviations might be used for any of these three or several other related cylindrical models. (For technical definitions of these terms, see below.)
In each cylinder, the angle around the central vertical axis corresponds to "hue", the distance from the axis corresponds to "saturation", and the distance along the axis corresponds to "lightness", "value" or "brightness". Note that while "hue" in HSL and HSV refers to the same attribute, their definitions of "saturation" differ dramatically. Because HSL and HSV are simple transformations of device-dependent RGB models, the physical colors they define depend on the colors of the red, green, and blue primaries of the device or of the particular RGB space, and on the gamma correction used to represent the amounts of those primaries. Each unique RGB device therefore has unique HSL and HSV spaces to accompany it, and numerical HSL or HSV values describe a different color for each basis RGB space.
Both of these representations are used widely in computer graphics, and one or the other of them is often more convenient than RGB, but both are also criticized for not adequately separating color-making attributes, or for their lack of perceptual uniformity. Other, more computationally intensive models, such as CIELAB or CIECAM02, are said to better achieve these goals.
Basic principle
HSL and HSV are both cylindrical geometries, with hue, their angular dimension, starting at the red primary at 0°, passing through the green primary at 120° and the blue primary at 240°, and then wrapping back to red at 360°. In each geometry, the central vertical axis comprises the neutral, achromatic, or gray colors, ranging from white at lightness 1 (value 1) at the top to black at lightness 0 (value 0) at the bottom.
In both geometries, the additive primary and secondary colors – red, yellow, green, cyan, blue and magenta – and linear mixtures between adjacent pairs of them, sometimes called pure colors, are arranged around the outside edge of the cylinder with saturation 1. These saturated colors have lightness 0.5 in HSL, while in HSV they have value 1. Mixing these pure colors with black – producing so-called shades – leaves saturation unchanged. In HSL, saturation is also unchanged by tinting with white, and only mixtures with both black and white – called tones – have saturation less than 1. In HSV, tinting alone reduces saturation.
Because these definitions of saturation – in which very dark (in both models) or very light (in HSL) near-neutral colors are considered fully saturated (for instance, from the bottom right in the sliced HSL cylinder or from the top right) – conflict with the intuitive notion of color purity, often a conic or biconic solid is drawn instead (), with what this article calls chroma as its radial dimension (equal to the range of the RGB values), instead of saturation (where the saturation is equal to the chroma over the maximum chroma in that slice of the (bi)cone). Confusingly, such diagrams usually label this radial dimension "saturation", blurring or erasing the distinction between saturation and chroma. As described below, computing chroma is a helpful step in the derivation of each model. Because such an intermediate model – with dimensions hue, chroma, and HSV value or HSL lightness – takes the shape of a cone or bicone, HSV is often called the "hexcone model" while HSL is often called the "bi-hexcone model" ().
Motivation
Most televisions, computer displays, and projectors produce colors by combining red, green, and blue light in varying intensities – the so-called RGB additive primary colors. The resulting mixtures in RGB color space can reproduce a wide variety of colors (called a gamut); however, the relationship between the constituent amounts of red, green, and blue light and the resulting color is unintuitive, especially for inexperienced users, and for users familiar with subtractive color mixing of paints or traditional artists' models based on tints and shades (). Furthermore, neither additive nor subtractive color models define color relationships the same way the human eye does.
For example, imagine we have an RGB display whose color is controlled by three sliders ranging from , one controlling the intensity of each of the red, green, and blue primaries. If we begin with a relatively colorful orange , with sRGB values , , , and want to reduce its colorfulness by half to a less saturated orange , we would need to drag the sliders to decrease R by 31, increase G by 24, and increase B by 59, as pictured below.
Beginning in the 1950s, color television broadcasts used a compatible color system whereby "luminance" and "chrominance" signals were encoded separately, so that existing unmodified black-and-white televisions could still receive color broadcasts and show a monochrome image.
In an attempt to accommodate more traditional and intuitive color mixing models, computer graphics pioneers at PARC and NYIT introduced the HSV model for computer display technology in the mid-1970s, formally described by Alvy Ray Smith in the August 1978 issue of Computer Graphics. In the same issue, Joblove and Greenberg described the HSL model – whose dimensions they labeled hue, relative chroma, and intensity – and compared it to HSV (). Their model was based more upon how colors are organized and conceptualized in human vision in terms of other color-making attributes, such as hue, lightness, and chroma; as well as upon traditional color mixing methods – e.g., in painting – that involve mixing brightly colored pigments with black or white to achieve lighter, darker, or less colorful colors.
The following year, 1979, at SIGGRAPH, Tektronix introduced graphics terminals using HSL for color designation, and the Computer Graphics Standards Committee recommended it in their annual status report (). These models were useful not only because they were more intuitive than raw RGB values, but also because the conversions to and from RGB were extremely fast to compute: they could run in real time on the hardware of the 1970s. Consequently, these models and similar ones have become ubiquitous throughout image editing and graphics software since then. Some of their uses are described below.
Formal derivation
Color-making attributes
The dimensions of the HSL and HSV geometries – simple transformations of the not-perceptually-based RGB model – are not directly related to the photometric color-making attributes of the same names, as defined by scientists such as the CIE or ASTM. Nonetheless, it is worth reviewing those definitions before leaping into the derivation of our models. For the definitions of color-making attributes which follow, see:
Hue The "attribute of a visual sensation according to which an area appears to be similar to one of the perceived colors: red, yellow, green, and blue, or to a combination of two of them".
Radiance (Le,Ω) The radiant power of light passing through a particular surface per unit solid angle per unit projected area, measured in SI units in watt per steradian per square metre ().
Luminance (Y or Lv,Ω) The radiance weighted by the effect of each wavelength on a typical human observer, measured in SI units in candela per square meter (). Often the term luminance is used for the relative luminance, Y/Yn, where Yn is the luminance of the reference white point.
Luma (Y′) The weighted sum of gamma-corrected R′, G′, and B′ values, used in Y′CbCr, for JPEG compression and video transmission.
Brightness (or value) The "attribute of a visual sensation according to which an area appears to emit more or less light".
Lightness The "brightness relative to the brightness of a similarly illuminated white".
Colorfulness The "attribute of a visual sensation according to which the perceived color of an area appears to be more or less chromatic".
Chroma The "colorfulness relative to the brightness of a similarly illuminated white".
Saturation The "colorfulness of a stimulus relative to its own brightness".
Brightness and colorfulness are absolute measures, which usually describe the spectral distribution of light entering the eye, while lightness and chroma are measured relative to some white point, and are thus often used for descriptions of surface colors, remaining roughly constant even as brightness and colorfulness change with different illumination. Saturation can be defined as either the ratio of colorfulness to brightness, or that of chroma to lightness.
General approach
HSL, HSV, and related models can be derived via geometric strategies, or can be thought of as specific instances of a "generalized LHS model". The HSL and HSV model-builders took an RGB cube – with constituent amounts of red, green, and blue light in a color denoted – and tilted it on its corner, so that black rested at the origin with white directly above it along the vertical axis, then measured the hue of the colors in the cube by their angle around that axis, starting with red at 0°. Then they came up with a characterization of brightness/value/lightness, and defined saturation to range from 0 along the axis to 1 at the most colorful point for each pair of other parameters.
Hue and chroma
In each of our models, we calculate both hue and what this article will call chroma, after Joblove and Greenberg (1978), in the same way – that is, the hue of a color has the same numerical values in all of these models, as does its chroma. If we take our tilted RGB cube, and project it onto the "chromaticity plane" perpendicular to the neutral axis, our projection takes the shape of a hexagon, with red, yellow, green, cyan, blue, and magenta at its corners (). Hue is roughly the angle of the vector to a point in the projection, with red at 0°, while chroma is roughly the distance of the point from the origin.
More precisely, both hue and chroma in this model are defined with respect to the hexagonal shape of the projection. The chroma is the proportion of the distance from the origin to the edge of the hexagon. In the lower part of the adjacent diagram, this is the ratio of lengths , or alternatively the ratio of the radii of the two hexagons. This ratio is the difference between the largest and smallest values among R, G, or B in a color. To make our definitions easier to write, we'll define these maximum, minimum, and chroma component values as M, m, and C, respectively.
To understand why chroma can be written as M − m, notice that any neutral color, with R = G = B, projects onto the origin and so has 0 chroma. Thus if we add or subtract the same amount from all three of R, G, and B, we move vertically within our tilted cube, and do not change the projection. Therefore, any two colors that differ only by such a uniform offset project onto the same point, and have the same chroma. The chroma of a color with one of its components equal to zero is simply the maximum of the other two components. This chroma is M in the particular case of a color with a zero component, and M − m in general.
The hue is the proportion of the distance around the edge of the hexagon which passes through the projected point, originally measured on the range but now typically measured in degrees . For points which project onto the origin in the chromaticity plane (i.e., grays), hue is undefined. Mathematically, this definition of hue is written piecewise:
Sometimes, neutral colors (i.e. with ) are assigned a hue of 0° for convenience of representation.
These definitions amount to a geometric warping of hexagons into circles: each side of the hexagon is mapped linearly onto a 60° arc of the circle (). After such a transformation, hue is precisely the angle around the origin and chroma the distance from the origin: the angle and magnitude of the vector pointing to a color.
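The following minimal sketch implements the hue and chroma just described for an RGB triple in the range [0, 1]; the piecewise hue formula is the standard hexagonal one implied above, and the function name is illustrative.

```python
def hue_chroma(r: float, g: float, b: float):
    """Hexagon-based hue (in degrees) and chroma for an RGB triple in [0, 1].

    M and m are the largest and smallest components; chroma is C = M - m.
    Hue is undefined (None here) for neutral colors, where C == 0.
    """
    M, m = max(r, g, b), min(r, g, b)
    C = M - m
    if C == 0:
        return None, 0.0               # gray: hue undefined, chroma zero
    if M == r:
        h_prime = ((g - b) / C) % 6
    elif M == g:
        h_prime = (b - r) / C + 2
    else:                              # M == b
        h_prime = (r - g) / C + 4
    return 60.0 * h_prime, C           # each hexagon side spans 60 degrees of hue

print(hue_chroma(1.0, 0.5, 0.0))       # an orange: (30.0, 1.0)
```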
Sometimes for image analysis applications, this hexagon-to-circle transformation is skipped, and hue and chroma (we'll denote these H2 and C2) are defined by the usual cartesian-to-polar coordinate transformations (). The easiest way to derive those is via a pair of cartesian chromaticity coordinates which we'll call α and β:
(The atan2 function, a "two-argument arctangent", computes the angle from a cartesian coordinate pair.)
Notice that these two definitions of hue (H and H2) nearly coincide, with a maximum difference between them for any color of about 1.12° – which occurs at twelve particular hues – and with H = H2 at every multiple of 30°. The two definitions of chroma (C and C2) differ more substantially: they are equal at the corners of our hexagon, but at points halfway between two corners they differ by about 13.4%.
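A sketch of this alternative polar definition follows, assuming the commonly used chromaticity coordinates α = (2R − G − B)/2 and β = (√3/2)(G − B); these particular expressions are stated here as assumed standard forms rather than quoted from the text above.

```python
import math

def hue_chroma_polar(r: float, g: float, b: float):
    """Circle-based hue H2 (degrees) and chroma C2, computed from the
    cartesian chromaticity coordinates alpha and beta via atan2."""
    alpha = 0.5 * (2.0 * r - g - b)
    beta = math.sqrt(3.0) / 2.0 * (g - b)
    h2 = math.degrees(math.atan2(beta, alpha)) % 360.0
    c2 = math.hypot(alpha, beta)
    return h2, c2

# The same orange as before: H2 is again 30 degrees, but C2 is about 0.866
# rather than 1.0, illustrating the chroma discrepancy noted above.
print(hue_chroma_polar(1.0, 0.5, 0.0))
```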
Lightness
While the definition of hue is relatively uncontroversial – it roughly satisfies the criterion that colors of the same perceived hue should have the same numerical hue – the definition of a lightness or value dimension is less obvious: there are several possibilities depending on the purpose and goals of the representation. Here are four of the most common (; three of these are also shown in ):
The simplest definition is just the arithmetic mean, i.e. average, of the three components, in the HSI model called intensity (). This is simply the projection of a point onto the neutral axis – the vertical height of a point in our tilted cube. The advantage is that, together with Euclidean-distance calculations of hue and chroma, this representation preserves distances and angles from the geometry of the RGB cube.
In the HSV "hexcone" model, value is defined as the largest component of a color, our M above (). This places all three primaries, and also all of the "secondary colors" – cyan, yellow, and magenta – into a plane with white, forming a hexagonal pyramid out of the RGB cube.
In the HSL "bi-hexcone" model, lightness is defined as the average of the largest and smallest color components (), i.e. the mid-range of the RGB components. This definition also puts the primary and secondary colors into a plane, but a plane passing halfway between white and black. The resulting color solid is a double-cone similar to Ostwald's, shown above.
A more perceptually relevant alternative is to use luma, Y′, as a lightness dimension. Luma is the weighted average of gamma-corrected R, G, and B, based on their contribution to perceived lightness, long used as the monochromatic dimension in color television broadcast. Different sets of primaries call for different coefficients: for sRGB, the Rec. 709 primaries yield the HDTV coefficients below, digital NTSC uses the Rec. 601 (SDTV) coefficients, and some other primaries in use result in still other coefficients:
Y′601 = 0.299 R′ + 0.587 G′ + 0.114 B′ (SDTV)
Y′Adobe = 0.212 R′ + 0.701 G′ + 0.087 B′ (Adobe)
Y′709 = 0.2126 R′ + 0.7152 G′ + 0.0722 B′ (HDTV)
Y′2020 = 0.2627 R′ + 0.6780 G′ + 0.0593 B′ (UHDTV, HDR)
All four of these leave the neutral axis alone. That is, for colors with R = G = B, any of the four formulations yields a lightness equal to the value of R, G, or B.
For a graphical comparison, see fig. 13 below.
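The four formulations can be compared directly in a short sketch, assuming gamma-corrected components in [0, 1]; the Rec. 709 luma coefficients used here are the widely published values listed above.

```python
def lightness_variants(r: float, g: float, b: float) -> dict:
    """Four common 'lightness' definitions for an RGB triple in [0, 1]."""
    M, m = max(r, g, b), min(r, g, b)
    return {
        "I (HSI intensity, mean)": (r + g + b) / 3.0,
        "V (HSV value, max)": M,
        "L (HSL lightness, mid-range)": (M + m) / 2.0,
        "Y' (Rec. 709 luma)": 0.2126 * r + 0.7152 * g + 0.0722 * b,
    }

# For a neutral gray all four formulations agree, as noted above.
print(lightness_variants(0.5, 0.5, 0.5))
```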
Saturation
When encoding colors in a hue/lightness/chroma or hue/value/chroma model (using the definitions from the previous two sections), not all combinations of lightness (or value) and chroma are meaningful: that is, half of the colors denotable using H, C, and V (or L) fall outside the RGB gamut (the gray parts of the slices in figure 14). The creators of these models considered this a problem for some uses. For example, in a color selection interface with two of the dimensions in a rectangle and the third on a slider, half of that rectangle is made of unused space. Now imagine we have a slider for lightness: the user's intent when adjusting this slider is potentially ambiguous: how should the software deal with out-of-gamut colors? Or conversely, if the user has selected as colorful as possible a dark purple and then shifts the lightness slider upward, what should be done: would the user prefer to see a lighter purple still as colorful as possible for the given hue and lightness, or a lighter purple of exactly the same chroma as the original color?
To solve problems such as these, the HSL and HSV models scale the chroma so that it always fits into the range for every combination of hue and lightness or value, calling the new attribute saturation in both cases (fig. 14). To calculate either, simply divide the chroma by the maximum chroma for that value or lightness.
The HSI model commonly used for computer vision, which takes H2 as a hue dimension and the component average I ("intensity") as a lightness dimension, does not attempt to "fill" a cylinder by its definition of saturation. Instead of presenting color choice or modification interfaces to end users, the goal of HSI is to facilitate separation of shapes in an image. Saturation is therefore defined in line with the psychometric definition: chroma relative to lightness (). See the Use in image analysis section of this article.
Using the same name for these three different definitions of saturation leads to some confusion, as the three attributes describe substantially different color relationships; in HSV and HSI, the term roughly matches the psychometric definition, of a chroma of a color relative to its own lightness, but in HSL it does not come close. Even worse, the word saturation is also often used for one of the measurements we call chroma above (C or C2).
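A sketch of the chroma-rescaling step follows, assuming the usual closed forms S_HSV = C / V (for V > 0) and S_HSL = C / (1 − |2L − 1|) (for L strictly between 0 and 1); these formulas are consistent with "divide the chroma by the maximum chroma for that value or lightness" but are stated here as assumptions.

```python
def saturations(r: float, g: float, b: float):
    """HSV and HSL saturation for an RGB triple in [0, 1]."""
    M, m = max(r, g, b), min(r, g, b)
    c = M - m                      # chroma
    v = M                          # HSV value
    l = (M + m) / 2.0              # HSL lightness
    s_hsv = 0.0 if v == 0 else c / v
    s_hsl = 0.0 if l in (0.0, 1.0) else c / (1.0 - abs(2.0 * l - 1.0))
    return s_hsv, s_hsl

# A pale orange: the same chroma rescales to quite different saturations
# in the two models (roughly 0.22 in HSV versus 0.5 in HSL).
print(saturations(0.9, 0.8, 0.7))
```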
Examples
All parameter values shown below are given as values in the interval , except those for H and H2, which are in the interval .
Use in end-user software
The original purpose of HSL and HSV and similar models, and their most common current application, is in color selection tools. At their simplest, some such color pickers provide three sliders, one for each attribute. Most, however, show a two-dimensional slice through the model, along with a slider controlling which particular slice is shown. The latter type of GUI exhibits great variety, because of the choice of cylinders, hexagonal prisms, or cones/bicones that the models suggest (see the diagram near the top of the page). Several color choosers from the 1990s are shown to the right, most of which have remained nearly unchanged in the intervening time: today, nearly every computer color chooser uses HSL or HSV, at least as an option. Some more sophisticated variants are designed for choosing whole sets of colors, basing their suggestions of compatible colors on the HSL or HSV relationships between them.
Most web applications needing color selection also base their tools on HSL or HSV, and pre-packaged open source color choosers exist for most major web front-end frameworks. The CSS 3 specification allows web authors to specify colors for their pages directly with HSL coordinates.
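As an illustration of how such a tool might hand HSL coordinates to a stylesheet, the sketch below converts HSL to an RGB hex string using Python's standard colorsys module (which orders its arguments hue, lightness, saturation); the example color is hypothetical and the snippet is not part of the CSS specification.

```python
import colorsys

def hsl_to_hex(h_deg: float, s: float, l: float) -> str:
    """Convert HSL (hue in degrees, saturation and lightness in [0, 1])
    to an sRGB hex string such as '#3d8bd9'."""
    # colorsys uses the HLS argument order and expects hue in [0, 1]
    r, g, b = colorsys.hls_to_rgb(h_deg / 360.0, l, s)
    return "#{:02x}{:02x}{:02x}".format(round(r * 255), round(g * 255), round(b * 255))

print(hsl_to_hex(210.0, 0.67, 0.545))  # a medium blue, approximately '#3d8bd9'
```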
HSL and HSV are sometimes used to define gradients for data visualization, as in maps or medical images. For example, the popular GIS program ArcGIS historically applied customizable HSV-based gradients to numerical geographical data.
Image editing software also commonly includes tools for adjusting colors with reference to HSL or HSV coordinates, or to coordinates in a model based on the "intensity" or luma defined above. In particular, tools with a pair of "hue" and "saturation" sliders are commonplace, dating to at least the late-1980s, but various more complicated color tools have also been implemented. For instance, the Unix image viewer and color editor xv allowed six user-definable hue (H) ranges to be rotated and resized, included a dial-like control for saturation (SHSV), and a curves-like interface for controlling value (V) – see fig. 17. The image editor Picture Window Pro includes a "color correction" tool which affords complex remapping of points in a hue/saturation plane relative to either HSL or HSV space.
Video editors also use these models. For example, both Avid and Final Cut Pro include color tools based on HSL or a similar geometry for use adjusting the color in video. With the Avid tool, users pick a vector by clicking a point within the hue/saturation circle to shift all the colors at some lightness level (shadows, mid-tones, highlights) by that vector.
Since version 4.0, Adobe Photoshop's "Luminosity", "Hue", "Saturation", and "Color" blend modes composite layers using a luma/chroma/hue color geometry. These have been copied widely, but several imitators use the HSL (e.g. PhotoImpact, Paint Shop Pro) or HSV geometries instead.
Use in image analysis
HSL, HSV, HSI, or related models are often used in computer vision and image analysis for feature detection or image segmentation. The applications of such tools include object detection, for instance in robot vision; object recognition, for instance of faces, text, or license plates; content-based image retrieval; and analysis of medical images.
For the most part, computer vision algorithms used on color images are straightforward extensions to algorithms designed for grayscale images, for instance k-means or fuzzy clustering of pixel colors, or canny edge detection. At the simplest, each color component is separately passed through the same algorithm. It is important, therefore, that the features of interest can be distinguished in the color dimensions used. Because the R, G, and B components of an object's color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant.
Starting in the late 1970s, transformations like HSV or HSI were used as a compromise between effectiveness for segmentation and computational complexity. They can be thought of as similar in approach and intent to the neural processing used by human color vision, without agreeing in particulars: if the goal is object detection, roughly separating hue, lightness, and chroma or saturation is effective, but there is no particular reason to strictly mimic human color response. John Kender's 1976 master's thesis proposed the HSI model. Ohta et al. (1980) instead used a model made up of dimensions similar to those we have called I, α, and β. In recent years, such models have continued to see wide use, as their performance compares favorably with more complex models, and their computational simplicity remains compelling.
Disadvantages
While HSL, HSV, and related spaces serve well enough to, for instance, choose a single color, they ignore much of the complexity of color appearance. Essentially, they trade off perceptual relevance for computation speed, from a time in computing history (high-end 1970s graphics workstations, or mid-1990s consumer desktops) when more sophisticated models would have been too computationally expensive.
HSL and HSV are simple transformations of RGB which preserve symmetries in the RGB cube unrelated to human perception, such that its R, G, and B corners are equidistant from the neutral axis, and equally spaced around it. If we plot the RGB gamut in a more perceptually-uniform space, such as CIELAB (see below), it becomes immediately clear that the red, green, and blue primaries do not have the same lightness or chroma, or evenly spaced hues. Furthermore, different RGB displays use different primaries, and so have different gamuts. Because HSL and HSV are defined purely with reference to some RGB space, they are not absolute color spaces: to specify a color precisely requires reporting not only HSL or HSV values, but also the characteristics of the RGB space they are based on, including the gamma correction in use.
If we take an image and extract the hue, saturation, and lightness or value components, and then compare these to the components of the same name as defined by color scientists, we can quickly see the difference, perceptually. For example, examine the following images of a fire breather (). The original is in the sRGB colorspace. CIELAB L* is a CIE-defined achromatic lightness quantity (dependent solely on the perceptually achromatic luminance Y, but not the mixed-chromatic components X or Z, of the CIEXYZ colorspace from which the sRGB colorspace itself is derived), and it is plain that this appears similar in perceptual lightness to the original color image. Luma is roughly similar, but differs somewhat at high chroma, where it deviates most from depending solely on the true achromatic luminance (Y, or equivalently L*) and is influenced by the colorimetric chromaticity (x,y, or equivalently, a*,b* of CIELAB). HSL L and HSV V, by contrast, diverge substantially from perceptual lightness.
{{multiple image
| align = center
| image1 = Fire breathing 2 Luc Viatour.jpg
| width1 = 220
| alt1 = A full-color image shows a high-contrast and quite dramatic scene of a fire breather with a large orange-yellow flame extending from his lips. He wears dark but colorful orange-red clothing.
| caption1 = Fig. 13a. Color photograph (sRGB colorspace).
| image2 = Fire-breather CIELAB L*.jpg
| width2 = 220
| alt2 = A grayscale image showing the CIELAB lightness component of the photograph appears to be a faithful rendering of the scene: it looks roughly like a black-and-white photograph taken on panchromatic film would look, with clear detail in the flame, which is much brighter than the man's outfit or the background.
| caption2 = Fig. 13b. CIELAB L* (further transformed back to sRGB for consistent display).
| image3 = Fire-breather 601 Luma Y'.jpg
| width3 = 220
| alt3 = A grayscale image showing the luma appears roughly similar to the CIELAB lightness image, but is a bit brighter in areas which were originally very colorful.
| caption3 = Fig. 13c. Rec. 601 luma {{nobr|Y′}}.
| footer =
}}
Though none of the dimensions in these spaces match their perceptual analogs, the value of HSV and the saturation of HSL are particular offenders. In HSV, the blue primary and white are held to have the same value, even though perceptually the blue primary has somewhere around 10% of the luminance of white (the exact fraction depends on the particular RGB primaries in use). In HSL, a mix of 100% red, 100% green, 90% blue – that is, a very light yellow – is held to have the same saturation as the green primary even though the former color has almost no chroma or saturation by the conventional psychometric definitions. Such perversities led Cynthia Brewer, expert in color scheme choices for maps and information displays, to tell the American Statistical Association:
If these problems make HSL and HSV problematic for choosing colors or color schemes, they make them much worse for image adjustment. HSL and HSV, as Brewer mentioned, confound perceptual color-making attributes, so that changing any dimension results in non-uniform changes to all three perceptual dimensions, and distorts all of the color relationships in the image. For instance, rotating the hue of a pure dark blue toward green will also reduce its perceived chroma and increase its perceived lightness (the result is grayer and lighter), but the same hue rotation will have the opposite impact on the lightness and chroma of a lighter bluish-green (the result is more colorful and slightly darker). In the example below (), image (a) is the original photograph of a green turtle. In image (b), we have rotated the hue (H) of each color by , while keeping HSV value and saturation or HSL lightness and saturation constant. In image (c), we make the same rotation to the HSL/HSV hue of each color, but then we force the CIELAB lightness (L*, a decent approximation of perceived lightness) to remain constant. Notice how the hue-shifted middle version without such a correction dramatically changes the perceived lightness relationships between colors in the image. In particular, the turtle's shell is much darker and has less contrast, and the background water is much lighter. Image (d) uses CIELAB to hue shift; the difference from (c) demonstrates the errors in hue and saturation.
Because hue is a circular quantity, represented numerically with a discontinuity at 360°, it is difficult to use in statistical computations or quantitative comparisons: analysis requires the use of circular statistics. Furthermore, hue is defined piecewise, in 60° chunks, where the relationship of lightness, value, and chroma to R, G, and B depends on the hue chunk in question. This definition introduces discontinuities, corners which can plainly be seen in horizontal slices of HSL or HSV.
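To make the circular-statistics point concrete, here is a minimal sketch (an illustration, not a method from the sources above) comparing a naive arithmetic mean of hue angles with the circular (vector) mean across the 0°/360° seam.

```python
import math

def circular_mean_deg(angles):
    """Mean direction of angles in degrees, computed as a vector sum on the unit circle."""
    x = sum(math.cos(math.radians(a)) for a in angles)
    y = sum(math.sin(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(y, x)) % 360.0

hues = [350.0, 10.0]             # two reddish hues straddling the discontinuity
print(sum(hues) / len(hues))     # 180.0 -- the naive mean lands on cyan
print(circular_mean_deg(hues))   # 0.0   -- the circular mean stays on red
```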
Charles Poynton, digital video expert, lists the above problems with HSL and HSV in his Color FAQ, and concludes that:
Other cylindrical-coordinate color models
The creators of HSL and HSV were far from the first to imagine colors fitting into conic or spherical shapes, with neutrals running from black to white in a central axis, and hues corresponding to angles around that axis. Similar arrangements date back to the 18th century, and continue to be developed in the most modern and scientific models.
Color conversion formulae
To convert from HSL or HSV to RGB, we essentially invert the steps listed above (as before, H′ = H / 60°). First, we compute chroma, by multiplying saturation by the maximum chroma for a given lightness or value. Next, we find the point on one of the bottom three faces of the RGB cube which has the same hue and chroma as our color (and therefore projects onto the same point in the chromaticity plane). Finally, we add equal amounts of R, G, and B to reach the proper lightness or value.
To RGB
HSL to RGB
Given a color with hue H ∈ [0°, 360°), saturation S ∈ [0, 1], and lightness L ∈ [0, 1], we first find chroma:
C = (1 − |2L − 1|) × S
Then we can find a point (R₁, G₁, B₁) along the bottom three faces of the RGB cube, with the same hue and chroma as our color (using the intermediate value X for the second largest component of this color):
H′ = H / 60°
X = C × (1 − |H′ mod 2 − 1|)
(R₁, G₁, B₁) = (C, X, 0) if 0 ≤ H′ < 1; (X, C, 0) if 1 ≤ H′ < 2; (0, C, X) if 2 ≤ H′ < 3; (0, X, C) if 3 ≤ H′ < 4; (X, 0, C) if 4 ≤ H′ < 5; (C, 0, X) if 5 ≤ H′ < 6
In the above, the notation H′ mod 2 refers to the remainder of the Euclidean division of H′ by 2; H′ is not necessarily an integer.
When H′ is an integer, the "neighbouring" formula would yield the same result, as X = 0 or X = C, as appropriate.
Finally, we can find R, G, and B by adding the same amount m = L − C/2 to each component, to match lightness:
(R, G, B) = (R₁ + m, G₁ + m, B₁ + m)
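A minimal Python sketch of these HSL-to-RGB steps (chroma, the intermediate value X, then the offset m = L − C/2), with hue in degrees and the other quantities in [0, 1]:

```python
def hsl_to_rgb(h, s, l):
    c = (1.0 - abs(2.0 * l - 1.0)) * s        # chroma
    hp = (h % 360.0) / 60.0                   # H' = H / 60 degrees
    x = c * (1.0 - abs(hp % 2.0 - 1.0))       # second-largest component
    if   hp < 1: r1, g1, b1 = c, x, 0.0
    elif hp < 2: r1, g1, b1 = x, c, 0.0
    elif hp < 3: r1, g1, b1 = 0.0, c, x
    elif hp < 4: r1, g1, b1 = 0.0, x, c
    elif hp < 5: r1, g1, b1 = x, 0.0, c
    else:        r1, g1, b1 = c, 0.0, x
    m = l - c / 2.0                           # add equal amounts to match lightness
    return r1 + m, g1 + m, b1 + m

print(hsl_to_rgb(240.0, 1.0, 0.5))  # (0.0, 0.0, 1.0) -- the blue primary
```

Python's standard colorsys.hls_to_rgb performs the same conversion, with hue given as a fraction of a turn and the lightness argument placed before saturation.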
HSL to RGB alternative
The polygonal piecewise functions can be somewhat simplified by clever use of minimum and maximum values as well as the remainder operation.
Given a color with hue H ∈ [0°, 360°), saturation S ∈ [0, 1], and lightness L ∈ [0, 1], we first define the function:
f(n) = L − a · max(−1, min(k − 3, 9 − k, 1))
where k = (n + H/30°) mod 12 and a = S · min(L, 1 − L).
And the output R, G, B values (from f) are:
(R, G, B) = (f(0), f(8), f(4))
The above alternative formulas allow for shorter implementations. Here the mod operation is a real-valued remainder, so it preserves fractional parts; for example, 7.5 mod 12 = 7.5 and 13.5 mod 12 = 1.5.
The base shape is constructed as follows: min(k − 3, 9 − k) is a "triangle" whose values are greater than or equal to −1 between k = 2 and k = 10, with its highest point at k = 6. Clipping with min(…, 1) changes values bigger than 1 to 1, and clipping with max(…, −1) changes values less than −1 to −1. At this point, we get something similar to the red shape from fig. 24 after a vertical flip (where the maximum is 1 and the minimum is −1). The R, G, B functions of H use this shape transformed in the following way: modulo-shifted along k (by n, differently for R, G, B), scaled (by a), and offset (by L).
We observe several shape properties of f (Fig. 24 can help to get an intuition about them).
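A minimal sketch of this alternative form, with f, k, and a as defined above (hue in degrees, S and L in [0, 1]); it agrees with the piecewise version:

```python
def hsl_to_rgb_alt(h, s, l):
    a = s * min(l, 1.0 - l)
    def f(n):
        k = (n + h / 30.0) % 12.0   # real-valued remainder, fractional parts kept
        return l - a * max(-1.0, min(k - 3.0, 9.0 - k, 1.0))
    return f(0), f(8), f(4)

print(hsl_to_rgb_alt(240.0, 1.0, 0.5))  # (0.0, 0.0, 1.0) -- matches the piecewise form
```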
HSV to RGB
Given an HSV color with hue H ∈ [0°, 360°), saturation S ∈ [0, 1], and value V ∈ [0, 1], we can use the same strategy. First, we find chroma:
C = V × S
Then we can, again, find a point (R₁, G₁, B₁) along the bottom three faces of the RGB cube, with the same hue and chroma as our color, using the intermediate value X = C × (1 − |H′ mod 2 − 1|) for the second largest component and the same piecewise assignment of (C, X, 0), (X, C, 0), (0, C, X), (0, X, C), (X, 0, C), or (C, 0, X) to (R₁, G₁, B₁) according to the integer part of H′ = H / 60°.
As before, when H′ is an integer, "neighbouring" formulas would yield the same result.
Finally, we can find R, G, and B by adding the same amount m = V − C to each component, to match value:
(R, G, B) = (R₁ + m, G₁ + m, B₁ + m)
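A minimal Python sketch of the HSV-to-RGB steps (chroma C = V × S, the same intermediate X, and the offset m = V − C); it matches Python's standard colorsys.hsv_to_rgb, which takes hue as a fraction of a turn rather than degrees:

```python
def hsv_to_rgb(h, s, v):
    c = v * s                                 # chroma
    hp = (h % 360.0) / 60.0
    x = c * (1.0 - abs(hp % 2.0 - 1.0))
    if   hp < 1: r1, g1, b1 = c, x, 0.0
    elif hp < 2: r1, g1, b1 = x, c, 0.0
    elif hp < 3: r1, g1, b1 = 0.0, c, x
    elif hp < 4: r1, g1, b1 = 0.0, x, c
    elif hp < 5: r1, g1, b1 = x, 0.0, c
    else:        r1, g1, b1 = c, 0.0, x
    m = v - c                                 # add equal amounts to match value
    return r1 + m, g1 + m, b1 + m

print(hsv_to_rgb(60.0, 1.0, 1.0))  # (1.0, 1.0, 0.0) -- yellow
```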
HSV to RGB alternative
Given a color with hue H ∈ [0°, 360°), saturation S ∈ [0, 1], and value V ∈ [0, 1], first we define the function:
f(n) = V − V S · max(0, min(k, 4 − k, 1))
where k = (n + H/60°) mod 6.
And the output R, G, B values (from f) are:
(R, G, B) = (f(5), f(3), f(1))
The above alternative, equivalent formulas allow a shorter implementation. As in the HSL case, the mod operation here is a real-valued remainder that preserves fractional parts. The base shape is constructed as follows: min(k, 4 − k) is a "triangle" whose non-negative values start at k = 0, peak at k = 2, and end at k = 4; values bigger than one are then clipped to one by min(…, 1), and negative values are clipped to zero by max(0, …). This gives something similar to the green shape from Fig. 24 (whose maximum value is 1 and minimum value is 0). The R, G, B functions of H use this shape transformed in the following way: modulo-shifted along k (by n, differently for R, G, B), scaled (by V S), and offset (by V). We observe several shape properties of f (Fig. 24 can help to get an intuition about them).
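A minimal sketch of this alternative HSV form, with f and k as defined above (hue in degrees, S and V in [0, 1]):

```python
def hsv_to_rgb_alt(h, s, v):
    def f(n):
        k = (n + h / 60.0) % 6.0    # real-valued remainder, fractional parts kept
        return v - v * s * max(0.0, min(k, 4.0 - k, 1.0))
    return f(5), f(3), f(1)

print(hsv_to_rgb_alt(60.0, 1.0, 1.0))  # (1.0, 1.0, 0.0) -- matches the piecewise form
```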
HSI to RGB
Given an HSI color with hue H ∈ [0°, 360°), saturation S ∈ [0, 1], and intensity I ∈ [0, 1], we can use the same strategy, in a slightly different order:
H′ = H / 60°
Z = 1 − |H′ mod 2 − 1|
C = 3 I S / (1 + Z)
X = C Z
where C is the chroma.
Then we can, again, find a point (R₁, G₁, B₁) along the bottom three faces of the RGB cube, with the same hue and chroma as our color, using the same piecewise assignment as before (with the intermediate value X for the second largest component of this color).
Overlap (when H′ is an integer) occurs because two ways to calculate the value are equivalent: X = 0 or X = C, as appropriate.
Finally, we can find R, G, and B by adding the same amount m = I (1 − S) to each component, to match intensity:
(R, G, B) = (R₁ + m, G₁ + m, B₁ + m)
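A minimal Python sketch of this HSI-to-RGB strategy (hue in degrees, S and I in [0, 1]); note that some (H, S, I) combinations lie outside the RGB cube, in which case a component can exceed 1:

```python
def hsi_to_rgb(h, s, i):
    hp = (h % 360.0) / 60.0
    z = 1.0 - abs(hp % 2.0 - 1.0)
    c = 3.0 * i * s / (1.0 + z)               # chroma
    x = c * z
    if   hp < 1: r1, g1, b1 = c, x, 0.0
    elif hp < 2: r1, g1, b1 = x, c, 0.0
    elif hp < 3: r1, g1, b1 = 0.0, c, x
    elif hp < 4: r1, g1, b1 = 0.0, x, c
    elif hp < 5: r1, g1, b1 = x, 0.0, c
    else:        r1, g1, b1 = c, 0.0, x
    m = i * (1.0 - s)                         # add equal amounts to match intensity
    return r1 + m, g1 + m, b1 + m

print(hsi_to_rgb(240.0, 1.0, 1.0 / 3.0))  # ~(0.0, 0.0, 1.0) -- the blue primary
```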
Luma, chroma and hue to RGB
Given a color with hue H, chroma C, and luma Y′, we can again use the same strategy. Since we already have H and C, we can straightaway find our point (R₁, G₁, B₁) along the bottom three faces of the RGB cube, using the same piecewise assignment as before.
Overlap (when H′ is an integer) occurs because two ways to calculate the value are equivalent: X = 0 or X = C, as appropriate.
Then we can find R, G, and B by adding the same amount m = Y′ − (0.30 R₁ + 0.59 G₁ + 0.11 B₁) to each component, to match luma:
(R, G, B) = (R₁ + m, G₁ + m, B₁ + m)
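A minimal sketch of the luma/chroma/hue conversion, assuming the rounded Rec. 601 luma weights 0.30, 0.59, 0.11 (hue in degrees, C and Y′ in [0, 1]):

```python
def ych_to_rgb(h, c, y):
    hp = (h % 360.0) / 60.0
    x = c * (1.0 - abs(hp % 2.0 - 1.0))
    if   hp < 1: r1, g1, b1 = c, x, 0.0
    elif hp < 2: r1, g1, b1 = x, c, 0.0
    elif hp < 3: r1, g1, b1 = 0.0, c, x
    elif hp < 4: r1, g1, b1 = 0.0, x, c
    elif hp < 5: r1, g1, b1 = x, 0.0, c
    else:        r1, g1, b1 = c, 0.0, x
    m = y - (0.30 * r1 + 0.59 * g1 + 0.11 * b1)   # add equal amounts to match luma
    return r1 + m, g1 + m, b1 + m

print(ych_to_rgb(120.0, 1.0, 0.59))  # ~(0.0, 1.0, 0.0) -- the green primary
```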
Interconversion
HSV to HSL
Given a color with hue H ∈ [0°, 360°), saturation S_V ∈ [0, 1], and value V ∈ [0, 1]:
H_L = H_V
L = V (1 − S_V / 2)
S_L = 0 if L = 0 or L = 1; otherwise S_L = (V − L) / min(L, 1 − L)
HSL to HSV
Given a color with hue H ∈ [0°, 360°), saturation S_L ∈ [0, 1], and lightness L ∈ [0, 1]:
H_V = H_L
V = L + S_L · min(L, 1 − L)
S_V = 0 if V = 0; otherwise S_V = 2 (1 − L / V)
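A minimal sketch of both interconversions (hue is unchanged; all other quantities are in [0, 1]):

```python
def hsv_to_hsl(h, sv, v):
    l = v * (1.0 - sv / 2.0)
    sl = 0.0 if l in (0.0, 1.0) else (v - l) / min(l, 1.0 - l)
    return h, sl, l

def hsl_to_hsv(h, sl, l):
    v = l + sl * min(l, 1.0 - l)
    sv = 0.0 if v == 0.0 else 2.0 * (1.0 - l / v)
    return h, sv, v

print(hsv_to_hsl(240.0, 1.0, 1.0))  # (240.0, 1.0, 0.5)
print(hsl_to_hsv(240.0, 1.0, 0.5))  # (240.0, 1.0, 1.0)
```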
From RGB
This is a reiteration of the previous conversion.
The R, G, B values must be in the range [0, 1].
With maximum component (i.e. value)
V = max(R, G, B)
and minimum component
X_min = min(R, G, B),
range (i.e. chroma)
C = V − X_min
and mid-range (i.e. lightness)
L = (V + X_min) / 2 = V − C/2,
we get common hue:
H = 0° if C = 0; 60° · ((G − B)/C mod 6) if V = R; 60° · ((B − R)/C + 2) if V = G; 60° · ((R − G)/C + 4) if V = B
and distinct saturations:
S_V = 0 if V = 0, otherwise C / V
S_L = 0 if L = 0 or L = 1, otherwise (V − L) / min(L, 1 − L)
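A minimal Python sketch of this RGB-to-HSV/HSL derivation (inputs in [0, 1], hue returned in degrees):

```python
def rgb_to_hsv_hsl(r, g, b):
    v = max(r, g, b)                  # value (maximum component)
    xmin = min(r, g, b)               # minimum component
    c = v - xmin                      # chroma (range)
    l = (v + xmin) / 2.0              # lightness (mid-range)
    if c == 0.0:
        h = 0.0
    elif v == r:
        h = 60.0 * (((g - b) / c) % 6.0)
    elif v == g:
        h = 60.0 * ((b - r) / c + 2.0)
    else:
        h = 60.0 * ((r - g) / c + 4.0)
    sv = 0.0 if v == 0.0 else c / v                             # HSV saturation
    sl = 0.0 if l in (0.0, 1.0) else (v - l) / min(l, 1.0 - l)  # HSL saturation
    return h, sv, v, sl, l

print(rgb_to_hsv_hsl(1.0, 1.0, 0.0))  # (60.0, 1.0, 1.0, 1.0, 0.5) -- yellow
```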
Swatches
HSL
HSV
| Physical sciences | Basics | Physics |
252827 | https://en.wikipedia.org/wiki/RGBA%20color%20model | RGBA color model | RGBA stands for red green blue alpha. While it is sometimes described as a color space, it is actually a three-channel RGB color model supplemented with a fourth alpha channel. Alpha indicates how opaque each pixel is and allows an image to be combined over others using alpha compositing, with transparent areas and anti-aliasing of the edges of opaque regions. Each pixel is a 4D vector.
The term does not define what RGB color space is being used. It also does not state whether or not the colors are premultiplied by the alpha value, and if they are it does not state what color space that premultiplication was done in. This means more information than just "RGBA" is needed to determine how to handle an image.
In some contexts the abbreviation "RGBA" means a specific memory layout (called RGBA8888 below), with other terms such as "BGRA" used for alternatives. In other contexts "RGBA" means any layout.
Representation
In computer graphics, pixels encoding RGBA information must be stored in computer memory (or in files on disk). In most cases four equal-sized pieces of adjacent memory are used, one for each channel; all-0 bits in a channel indicate black or fully transparent alpha, while all-1 bits indicate white or fully opaque alpha. By far the most common format is to store 8 bits (one byte) for each channel, which is 32 bits for each pixel.
The order of these four bytes in memory can differ, which can lead to confusion when image data is exchanged. These encodings are often denoted by the four letters in some order (most commonly RGBA). The interpretation of these 4-letter mnemonics is not well established. There are two typical ways to understand the mnemonic "RGBA":
In the byte-order scheme, "RGBA" is understood to mean a byte R, followed by a byte G, followed by a byte B, and followed by a byte A. This scheme is commonly used for describing file formats or network protocols, which are both byte-oriented.
In the word-order scheme, "RGBA" is understood to represent a complete 32-bit word, where R is more significant than G, which is more significant than B, which is more significant than A.
In a big-endian system, the two schemes are equivalent. This is not the case for a little-endian system, where the two mnemonics are reverses of each other. Therefore, to be unambiguous, it is important to state which ordering is used when referring to the encoding. This article uses a scheme that has some popularity: adding the suffix "8888" when four 8-bit units are being discussed, or "32" when one 32-bit unit is.
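A minimal Python sketch (with illustrative byte values, not from any specification) of why the two schemes diverge on little-endian hardware: the same four bytes written in byte-order RGBA read back as an ABGR-ordered 32-bit word.

```python
import struct

r, g, b, a = 0x11, 0x22, 0x33, 0x44
rgba8888 = bytes([r, g, b, a])              # byte-order "RGBA": R at the lowest address
(word_le,) = struct.unpack("<I", rgba8888)  # read back as a little-endian 32-bit word
print(hex(word_le))                         # 0x44332211 -- alpha is most significant (ABGR32)
(word_be,) = struct.unpack(">I", rgba8888)  # a big-endian read agrees with the mnemonic
print(hex(word_be))                         # 0x11223344 -- RGBA32
```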
RGBA8888
In OpenGL and Portable Network Graphics (PNG), the RGBA byte order is used, where the colors are stored in memory such that R is at the lowest address, G after it, B after that, and A last. On a little endian architecture this is equivalent to ABGR32.
In many systems when there are more than 8 bits per channel (such as 16 bits or floating-point), the channels are stored in RGBA order, even if 8-bit channels are stored in some other order.
ARGB32
The channels are arranged in memory in such manner that a single 32-bit unsigned integer has the alpha sample in the highest 8 bits, followed by the red sample, green sample and finally the blue sample in the lowest 8 bits:
ARGB values are typically expressed using 8 hexadecimal digits, with each pair of hexadecimal digits representing the value of the Alpha, Red, Green, and Blue channel, respectively. For example, 80FFFF00 represents 50.2% opaque (non-premultiplied) yellow. The 80 hex value, which is 128 in decimal, represents a 50.2% alpha value because 128 is approximately 50.2% of the maximum value of 255 (FF hex). To continue to decipher the 80FFFF00 value, the first FF represents the maximum value red can have; the second FF is the same but for green; the final 00 represents the minimum value blue can have (effectively, no blue). Consequently, red + green yields yellow. In cases where the alpha is not used, this can be shortened to the 6 digits RRGGBB, which is why the alpha was placed in the top bits. Depending on the context, a 0x or a number sign (#) is put before the hex digits.
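A minimal sketch unpacking the 80FFFF00 example above from a single ARGB32 word, with alpha in the top 8 bits and blue in the lowest 8 bits:

```python
argb = 0x80FFFF00
a = (argb >> 24) & 0xFF   # 0x80 = 128 -> roughly 50.2% opaque
r = (argb >> 16) & 0xFF   # 0xFF = 255 -> full red
g = (argb >> 8) & 0xFF    # 0xFF = 255 -> full green
b = argb & 0xFF           # 0x00 = 0   -> no blue
print(a, r, g, b, round(a / 255 * 100, 1))  # 128 255 255 0 50.2
```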
This layout became popular when 24-bit color (and 32-bit RGBA) was introduced on personal computers. At the time it was much faster and easier for programs to manipulate one 32-bit unit than four 8-bit units.
On little-endian systems, this is equivalent to BGRA byte order. On big-endian systems, this is equivalent to ARGB byte order.
RGBA32
In some software originating on big-endian machines such as Silicon Graphics, colors were stored in 32 bits similar to ARGB32, but with the alpha in the bottom 8 bits rather than the top. For example, 808000FF would be Red and Green:50.2%, Blue:0% and Alpha:100%, a brown. This is what you would get if RGBA8888 data was read as words on these machines. It is used in Portable Arbitrary Map and in FLTK, but in general it is rare.
The bytes are stored in memory on a little-endian machine in the order ABGR.
| Physical sciences | Basics | Physics |
253111 | https://en.wikipedia.org/wiki/ARPANET | ARPANET | The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency (now DARPA) of the United States Department of Defense.
Building on the ideas of J. C. R. Licklider, Bob Taylor initiated the ARPANET project in 1966 to enable resource sharing between remote computers. Taylor appointed Larry Roberts as program manager. Roberts made the key decisions about the request for proposal to build the network. He incorporated Donald Davies' concepts and designs for packet switching, and sought input from Paul Baran on dynamic routing. In 1969, ARPA awarded the contract to build the Interface Message Processors (IMPs) for the network to Bolt Beranek & Newman (BBN). The design was led by Bob Kahn who developed the first protocol for the network. Roberts engaged Leonard Kleinrock at UCLA to develop mathematical methods for analyzing the packet network technology.
The first computers were connected in 1969 and the Network Control Protocol was implemented in 1970, development of which was led by Steve Crocker at UCLA and other graduate students, including Jon Postel and others. The network was declared operational in 1971. Further software development enabled remote login and file transfer, which was used to provide an early form of email. The network expanded rapidly and operational control passed to the Defense Communications Agency in 1975.
Bob Kahn moved to DARPA and, together with Vint Cerf at Stanford University, formulated the Transmission Control Program for internetworking. As this work progressed, a protocol was developed by which multiple separate networks could be joined into a network of networks; this incorporated concepts pioneered in the French CYCLADES project directed by Louis Pouzin. Version 4 of TCP/IP was installed in the ARPANET for production use in January 1983 after the Department of Defense made it standard for all military computer networking.
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In the early 1980s, the NSF funded the establishment of national supercomputing centers at several universities and provided network access and network interconnectivity with the NSFNET project in 1986. The ARPANET was formally decommissioned in 1990, after partnerships with the telecommunication and computer industry had assured private sector expansion and commercialization of an expanded worldwide network, known as the Internet.
History
Inspiration
Historically, voice and data communications were based on methods of circuit switching, as exemplified in the traditional telephone network, wherein each telephone call is allocated a dedicated end-to-end electronic connection between the two communicating stations. The connection is established by switching systems that connected multiple intermediate call legs between these systems for the duration of the call.
The traditional model of the circuit-switched telecommunication network was challenged in the early 1960s by Paul Baran at the RAND Corporation, who had been researching systems that could sustain operation during partial destruction, such as by nuclear war. He developed the theoretical model of distributed adaptive message block switching. However, the telecommunication establishment rejected the development in favor of existing models. Donald Davies at the United Kingdom's National Physical Laboratory (NPL) independently arrived at a similar concept in 1965.
The earliest ideas for a computer network intended to allow general communications among computer users were formulated by computer scientist J. C. R. Licklider of Bolt Beranek and Newman (BBN), in April 1963, in memoranda discussing the concept of the "Intergalactic Computer Network". Those ideas encompassed many of the features of the contemporary Internet. In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at the Defense Department's Advanced Research Projects Agency (ARPA). He convinced Ivan Sutherland and Bob Taylor that this network concept was very important and merited development, although Licklider left ARPA before any contracts were assigned for development.
Sutherland and Taylor continued their interest in creating the network, in part, to allow ARPA-sponsored researchers at various corporate and academic locales to utilize computers provided by ARPA, and, in part, to quickly distribute new software and other computer science results. Taylor had three computer terminals in his office, each connected to separate computers, which ARPA was funding: one for the System Development Corporation (SDC) Q-32 in Santa Monica, one for Project Genie at the University of California, Berkeley, and another for Multics at the Massachusetts Institute of Technology. Taylor recalls the circumstance: "For each of these three terminals, I had three different sets of user commands. So, if I was talking online with someone at S.D.C., and I wanted to talk to someone I knew at Berkeley, or M.I.T., about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. I said, 'Oh Man!', it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go. That idea is the ARPANET".
Donald Davies' work caught the attention of ARPANET developers at Symposium on Operating Systems Principles in October 1967. He gave the first public presentation, having coined the term packet switching, in August 1968 and incorporated it into the NPL network in England. The NPL network and ARPANET were the first two networks in the world to implement packet switching. Roberts said the computer networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design.
Creation
In February 1966, Bob Taylor successfully lobbied ARPA's Director Charles M. Herzfeld to fund a network project. Herzfeld redirected funds in the amount of one million dollars from a ballistic missile defense program to Taylor's budget. Taylor hired Larry Roberts as a program manager in the ARPA Information Processing Techniques Office in January 1967 to work on the ARPANET. Roberts met Paul Baran in February 1967, but did not discuss networks.
Roberts asked Frank Westervelt to explore the questions of message size and contents for the network, and to write a position paper on the intercomputer communication protocol including “conventions for character and block transmission, error checking and re-transmission, and computer and user identification." In April 1967, ARPA held a design session on technical standards. The initial standards for identification and authentication of users, transmission of characters, and error checking and retransmission procedures were discussed. Roberts' proposal was that all mainframe computers would connect to one another directly. The other investigators were reluctant to dedicate these computing resources to network administration. After the design session, Wesley Clark proposed minicomputers should be used as an interface to create a message switching network. Roberts modified the ARPANET plan to incorporate Clark's suggestion and named the minicomputers Interface Message Processors (IMPs).
The plan was presented at the inaugural Symposium on Operating Systems Principles in October 1967. Donald Davies' work on packet switching and the NPL network, presented by a colleague (Roger Scantlebury), and that of Paul Baran, came to the attention of the ARPA investigators at this conference. Roberts applied Davies' concept of packet switching for the ARPANET, and sought input from Paul Baran on dynamic routing. The NPL network was using line speeds of 768 kbit/s, and the proposed line speed for the ARPANET was upgraded from 2.4 kbit/s to 50 kbit/s.
By mid-1968, Roberts and Barry Wessler wrote a final version of the IMP specification based on a Stanford Research Institute (SRI) report that ARPA had commissioned to write detailed specifications describing the ARPANET communications network. Roberts gave a report to Taylor on 3 June, who approved it on 21 June. After approval by ARPA, a Request for Quotation (RFQ) was issued to 140 potential bidders. Most computer science companies regarded the ARPA proposal as outlandish, and only twelve submitted bids to build a network; of the twelve, ARPA regarded only four as top-rank contractors. At year's end, ARPA considered only two contractors and awarded the contract to build the network to BBN in January 1969.
The initial, seven-person BBN team were much aided by the technical specificity of their response to the ARPA RFQ, and thus quickly produced the first working system. The "IMP guys" were led by Frank Heart; the theoretical design of the network was led by Bob Kahn; the team included Dave Walden, Severo Ornstein, William Crowther and several others. The BBN-proposed network closely followed Roberts' ARPA plan: a network composed of small computers, the IMPs (similar to the later concept of routers), that functioned as gateways interconnecting local resources. Routing, flow control, software design and network control were developed by the BBN team. At each site, the IMPs performed store-and-forward packet switching functions and were interconnected with leased lines via telecommunication data sets (modems), with initial data rates of . The host computers were connected to the IMPs via custom serial communication interfaces. The system, including the hardware and the packet switching software, was designed and installed in nine months. The BBN team continued to interact with the NPL team with meetings between them taking place in the U.S. and the U.K.
As with the NPL network, the first-generation IMPs were built by BBN using a rugged computer version of the Honeywell DDP-516 computer, configured with of expandable magnetic-core memory, and a 16-channel Direct Multiplex Control (DMC) direct memory access unit. The DMC established custom interfaces with each of the host computers and modems. In addition to the front-panel lamps, the DDP-516 computer also features a special set of 24 indicator lamps showing the status of the IMP communication channels. Each IMP could support up to four local hosts and could communicate with up to six remote IMPs via early Digital Signal 0 leased telephone lines. The network connected one computer in Utah with three in California. Later, the Department of Defense allowed the universities to join the network for sharing hardware and software resources.
Debate about design goals
According to Charles Herzfeld, ARPA Director (1965–1967):
The ARPANET used distributed computation and incorporated frequent re-computation of routing tables (automatic routing was technically challenging at the time). These features increased the survivability of the network in the event of significant interruption. Furthermore, the ARPANET was designed to survive subordinate network losses. However, the Internet Society agrees with Herzfeld in a footnote in their online article, A Brief History of the Internet:
Paul Baran, the first to put forward a theoretical model for communication using packet switching, conducted the RAND study referenced above. Though the ARPANET did not exactly share Baran's project's goal, he said his work did contribute to the development of the ARPANET. Minutes taken by Elmer Shapiro of Stanford Research Institute at the ARPANET design meeting of 9–10 October 1967 indicate that a version of Baran's routing method ("hot potato") may be used, consistent with the NPL team's proposal at the Symposium on Operating System Principles in Gatlinburg.
Later, in the 1970s, ARPA did emphasize the goal of "command and control". According to Stephen J. Lukasik, who was deputy director (1967–1970) and Director of DARPA (1970–1975):
Implementation
The first four nodes were designated as a testbed for developing and debugging the 1822 protocol, which was a major undertaking. While they were connected electronically in 1969, network applications were not possible until the Network Control Protocol was implemented in 1970, enabling the first two host-host protocols, remote login (Telnet) and file transfer (FTP), which were specified and implemented between 1969 and 1973. The network was declared operational in 1971. Network traffic began to grow once email was established at the majority of sites by around 1973.
Initial four hosts
The initial ARPANET configuration linked UCLA, ARC, UCSB, and the University of Utah School of Computing. The first node was created at UCLA, where Leonard Kleinrock could evaluate network performance and examine his theories on message delay. The locations were selected not only to reduce leased line costs but also because each had specific expertise beneficial for this initial implementation phase:
University of California, Los Angeles (UCLA), where Kleinrock had established a Network Measurement Center (NMC), with an SDS Sigma 7 being the first computer attached to it;
The Augmentation Research Center at Stanford Research Institute (now SRI International), where Douglas Engelbart had created the new NLS system, an early hypertext system, and would run the Network Information Center (NIC), with the SDS 940 that ran NLS, named "Genie", being the first host attached;
University of California, Santa Barbara (UCSB), with the Culler-Fried Interactive Mathematics Center's IBM 360/75, running OS/MVT being the machine attached;
The University of Utah School of Computing, where Ivan Sutherland had moved, running a DEC PDP-10 operating on TENEX.
The first successful host-to-host connection on the ARPANET was made between Stanford Research Institute (SRI) and UCLA, by SRI programmer Bill Duvall and UCLA student programmer Charley Kline, at 10:30 pm PST on 29 October 1969 (6:30 UTC on 30 October 1969). Kline connected from UCLA's SDS Sigma 7 Host computer (in Boelter Hall room 3420) to the Stanford Research Institute's SDS 940 Host computer. Kline typed the command "login," but initially the SDS 940 crashed after he typed two characters. About an hour later, after Duvall adjusted parameters on the machine, Kline tried again and successfully logged in. Hence, the first two characters successfully transmitted over the ARPANET were "lo". The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By 5 December 1969, the initial four-node network was established.
Elizabeth Feinler created the first Resource Handbook for ARPANET in 1969, which led to the development of the ARPANET directory. The directory, built by Feinler and a team, made it possible to navigate the ARPANET.
Network performance
In 1968, Roberts contracted with Kleinrock to measure the performance of the network and find areas for improvement. Building on his earlier work on queueing theory and optimization of packet delay in communication networks, Kleinrock specified mathematical models of the performance of packet-switched networks, which underpinned the development of the ARPANET as it expanded rapidly in the early 1970s.
Growth and evolution
Roberts engaged Howard Frank to consult on the topological design of the network. Frank made recommendations to increase throughput and reduce costs in a scaled-up network. By March 1970, the ARPANET reached the East Coast of the United States, when an IMP at BBN in Cambridge, Massachusetts was connected to the network. Thereafter, the ARPANET grew: 9 IMPs by June 1970 and 13 IMPs by December 1970, then 18 by September 1971 (when the network included 23 university and government hosts); 29 IMPs by August 1972, and 40 by September 1973. By June 1974, there were 46 IMPs, and in July 1975, the network numbered 57 IMPs. By 1981, the number was 213 host computers, with another host connecting approximately every twenty days.
Support for inter-IMP circuits of up to 230.4 kbit/s was added in 1970, although considerations of cost and IMP processing power meant this capability was not actively used.
Larry Roberts saw the ARPANET and NPL projects as complementary and sought in 1970 to connect them via a satellite link. Peter Kirstein's research group at University College London (UCL) was subsequently chosen in 1971 in place of NPL for the UK connection. In June 1973, a transatlantic satellite link connected ARPANET to the Norwegian Seismic Array (NORSAR), via the Tanum Earth Station in Sweden, and onward via a terrestrial circuit to a TIP at UCL. UCL provided a gateway for interconnection of the ARPANET with British academic networks, the first international resource sharing network, and carried out some of the earliest experimental research work on internetworking.
1971 saw the start of the use of the non-ruggedized (and therefore significantly lighter) Honeywell 316 as an IMP. It could also be configured as a Terminal Interface Processor (TIP), which provided terminal server support for up to 63 ASCII serial terminals through a multi-line controller in place of one of the hosts. The 316 featured a greater degree of integration than the 516, which made it less expensive and easier to maintain. The 316 was configured with 40 kB of core memory for a TIP. The size of core memory was later increased, to 32 kB for the IMPs, and 56 kB for TIPs, in 1973.
The ARPANET was demonstrated at the International Conference on Computer Communications in October 1972.
In 1975, BBN introduced IMP software running on the Pluribus multi-processor. These appeared in a few sites. In 1981, BBN introduced IMP software running on its own C/30 processor product.
Operation
ARPA was intended to fund advanced research. The ARPANET was a research project that was communications-oriented, rather than user-oriented in design. Nonetheless, in the summer of 1975, operational control of the ARPANET passed to the Defense Communications Agency. At about this time, the first ARPANET encryption devices were deployed to support classified traffic.
The ARPANET Completion Report, written in 1978 and published in 1981 jointly by BBN and DARPA, concludes that:
CSNET, expansion
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET).
Adoption of TCP/IP
The transatlantic connectivity with NORSAR and UCL later evolved into the SATNET. The ARPANET, SATNET and PRNET were interconnected in 1977.
The DoD made TCP/IP the standard communication protocol for all military computer networking in 1980. NORSAR and University College London left the ARPANET and began using TCP/IP over SATNET in 1982.
On January 1, 1983, known as flag day, TCP/IP protocols became the standard for the ARPANET, replacing the earlier Network Control Protocol.
MILNET, phasing out
In September 1984 work was completed on restructuring the ARPANET giving U.S. military sites their own Military Network (MILNET) for unclassified defense department communications. Both networks carried unclassified information and were connected at a small number of controlled gateways which would allow total separation in the event of an emergency. MILNET was part of the Defense Data Network (DDN).
Separating the civil and military networks reduced the 113-node ARPANET by 68 nodes. After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but be slowly phased out.
Decommissioning
In 1985, the NSF funded the establishment of national supercomputing centers at several universities and provided network access and network interconnectivity with the NSFNET project in 1986. NSFNET became the Internet backbone for government agencies and universities.
The ARPANET project was formally decommissioned in 1990. The original IMPs and TIPs were phased out as the ARPANET was shut down after the introduction of the NSFNet, but some IMPs remained in service as late as July 1990.
In the wake of the decommissioning of the ARPANET on 28 February 1990, Vinton Cerf wrote the following lamentation, entitled "Requiem of the ARPANET":
Legacy
The technological advancements and practical applications achieved through the ARPANET were instrumental in shaping modern computer networking including the Internet. Development and implementation of the concepts of packet switching, decentralized networks, and communication protocols, notably TCP/IP, laid the foundation for a global network that revolutionized communication, information sharing and collaborative research across the world.
The ARPANET was related to many other research projects, which either influenced the ARPANET design, were ancillary projects, or spun out of the ARPANET.
Senator Al Gore authored the High Performance Computing and Communication Act of 1991, commonly referred to as "The Gore Bill", after hearing the 1988 concept for a National Research Network submitted to Congress by a group chaired by Leonard Kleinrock. The bill was passed on 9 December 1991 and led to the National Information Infrastructure (NII) which Gore called the information superhighway.
The ARPANET project was honored with two IEEE Milestones, both dedicated in 2009.
Software and protocols
IMP functionality
Because it was never a goal for the ARPANET to support IMPs from vendors other than BBN, the IMP-to-IMP protocol and message format were not standardized. However, the IMPs did nonetheless communicate amongst themselves to perform link-state routing, to do reliable forwarding of messages, and to provide remote monitoring and management functions to ARPANET's Network Control Center. Initially, each IMP had a 6-bit identifier and supported up to 4 hosts, which were identified with a 2-bit index. An ARPANET host address, therefore, consisted of both the port index on its IMP and the identifier of the IMP, which was written with either port/IMP notation or as a single byte; for example, the address of MIT-DMG (notable for hosting development of Zork) could be written as either 1/6 or 70. An upgrade in early 1976 extended the host and IMP numbering to 8-bit and 16-bit, respectively.
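A minimal sketch (an inference from the 1/6 = 70 example above, not a documented ARPANET routine) of the original single-byte address packing, with the 2-bit host port in the top bits and the 6-bit IMP identifier in the low bits:

```python
def pack_address(port, imp):
    """Pack a 2-bit host port and a 6-bit IMP identifier into one address byte."""
    assert 0 <= port < 4 and 0 <= imp < 64
    return (port << 6) | imp

def unpack_address(addr):
    return addr >> 6, addr & 0x3F

print(pack_address(1, 6))   # 70 -- MIT-DMG, host port 1 on IMP 6
print(unpack_address(70))   # (1, 6)
```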
In addition to primary routing and forwarding responsibilities, the IMP ran several background programs, titled TTY, DEBUG, PARAMETER-CHANGE, DISCARD, TRACE, and STATISTICS. These were given host numbers in order to be addressed directly and provided functions independently of any connected host. For example, "TTY" allowed an on-site operator to send ARPANET packets manually via the teletype connected directly to the IMP.
1822 protocol
The starting point for host-to-host communication on the ARPANET in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The message format was designed to work unambiguously with a broad range of computer architectures. An 1822 message essentially consisted of a message type, a numeric host address, and a data field. To send a data message to another host, the transmitting host formatted a data message containing the destination host's address and the data message being sent, and then transmitted the message through the 1822 hardware interface. The IMP then delivered the message to its destination address, either by delivering it to a locally connected host, or by delivering it to another IMP. When the message was ultimately delivered to the destination host, the receiving IMP would transmit a Ready for Next Message (RFNM) acknowledgment to the sending host's IMP.
Network Control Protocol
Unlike modern Internet datagrams, the ARPANET was designed to reliably transmit 1822 messages, and to inform the host computer when it loses a message; the contemporary IP is unreliable, whereas the TCP is reliable. Nonetheless, the 1822 protocol proved inadequate for handling multiple connections among different applications residing in a host computer. This problem was addressed with the Network Control Protocol (NCP), which provided a standard method to establish reliable, flow-controlled, bidirectional communications links among different processes in different host computers. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept later incorporated in the OSI model.
NCP was developed under the leadership of Steve Crocker, then a graduate student at UCLA. Crocker created and led the Network Working Group (NWG) which was made up of a collection of graduate students at universities and research laboratories, including Jon Postel and Vint Cerf at UCLA. They were sponsored by ARPA to carry out the development of the ARPANET and the software for the host computers that supported applications.
Network applications
NCP provided a standard set of network services that could be shared by several applications running on a single host computer. This led to the evolution of application protocols that operated, more or less, independently of the underlying network service, and permitted independent advances in the underlying protocols.
The various application protocols such as TELNET for remote time-sharing access and File Transfer Protocol (FTP), the latter used to enable rudimentary electronic mail, were developed and eventually ported to run over the TCP/IP protocol suite. In the 1980s, FTP for email was replaced by the Simple Mail Transfer Protocol and, later, POP and IMAP.
Telnet was developed in 1969 beginning with RFC 15, extended in RFC 855.
The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as on 16 April 1971. By 1973, the File Transfer Protocol (FTP) specification had been defined () and implemented, enabling file transfers over the ARPANET.
In 1971, Ray Tomlinson, of BBN sent the first network e-mail (, ). An ARPA study in 1973, a year after network e-mail was introduced to the ARPANET community, found that three-quarters of the traffic over the ARPANET consisted of email messages. E-mail remained a very large part of the overall ARPANET traffic.
The Network Voice Protocol (NVP) specifications were defined in 1977 (), and implemented. But, because of technical shortcomings, conference calls over the ARPANET never worked well; the contemporary Voice over Internet Protocol (packet voice) was decades away.
TCP/IP
Stephen J. Lukasik directed DARPA to focus on internetworking research in the early 1970s. Bob Kahn moved from BBN to DARPA in 1972, first as program manager for the ARPANET, under Larry Roberts, then as director of the IPTO when Roberts left to found Telenet. Kahn worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. Steve Crocker, now at DARPA, and the leaders of British and French network projects founded the International Network Working Group in 1972 and, on Crocker's recommendation, Vint Cerf, now on the faculty at Stanford University, became its Chair. This group considered how to interconnect packet switching networks with different specifications, that is, internetworking. Research led by Kahn and Cerf resulted in the formulation of the Transmission Control Program, which incorporated concepts from the French CYCLADES project directed by Louis Pouzin. Its specification was written by Cerf with Yogen Dalal and Carl Sunshine at Stanford in December 1974 (). The following year, testing began through concurrent implementations at Stanford, BBN and University College London. At first a monolithic design, the software was redesigned as a modular protocol stack in version 3 in 1978. Version 4 was installed in the ARPANET for production use in January 1983, replacing NCP. The development of the complete Internet protocol suite by 1989, as outlined in and , and partnerships with the telecommunication and computer industry laid the foundation for the adoption of TCP/IP as a comprehensive protocol suite as the core component of the emerging Internet.
Password protection
The Purdy Polynomial hash algorithm was developed for the ARPANET to protect passwords in 1971 at the request of Larry Roberts, head of ARPA at that time. It computed a polynomial of degree 2²⁴ + 17 modulo the 64-bit prime p = 2⁶⁴ − 59. The algorithm was later used by Digital Equipment Corporation (DEC) to hash passwords in the VMS operating system and is still being used for this purpose.
Rules and etiquette
Because of its government funding, certain forms of traffic were discouraged or prohibited.
Leonard Kleinrock claims to have committed the first illegal act on the Internet, having sent a request for return of his electric razor after a meeting in England in 1973. At the time, use of the ARPANET for personal reasons was unlawful.
In 1978, against the rules of the network, Gary Thuerk of Digital Equipment Corporation (DEC) sent out the first mass email to approximately 400 potential clients via the ARPANET. He claims that this resulted in $13 million worth of sales in DEC products, and highlighted the potential of email marketing.
A 1982 handbook on computing at MIT's AI Lab stated regarding network etiquette:
In popular culture
Computer Networks: The Heralds of Resource Sharing, a 30-minute documentary film featuring Fernando J. Corbató, J. C. R. Licklider, Lawrence G. Roberts, Robert Kahn, Frank Heart, William R. Sutherland, Richard W. Watson, John R. Pasta, Donald W. Davies, and economist George W. Mitchell.
"Scenario", an episode of the U.S. television sitcom Benson (season 6, episode 20—dated February 1985), was the first incidence of a popular TV show directly referencing the Internet or its progenitors. The show includes a scene in which the ARPANET is accessed.
There is an electronic music artist known as "Arpanet", Gerald Donald, one of the members of Drexciya. The artist's 2002 album Wireless Internet features commentary on the expansion of the internet via wireless communication, with songs such as NTT DoCoMo, dedicated to the mobile communications giant based in Japan.
Thomas Pynchon mentions the ARPANET in his 2009 novel Inherent Vice, which is set in Los Angeles in 1970, and in his 2013 novel Bleeding Edge.
The 1993 television series The X-Files featured the ARPANET in a season 5 episode, titled "Unusual Suspects". John Fitzgerald Byers offers to help Susan Modeski (known as Holly ... "just like the sugar") by hacking into the ARPANET to obtain sensitive information.
In the spy-drama television series The Americans, a Russian scientist defector offers access to ARPANET to the Russians in a plea to not be repatriated (Season 2 Episode 5 "The Deal"). Episode 7 of Season 2 is named 'ARPANET' and features Russian infiltration to bug the network.
In the television series Person of Interest, main character Harold Finch hacked the ARPANET in 1980 using a homemade computer during his first efforts to build a prototype of the Machine. This corresponds with the real-life malfunction that occurred in October of that year and temporarily halted ARPANET functions. The ARPANET hack was first discussed in the episode "2PiR" (stylized 2πR), where a computer science teacher called it the most famous hack in history and one that was never solved. Finch later mentioned it to Person of Interest Caleb Phipps, and his role was first indicated when he showed knowledge that it was done by "a kid with a homemade computer", which Phipps, who had researched the hack, had never heard before.
In the third season of the television series Halt and Catch Fire, the character Joe MacMillan explores the potential commercialization of the ARPANET.
| Technology | Internet | null |
253333 | https://en.wikipedia.org/wiki/Golden%20Delicious | Golden Delicious | Golden Delicious is a cultivar of apple. It is one of the 15 most popular apple cultivars in the United States. It is not closely related to Red Delicious.
History
Golden Delicious arose from a chance seedling, possibly a hybrid of Grimes Golden and Golden Reinette. The original tree was found on the family farm of J. M. Mullins in Clay County, West Virginia, and was locally known as Mullins' Yellow Seedling. Mullins sold the tree and propagation rights to Stark Brothers Nurseries for $5000, which first marketed it as a companion of their Red Delicious in 1914.
In 1943, the New York State Agricultural Experiment Station in Geneva, New York, developed the Jonagold apple by cross-breeding Golden Delicious and Jonathan trees. The cultivar was officially released in 1968 and went on to become the leading apple cultivar in Europe. According to the USApple Association website, Golden Delicious, along with its descendant cultivars Gala, Ginger Gold, Honeycrisp, and Jonagold, was among the fifteen most popular apple cultivars in the United States.
Golden Delicious was designated the official state fruit of West Virginia by a Senate resolution on February 20, 1995. Clay County has hosted an annual Golden Delicious Festival since 1972.
In 2010, an Italian-led consortium announced they had decoded the complete genome of the Golden Delicious apple. It had the highest number of genes (57,000) of any plant genome studied to date.
Golden Delicious was one of four apples honored by the United States Postal Service in a 2013 set of four 33¢ stamps commemorating historic strains, joined by Northern Spy, Baldwin, and Granny Smith.
Appearance and flavor
Golden Delicious is a large, yellowish-green skinned cultivar and very sweet to the taste. It is prone to bruising and shriveling, so it needs careful handling and storage. It is a favorite for eating plain, as well as for use in salads, apple sauce, and apple butter. America's Test Kitchen, Food Network, and Serious Eats all list Golden Delicious apples as one of the best apples for baking apple pie due to its balanced flavor and its high pectin content that allows it to stay intact when cooked.
Density: 0.79 g/cc
Sugar: 13.5%
Acid: 5.6 g/litre
Vitamin C: 10–20 mg/litre
Season
Golden Delicious are harvested 130–150 days after full bloom.
Golden Delicious mutants
Lucky Rose Golden: a patented Golden Delicious mutant
Descendant cultivars
| Biology and health sciences | Pomes | Plants |
253340 | https://en.wikipedia.org/wiki/Peking%20Man | Peking Man | Peking Man (Homo erectus pekinensis) is a subspecies of H. erectus which inhabited what is now northern China during the Middle Pleistocene. Its fossils have been found in a cave some southwest of Beijing (then referred to in the West as Peking), known as the Zhoukoudian Peking Man Site. The first fossil, a tooth, was discovered in 1921, and Zhoukoudian has since become the most productive H. erectus site in the world. Peking Man was instrumental in the foundation of Chinese anthropology, and fostered an important dialogue between Western and Eastern science. Peking Man became the centre of anthropological discussion, and was classified as a direct human ancestor, propping up the Out of Asia theory that humans evolved in Asia.
Peking Man also played a vital role in the restructuring of the Chinese identity following the Chinese Communist Revolution, and was intensively communicated to the general populace to introduce them to Marxism and science. Early models of Peking Man society strongly leaned towards communist or nationalist ideals, leading to discussions on primitive communism and polygenism. This produced a strong schism between Western and Eastern interpretations, especially as the West adopted the Out of Africa theory in the late 20th century, and Peking Man's role in human evolution diminished as merely an offshoot of the human line. Though Out of Africa is now the consensus, Peking Man interbreeding with human ancestors is discussed especially in Chinese circles.
Peking Man characterises the "classic" H. erectus anatomy. The skull is long and heavily fortified, featuring an inflated bar of bone circumscribing the crown, crossing along the brow ridge, over the ears, and connecting at the back of the skull, as well as a sagittal keel running across the midline. The bone of the skull and long bones is exorbitantly thickened. The face is protrusive (midfacial prognathism), the eye sockets are wide, the jaws are robust and chinless, the teeth are large, and the incisors are shovel-shaped. Brain volume ranged from 850 to 1,225 cc, for an average of just over 1,000 cc (within the range of variation for modern humans). The limbs are broadly anatomically comparable to those of modern humans. H. erectus in such northerly latitudes may have averaged roughly in height, compared to for more tropical populations.
Peking Man lived in a cool, predominantly steppe, partially forested environment, alongside deer, rhinos, elephants, bison, buffalo, bears, wolves, big cats, and other animals. Peking Man intermittently inhabited Zhoukoudian from potentially as far back as 800,000 years ago to as recent as 230,000 years ago, but the precise chronology is unclear. This spans several cold glacial and warm interglacial periods. The cultural complexity of Peking Man is fiercely debated. If Peking Man was capable of hunting (as opposed to predominantly scavenging), making clothes, and controlling fire, the population would have been well-equipped to survive frigid glacial periods. If not, the population would have had to retreat southward and return later. It is further disputed if Peking Man inhabited the cave, or was killed by giant hyenas (Pachycrocuta) and dumped there, in addition to other natural processes. Over 100,000 pieces of stone tools were recovered from Zhoukoudian, mainly wastage; but also many simple choppers and flakes, and a few retouched tools such as scrapers and possibly burins.
Taxonomy
Research history
Discovery
To aid the China Geological Survey in mapping out economically relevant deposits, the Geological Survey of Sweden sent Swedish economic geologist Johan Gunnar Andersson to China in 1914. Andersson soon also began collecting archaeological finds and "dragon bones", as well as documenting Chinese mythology. In 1918, while in Beijing (then referred to in the West as Peking), he was pointed by American chemistry teacher John McGregor Gibb towards a potentially interesting fossil deposit in the mining town of Zhoukoudian in the Fangshan District, about southwest. When he visited a month later, he was directed towards an old limestone quarry which the locals called Chi Ku Shan ("Chicken Bone Hill"), because they believed the many rodent fossils found there belonged to chickens stolen by a malevolent group of foxes which turned into evil trickster spirits and drove a man insane.
Andersson had to leave China to work on other projects, but returned in 1921 with prominent American palaeontologist Walter W. Granger and Austrian palaeontologist Otto Zdansky, a recent graduate of the Palaeontological Museum of Uppsala University. Andersson decided the Chi Ku Shan locality would be an excellent training ground for Zdansky before moving onto the Henan Province to excavate Hipparion (horse) fossils. They were advised by a local that more interesting "dragon bones" could be found at a nearby fissure in a limestone cliff, later named the Longgushan ("Dragon Bone Hill") locality. Zdansky found the first fossil human tooth at the site that year, specimen PMU M3550, but he did not report it to Andersson. While studying the Zhoukoudian material in Uppsala, Zdansky identified another human tooth, and reported his find (which he cautiously labelled as Homo sp.?) to his mentor Professor Carl Wiman. News of this reached Andersson in 1926 after corresponding with Wiman.
As part of his world tour, the crown prince of Sweden (and the chairman of the Swedish China Research Committee, Andersson's benefactor) Gustaf VI Adolf visited Beijing on 22 October 1926. In a meeting planned for the prince, Andersson presented lantern slides of Zdansky's fossil teeth, and was able to convince his friend Canadian palaeanthropologist Davidson Black (who worked for the Peking Union Medical College funded by the Rockefeller Foundation), Chinese geologist Weng Wenhao (the head of the China Geological Survey), and prominent French palaeoanthropologist Pierre Teilhard de Chardin to jointly take over study of Zhoukoudian. Andersson returned to Sweden to become the founding director of the Museum of Far Eastern Antiquities, Stockholm. In the press coverage immediately after the meeting, German-American geologist Amadeus William Grabau for the first time publicly used the phrase "Peking Man" to refer to Zdansky's fossil teeth.
In 1927, with Black too preoccupied with his duties at the college, Andersson and Wiman sent one of Wiman's students, Anders Birger Bohlin, to oversee excavation beginning on 16 April. On 16 October, Bohlin extracted another fossil human tooth, specimen K11337, which Black made the holotype of a new genus and species a few weeks later, Sinanthropus pekinensis (crediting the authority to both himself and Zdansky). His decision to name a new genus so quickly may have been politically motivated, to secure further funding for the site after nearly a year without anthropologically relevant finds, especially since Teilhard questioned whether Peking Man was actually a human or some carnivore. That year, Weng drafted an agreement with all of the Zhoukoudian scientists stipulating that the Zhoukoudian remains would stay in China. In 1928, the Chinese government similarly clamped down on the exportation of Chinese artefacts and other archaeologically relevant materials to the West for study, viewing it as archaeological looting; foreign scientists were instead encouraged to research these materials within China. In 1929, Black persuaded the Peking Union Medical College, the Geological Survey of China, and the Rockefeller Foundation to found and fund the Cenozoic Research Laboratory to ensure further study of Zhoukoudian.
On 2 December 1929, Chinese anthropologist Pei Wenzhong discovered a surprisingly complete skullcap, and Zhoukoudian proved to be a valuable archaeological site, with a preponderance of human fossils, stone tools, and potential evidence of early fire use, becoming the most productive Homo erectus site in the world. An additional four rather complete skullcaps were discovered by 1936, three of them unearthed over an 11-day period in November 1936 under the supervision of Chinese palaeoanthropologist Jia Lanpo. Excavation employed 10 to over 100 local labourers depending on the stage, who were paid five or six jiao per day, in contrast to local coal miners who received a pittance of 40 to 50 yuan annually. Zhoukoudian also employed some of the biggest names in Western and Chinese geology, palaeontology, palaeoanthropology, and archaeology, and facilitated important discourse and collaboration between the two scholarly communities. After Black's sudden death in 1934 from a congenital heart defect, the Jewish anatomist Franz Weidenreich, who had fled Nazi Germany, was selected by the Rockefeller Foundation to continue Black's work.
Loss of specimens
Excavation of Zhoukoudian began to stall after the Marco Polo Bridge Incident on 7 July 1937 and the outbreak of the Second Sino-Japanese War. Weidenreich had two crates made to store the Peking Man fossils, and transferred them from the Peking Union Medical College to an American bank vault to safeguard them from Imperial Japanese forces. They were soon returned to the college and stored in a safe in Weidenreich's office, where Weidenreich worked with technicians and artists to make plaster casts and detailed illustrations for his monograph describing the fossils. As the war progressed, Weng and Weidenreich unsuccessfully tried to convince the head of the college, Henry S. Houghton, to authorise a transfer of the Peking Man fossils to the United States for safekeeping. Houghton dismissed Weidenreich in 1941, and Weidenreich took the casts and research notes with him to the American Museum of Natural History in New York City, with funding from the Rockefeller Foundation.
By September 1941, Weng and the president of the Rockefeller Foundation, Raymond B. Fosdick, had persuaded the U.S. embassy to authorise the transfer of the Peking Man fossils. Representing at least 40 different individuals, the fossils were packed into two wooden footlockers and were to be transported by the United States Marine Corps from the Peking Union Medical College to the SS President Harrison, which was to dock at Qinhuangdao Port (near the Marine base Camp Holcomb) and eventually arrive at the American Museum of Natural History. En route to Qinhuangdao, the ship was attacked by Japanese warships and ran aground. Though there have been many attempts to locate the footlockers—including offers of large cash rewards—it is unknown what happened to them after they left the college on 4 December 1941.
Rumours about the fate of the fossils range from their being aboard a sunken ship (such as the Japanese Awa Maru) to their being ground up for traditional Chinese medicine. The affair also provoked allegations of robbery against Japanese or American groups, especially during the Resist America, Aid Korea Campaign of 1950 and 1951 to promote anti-American sentiment during the Korean War. US Marine Corporal Richard Bowen recalled finding a box filled with bones while digging a foxhole one night next to some stone barracks in Qinhuangdao. This happened in 1947, while the city was under siege by the CCP Eighth Route Army, who were in turn under fire from Nationalist gunboats during the Chinese Civil War. According to Wang Qingpu, who wrote a report for the Chinese government on the history of the port, if Bowen's story is accurate, the most probable location of the fossils is underneath roads, a warehouse, or a parking lot.
Excavation at Zhoukoudian was so well documented that the loss of the original specimens did not greatly impede their study.
Four of the teeth from the original excavation period are still in the possession of the Palaeontological Museum of Uppsala University.
Mao and post-Mao eras
Excavation of Zhoukoudian halted from 1941 until the conclusion of the Chinese Civil War in 1949. Field work took place in 1949, 1951, 1958–1960, 1966, and 1978–1981. Given the meticulousness of the dig teams, going so far as to sieve out unidentifiable fragments as small as long, excavation of Zhoukoudian is generally considered to be more or less complete.
Through the Mao era, but especially in 1950 and 1951, Peking Man took on a central role in the restructuring of Chinese identity under the new government, specifically to link the ideology of the Chinese Communist Party with human evolution. Peking Man was taught in educational books at all levels, in popular science magazines and articles, in museums, and in lectures given in workplaces, including factories. This campaign served primarily to introduce the general populace (including those without advanced education) to Marxism, as well as to overturn widespread superstitions, traditions, and creation myths. Nonetheless, research was constrained, as scientists were compelled to fit new discoveries within the framework of communism. In 1960, the Cenozoic Research Laboratory was converted into an independent organisation, the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP), a division of the Chinese Academy of Sciences, to better support excavation of Zhoukoudian. It was headed by Pei, Jia, and Chinese palaeoanthropologist Yang Zhongjian.
During the Cultural Revolution from 1966 to 1976, all intellectuals, including scientists, came under much persecution, and among other things were conscripted into manual labour as part of a campaign to turn "intellectuals into labourers and labourers into intellectuals", which impeded research. Though palaeoanthropology was still able to continue, the field became much less important to the Chinese government with its new resolve to become economically independent, and popular science topics switched from human evolution to production-related matters.
As the Revolution's policies relaxed, palaeoanthropology and academia resurged, especially with the rise of Deng Xiaoping in 1978 (heralded as a "springtime for science"). Zhoukoudian had been threatened several times by nearby mining operations and acid rain from air pollution, but post-Mao China also witnessed a budding environmentalist movement. Reflecting this, UNESCO declared Zhoukoudian a World Heritage Site in 1987, and custody of the site was handed over from the IVPP to the city of Beijing (which has greater resources) in 2002.
The productivity of Zhoukoudian elicited strong palaeoanthropological interest in China, and, as of 2016, 14 other H. erectus sites had since been discovered across the country, in Yuanmou, Tiandong, Jianshi, Yunxian, Lantian, Luonan, Yiyuan, Nanzhao, Nanjing, Hexian, and Dongzhi counties.
Age and stratigraphy
The Zhoukoudian Peking Man Site currently sits above sea level. The fossil-bearing sediments are divided into 27 localities, and Peking Man is known from Locality 1 ("Dragon Bone Hill"). This deep locality is further divided into 17 layers, of which fossils are found above Layer 13, and Peking Man from Layers 10–3. The fossil-bearing regions can also be organised into Loci A–O. Major stone tool accumulations occur in Layers 3 and 4, and the tops of Layers 8 and 10. The animal fossils in the locality suggest it dates to the Middle Pleistocene.
There have been many attempts to more finely resolve the date of each layer, starting in the late 1970s. In 1985, Chinese scientist Zhao Shusen proposed the chronology: 700,000 years ago for Layer 13; 500,000 years ago for Layer 10; and 230,000 years ago for Layer 3. Though these general timeframes are normally agreed upon, the exact date of each layer is subject to intense discussion. In 2004, Shen Chengde and colleagues argued Layer 3 was deposited 400 to 500 thousand years ago, and Layer 10 as far back as about 600 to 800 thousand years ago, during a mild glacial period.
The earliest H. erectus fossils in all of China, Yuanmou Man, may date to 1.7 million years ago, though stone tools from the Shangchen site in Lantian, central China, could extend the occupation of the region to as far back as 2.12 million years ago.
Classification
Background
Despite what Charles Darwin had hypothesised in his 1871 Descent of Man, many late-19th century evolutionary naturalists postulated that Asia (instead of Africa) was the birthplace of humankind, as it is midway between all other continents via land routes or short sea crossings, providing optimal dispersal routes throughout the world. Among these was Ernst Haeckel, who argued that the first human species (which he pre-emptively named "Homo primigenius") evolved on the now-disproven hypothetical continent "Lemuria" in what is now Southeast Asia, from a genus he termed "Pithecanthropus" ("ape-man"). "Lemuria" had supposedly sunk below the Indian Ocean, so no fossils could be found to prove this. Nevertheless, Haeckel's model inspired Dutch scientist Eugène Dubois to join the Royal Netherlands East Indies Army and search for this "missing link" in Java. He found a skullcap and a femur (Java Man) which he named "P. erectus" (using Haeckel's hypothetical genus name) and unsuccessfully attempted to convince the European scientific community that he had found an upright-walking ape-man; they dismissed his findings as the remains of some kind of malformed non-human ape.
Regarding the ancestry of Far Eastern peoples, racial anthropologists had long placed the origin of Chinese civilisation in the Near East, namely Babylon (Sino-Babylonianism), as suggested by French archaeologist Terrien de Lacouperie in 1894; abiding by historical race concepts, this held that the Chinese peoples had regressed compared to the supposedly superior races of Europe (degeneration theory). This came under fire by the time Peking Man was discovered, when China was in the midst of the New Culture Movement and surging nationalism following the fall of the Qing dynasty and the establishment of the Republic of China. These ideologies aimed not only to remove imperialistic influences, but also to replace ancient Chinese traditions and superstitions with Western science in order to modernise the country and lift its standing on the world stage to that of Europe.
"Out of Asia" theory
Unlike previously discovered extinct human species, notably the Neanderthals and Java Man, Peking Man was readily accepted into the human family tree. In the West, this was aided by a popular hypothesis placing the origin of humanity in Central Asia, championed primarily by American palaeontologist Henry Fairfield Osborn and his apprentice William Diller Matthew. They believed that Asia was the "mother of continents" and that the rising of the Himalayas and Tibet and the subsequent drying of the region forced human ancestors to become terrestrial and bipedal. They also believed that populations which retreated to the tropics, namely Dubois' Java Man and the "Negroid race", substantially regressed (again, degeneration theory). This required them to reject Raymond Dart's far more ancient South African Taung child (Australopithecus africanus) as a human ancestor when he described it in 1925, favouring instead Charles Dawson's 1912 hoax "Piltdown Man" from Britain.
Peking Man, with a brain volume much larger than that of living apes, was used to further invalidate African or European origin models. Peking Man's importance in human evolution was championed by Grabau in the 1930s, who (much like Osborn) argued that the uplift of the Himalayas caused the emergence of proto-humans ("Protanthropus") in the Miocene, who then dispersed during the Pliocene into the Tarim Basin in Northwest China, where they learned to control fire and make stone tools, and then went on to colonise the rest of the Old World, evolving into "Pithecanthropus" in Southeast Asia, "Sinanthropus" in China, "Eoanthropus" (Piltdown Man) in Europe, and "Homo" in Africa (again abiding by degeneration theory). To explain the paucity of stone tools in Asia compared to Europe (an apparent contradiction if humans had occupied Asia for longer), he also stated that Pleistocene Central Asia was too cold to permit back-migration by early modern humans or Neanderthals until the Neolithic. The Central Asia model was the leading consensus of the time.
Peking Man became an important matter of national pride, and was used to extend the antiquity of the Chinese people and their occupation of the region to 500,000 years ago, with discussions of human evolution becoming progressively Sinocentric even in Europe. In the 1930s, Weidenreich had already begun arguing that Peking Man was ancestral to the "Mongoloid race", forwarding his polycentric hypothesis (a form of polygenism) in which local populations of archaic humans evolved into the local modern humans, as opposed to every modern population descending from a single anatomically modern ancestral stock. Other scientists working on the site made no such claims. The sentiment that all Chinese ethnic groups—including the Han, Tibetans, and Mongols—were indigenous to the area for such a long time became more popular during the Second Sino-Japanese War and the occupation of China by Japan. By the Mao era, Peking Man was ubiquitously heralded as a human ancestor in China.
"Sinanthropus"
Black hastily classified the Peking Man material in 1927 as a new genus and species, "Sinanthropus pekinensis", based on only three teeth. Initially, palaeoanthropologists assumed that expansion of the braincase was the first major innovation in human evolution away from apes. Consequently, because he characterised Peking Man as a human ancestor, Black initially believed that Peking Man would be more similar to Piltdown Man (with a big brain and modern skullcap but an apelike jaw) than to Java Man (which Dubois at the time characterised as a giant gibbon). When the first Peking Man skullcap was discovered in 1929, Black and his mentor Sir Grafton Elliot Smith noted "a curious blend of characters" among Peking Man, Java Man, and Piltdown Man, and were unsure how to resolve their relationships.
Weidenreich, on the other hand, had (correctly) dismissed Piltdown Man as a chimaera of a modern human skull and an orangutan jaw as early as 1923, and also began arguing that Java Man was an ancient human rather than a gibbon. Already in 1935, he claimed that the differences between Peking Man and Java Man "can be due at most to racial variation". Following German-Dutch palaeontologist Gustav Heinrich Ralph von Koenigswald's further Java Man discoveries in Mojokerto and Sangiran, von Koenigswald and Weidenreich declared in a 1939 paper that Java Man and Peking Man are "related to each other in the same way as two different races of present mankind, which may also display certain variations in the degree of their advancement."
To this end, in 1940 Weidenreich also suggested that, if Peking Man ("Sinanthropus pekinensis") and Java Man ("Pithecanthropus erectus") are ancestral to different modern human populations (classified into several subspecies of Homo sapiens), then they should be subsumed under Homo as subspecies of the same pre-modern species, as H. erectus pekinensis and "H. e. javanensis", respectively. Nonetheless, Weidenreich continued using "Sinanthropus" (and "Pithecanthropus") until his death in 1948, because he saw it "just as a name without any 'generic' or 'specific' meaning, or in other words, as a 'latinization' of Peking Man." In 1945, British anatomist Sir Wilfrid Le Gros Clark argued that, per nomenclature codes, the correct name should be "Pithecanthropus pekinensis". Still, especially after the Holocaust, Weidenreich and many of his colleagues desired to reform anthropology away from its fixation on racial distinctness and purity. Weidenreich discussed the application of the burgeoning field of genetics to physical anthropology with, among others, Theodosius Dobzhansky and Sherwood Washburn as the modern evolutionary synthesis was being formulated.
In 1950, German-American evolutionary biologist Ernst Mayr entered the field of anthropology and, surveying a "bewildering diversity of names", decided to subsume human fossils into three species of Homo: "H. transvaalensis" (the australopithecines), H. erectus (including "Sinanthropus", "Pithecanthropus", and various other putative Asian, African, and European taxa), and H. sapiens (including anything younger than H. erectus, such as modern humans and Neanderthals), as various earlier authors had broadly recommended. He classified Peking Man as H. e. pekinensis. Mayr defined these species as a sequential lineage, with each species evolving into the next (chronospecies). Though Mayr later changed his opinion on the australopithecines (recognising Australopithecus), his more conservative view of archaic human diversity became widely adopted in the subsequent decades. Thus, Peking Man was considered a human ancestor in both Western and Eastern thought. Nonetheless, Chinese and Soviet scientists wholly denounced polygenism, viewing it as scientific racism propagated by Western capitalist scholars (racial capitalism), and argued instead that all modern human races are closely related to each other.
"Out of Africa" theory
The contributions of Chinese scientists during the Mao era were viewed with much suspicion in the West for fear of propagandistic contamination. In the 1960s and 1970s, the position of the more ancient Australopithecus in human evolution once again became a centre of debate; in China, Wu Rukang argued that Australopithecus was the "missing link" between apes and humans, but was met with much derision from Chinese peers. Following the "opening" of China with the rise of Deng in 1978, Western works contradictory to Maoist ideology disseminated through China, radically altering Eastern anthropological discussions. In the late 20th century, human evolution became Afrocentric with the gradual acceptance of Australopithecus as a human ancestor and the consequent marginalisation of Peking Man, especially as older fossils of H. erectus were being unearthed in Africa, first by Kenyan archaeologist Louis Leakey in 1960 with Olduvai Hominin 9. H. erectus is now largely considered to have evolved in Africa and later spread to other continents.
To counter the declining interest in Eastern palaeoanthropology, many Chinese scientists pushed Sinocentric and often polygenic arguments, asserting that racial distinctness predated the evolution and dispersal of modern humans, and that there was racial continuity between local H. erectus and their modern descendant races (for example, "typically 'Mongoloid' features" such as shovel-shaped incisors carried over from Peking Man to modern Chinese). They often cited the 2-million-year-old Wushan Man from central China, which is no longer classified as a human, and asserted that several Chinese apes millions of years old were human ancestors. Jia proposed that the earliest human species evolved on the Tibetan Plateau, and the adjacent Guizhou Province was another popularly proposed genesis point. Various late Middle Pleistocene Chinese specimens, such as the Dali Man or the Jinniushan Man, have been argued, namely by Chinese palaeoanthropologist Wu Xinzhi, to represent hybrid populations between Peking Man and the ancestors of modern humans. In the 1970s, the travelling museum exhibit "The Exhibition of Archaeological Finds of the People's Republic of China" — organised by the CCP to tour Western Europe, the US, and Canada — painted Peking Man and Lantian Man as the "forefathers of the Chinese people", playing a central role in the story of human evolution and emphasising the antiquity of the Chinese people. Additionally, at least since the mid-1990s, the CCP has used Peking Man as an instrument of its racial nationalist discourse.
Peking Man's ancestral position is still widely maintained, especially among Chinese scientists, via the assimilation model, wherein archaic humans such as Peking Man interbred with and were effectively absorbed into modern human populations in their respective locations (so that, according to this view, Peking Man has contributed some ancestry to modern Chinese populations). On this matter, palaeogenetic analyses — the first in 2010 — have reported that all humans whose ancestry lies beyond Sub-Saharan Africa carry genes from the archaic Neanderthals and Denisovans, indicating that early modern humans interbred with archaic humans. The common ancestor of Neanderthals and Denisovans in turn interbred with another archaic species even farther removed from modern humans. Still, East Asian H. erectus from China and Indonesia are now usually characterised as relict populations which had little interaction with Western H. erectus or later Homo species.
Phylogeny
Many Chinese H. erectus fossils were given a unique subspecies name based on minute anatomical differences, at a time when different modern human races were classified into different subspecies for similar reasons. As the definition of "subspecies" tightened in the late 20th century, it became impossible to justify all of these names. In general, subspecies names for H. erectus are now used for convenience to indicate time and region rather than specific anatomical trends. The name H. e. pekinensis may extend to all Chinese H. erectus but is usually used to refer only to Zhoukoudian.
The anatomy of Chinese H. erectus specimens varies regionally and over time, but this variation is subtle and difficult to assess given how fragmentary H. erectus remains are both in and out of China. Northern Chinese specimens (namely Peking Man and Nanjing Man) are distinct in the narrowness of the skull, but H. erectus skull shape is poorly documented elsewhere in China. Some authors suggested that the anatomical peculiarities of the Zhoukoudian specimens indicate speciation rather than a geographic cline, and consider Peking Man as a separate species, H. pekinensis.
H. erectus may have made multiple dispersals out of Africa to the Far East, with the population represented by the Indonesian Sangiran site possibly being more closely related to Western H. erectus than to Peking Man. A population related to Peking Man may have later interbred with Southeast Asian H. erectus, since the younger teeth at Sangiran are much smaller than the older ones — more like those of Peking Man — but tooth reduction could have happened for other reasons.
A 2021 study produced a phylogeny of H. erectus populations using tip dating.
Anatomy
Peking Man is known from 13 skull and cranial fragments, 15 mandibles (lower jawbone), 157 isolated and in situ teeth, an atlas (the first neck vertebra), a clavicle, 3 humeri (upper arm bones), potentially 2 iliac fragments (the hip), 7 femora, a tibia (shinbone), and a lunate bone (a wrist bone). The material may represent as many as 40 individuals.
Peking Man and anatomically similar East Asian contemporaries are sometimes referred to as "classic" H. erectus.
Skull
In 1937, Weidenreich and his assistant, the sculptor Lucile Swan, attempted to reconstruct a complete skull, but only considered a skullcap (Skull XI), a left maxillary (upper jaw) fragment (Skull XII/III), and a right mandibular fragment, which are presumably specimens of females based on their smaller size. Although larger, presumably male, specimens are much more numerous, they probably chose female specimens because a male maxilla was not discovered until 1943. Swan also made a lifelike bust of Peking Man based on this skull, nicknamed "Nellie".
In 1996, anthropologists Ian Tattersall and Gary Sawyer revised the skull using high-quality casts of six presumed-male specimens and three isolated teeth (as the original fossils were lost). With this extended sample, virtually the entire skull could be restored more accurately, except the bottom margin of the piriform aperture (the nose hole). They deflated the cheeks and inflated the lateral margins of the brow ridge, which caused the nose to project out even farther (increased midfacial prognathism), though they reduced subnasal prognathism. Overall, compared to Weidenreich and Swan's, their reconstruction appears less apomorphic (specialised) relative to other Asian or African H. erectus specimens.
Cranial vault
Weidenreich characterised the Peking Man skull as relatively low, ellipsoid, and long. The breadth is greatest at the ears but decreases frontwards, especially at the forehead. There is marked post-orbital constriction, and the skull is circumscribed by a bony torus which is strongest at the brow ridge (supraorbital torus) and at the back of the skull (occipital torus). All specimens have an eminence projecting just above the supraorbital torus, developed to varying degrees, which is not found in any other H. erectus population. The frontal sinuses are restricted to the nasal area below the brows, and consequently the supraorbital torus is completely solid, unlike that of Java Man. The eye sockets are wide. The superior orbital fissure in the eye socket was probably a small opening like in non-human apes rather than a long slit like in modern humans. The nasal bones between the eyes are double the width of those of the average modern human, though not as wide as those of Neanderthals. Weidenreich suggested Peking Man had a short, broad nose.
Peking Man also features a sagittal keel running along the midline of the skull, which is highest where it intersects the coronal suture, about halfway along, and recedes around the obelion (near the base of the parietal bones). All skulls feature a proportionally equally developed keel, including subadult and presumed-female specimens (there are no infant specimens). The keel produces a depression on either side, which accentuates the parietal eminence. The temporal lines, which arc in pairs across either side of the skull, often merge into a single ridge near the skull midline. The squamous part of the temporal bone (the flat region) is positioned quite low, and the temporal fossa (the depression between the temporal lines and the cheek) is relatively narrow. The mastoid part of the temporal bone features a high crest which overhangs the ear canal. The crest accentuates the mastoid process, which bends inwards, as opposed to the vertical orientation seen in modern humans; the bending is much more pronounced in presumed-male specimens. Peking Man lacks a true postglenoid process (a bony projection behind the jaw hinge); instead of being elongated, it is merely a low, triangular projection with a broad base. The zygomatic bones (cheekbones) project far out from the face, and would have been visible when viewing the skull from the top. They project as far as , whereas those of modern humans do not exceed .
At the back of the skull, the occipital torus extends in a relatively straight line, but curves downward at the sides of the skull. The occipital torus can be bordered by furrows (sulci) on the top and bottom margins (for muscle attachment), and the bottom margin of the torus gradually fades. The midpoint of the torus features an additional prominence, the occipital bun.
Brain
The brain capacities of the seven Peking Man skulls for which the metric is measurable range from 850 to 1,225 cc, with an average of about 1,029 cc. This is within the range of variation for modern humans. Asian H. erectus overall are rather big-brained, averaging roughly 1,000 cc.
The endocast (the cast of the inside of the braincase) is ovoid in top-view. Due to post-orbital constriction, the frontal lobe is narrowed like in other H. erectus. The parietal lobes are depressed unlike Javan and African H. erectus or modern humans, though this seems to be somewhat variable among the Peking Man material. The temporal lobes are narrow and slender unlike most other human species. The occipital lobes are flattened dorsoventrally (from top to bottom) and strongly project backwards which is a rather variable trait among archaic human populations. The cerebellum, compared to that of modern humans, is not as globular, and the lobes diverge more strongly from the midline like other archaic humans.
Mouth
Peking Man has remarkably well-defined canine juga (the bony ridges over the canine tooth roots). There is subnasal prognathism (the upper jaw juts out below the nose). The upper jaw commonly features exostoses (bony lumps) in the molar region, which occur infrequently in modern humans (>6%). Like modern humans and Neanderthals but unlike Java Man, Peking Man has a long, rugose palate (roof of the mouth). The mandible is rather large and, like that of other archaic humans, lacks a chin. The extramolar sulci bordering the cheek side of the molars are broad. Some mandibles feature a torus on the tongue side, or multiple mental foramina.
The dental arches (tooth rows) are U-shaped. The incisors feature an eminence at the base and finger-like ridges on the tongue side, and the upper incisors show marked shovelling (the tooth strongly bends inwards). The mandibular incisors are narrow. Weidenreich originally restored the teeth as peg-like, but Tattersall and Sawyer found the teeth to be much larger and more obtrusive. Like those of other H. erectus, the premolars are ellipse-shaped and asymmetrical, but the first premolar (P3) frequently has three roots instead of the more common two. The molar crowns exhibit several extraneous ridges in addition to the essential cusps, producing a dendritic (branching) enamel-dentine junction, a condition so far documented only in Chinese H. erectus. M1 is rather long, and M2 is round.
The upper incisors of Peking Man and other Chinese H. erectus feature marked shovelling, more prominent than in other H. erectus populations. Shovelling also usually occurs in Neanderthals and less intensely in many early modern human specimens across Europe, Africa, and Asia.
Postcranium
Because the East Asian fossil record is comparatively poor, the postcranial anatomy of H. erectus is largely based on the adolescent African specimen Turkana Boy, as well as a few other isolated skeletons from Africa and Western Eurasia.
Externally, the Peking Man humerus is like that of modern humans, and exhibits exceptionally developed muscle attachments, but the shaft is more slender. The lunate bone (in the wrist) is modern humanlike, though proportionally small and broad.
Compared to an average modern human femur, the Peking Man femur is much stouter, flatter, and straighter, with a more slender shaft, and its maximum curvature occurs nearer the knee joint rather than at the mid-shaft. The anteroposterior (front-to-back) diameter is smaller than the transverse (side-to-side) diameter. The femoral neck was probably truncated as in other archaic humans and non-human apes. The subtrochanteric crest terminates at the greater trochanter with a bony growth, a feature commonly exhibited in Neanderthals. These traits are not outside the range of variation for modern humans, though they are quite rare.
Body size
The torso is poorly known, but because the limbs and clavicle are proportionally like those of modern humans, it is typically assumed the rest of the body was as well. Working under this assumption, living body dimension reconstructions include:
In 1938, Weidenreich reconstructed a presumed-female femur to be in length in life, which would equate to a female height of . He speculated males averaged .
In 1944, Weidenreich reconstructed a presumed-male femur to be long, equating to a male height of . He speculated an average female height of .
In 2018, Chinese palaeoanthropologist Song Xing estimated the living weight for Humeri II and III as about , Femur I , Femur IV , and Femur VI . Weidenreich assumed all these represent males.
Overall, northerly H. erectus populations tend to be shorter than tropical populations, with colder climate populations including Zhoukoudian and Dmanisi averaging roughly , and hotter climate populations including African and Javan H. erectus .
Bone thickness
The strongly developed tori and crests greatly fortify the skull, and the braincase is extremely thickened like in other H. erectus. Similar thickening can also rarely occur in modern humans when the diploë (the spongy cancellous layer between the two hard cortical layers of bone in the skull) abnormally expands, but for Peking Man, all three layers of cranial bone have equally thickened.
The long bones of all H. erectus have thickened cortical bone and consequently narrowed medullary cavities (where the bone marrow is stored). Peking Man has much thicker humeri than African H. erectus. At maximum constriction at the mid-shaft, the femoral walls of Peking Man take up about 90% of the interior space, as opposed to only 75% in modern humans. For the lateral walls (towards the sides), the exorbitant thickness sharply reduces above the greater trochanter, whereas the medial walls (towards the middle) are three times as thick as those of modern humans at that point. In modern humans, the femoral head features two main strips of cancellous bone (spongy interior bone) that converge into a triangle (Ward's triangle), which is absent in Peking Man, likely due to the intense thickening of the cortical bone.
In 1946, Weidenreich forwarded an unpopular hypothesis that Peking Man (and Java Man) inherited the thick bones from gigantic ancestors (plesiomorphy), evidenced by von Koenigswald's enormous Meganthropus and Gigantopithecus, which at the time were classified as ancient human ancestors. Other explanations include a far more violent and impact-prone lifestyle than other Homo, or pathological nutrient deficiencies causing hyperparathyroidism (such as hypocalcemia).
Gallery
Culture
Palaeoenvironment
The mammal assemblage indicates three major environmental units: Layers 11–10 represent a cold, dry, predominantly grassland environment; Layers 9–5 a warm, predominantly forested environment; and Layers 4–1 another cold, dry, predominantly grassland environment.
The mammal assemblage includes macaques, the Zhoukoudian wolf, the Asian black bear, the brown bear, the rhino Dicerorhinus choukoutienensis, the woolly rhinoceros, the horse Equus sanmeniensis, the Siberian musk deer, the giant deer Sinomegaceros pachyosteus, sheep, bison, the Asian straight-tusked elephant, bats, pikas, rodents, and shrews. The mammal assemblage of Layers 4–3 is broadly similar to that of Layers 9–8, but also includes several warm-to-mild climate steppe and forest creatures, including the raccoon dog Nyctereutes sinensis, the dhole Cuon antiquus, the corsac fox, the Asian badger, wolverines, the giant hyena Pachycrocuta, the sabre-toothed cat Machairodus inexpectatus, the tiger, the leopard, sika deer, the antelope Spirocerus peii, and the water buffalo Bubalus teilhardi. The Zhoukoudian fauna are not entirely exclusive to either glacial or interglacial periods.
H. erectus seems to have typically favoured open environments. It is debated whether Peking Man occupied the region during colder glacial periods or only took up residence during warmer interglacials, a question tied to the uncertain chronology of Zhoukoudian as well as to arguments regarding fire usage, clothing technology, and hunting ability. Given the abundance of deer remains, it was assumed quite early on that Peking Man was a prolific deer hunter, but since non-human carnivores were established as a major depositional agent, the dependence on hunting has become a controversial topic. Indeed, most of the Peking Man fossils were at least fed upon, most likely by hyenas. Nonetheless, some of the animal fossils do seem to have been modified by humans. In 1986, American archaeologist Lewis Binford and colleagues reported a few horse fossils with cutmarks left by stone tools, as well as two upper premolars from Layer 4 that appeared to him to have been burned while still fresh, which he ascribed to horse-head roasting. Binford believed that Peking Man was simply scavenging from hyenas, because the tool cuts he analysed always overlapped hyena gnaw marks rather than the reverse. Zhoukoudian also preserves the remains of edible plants, nuts, and seeds which Peking Man may have been eating: Chinese hackberry, walnut, hazelnut, pine, elm, and rambler rose.
H. erectus, a specialist in woodland and savannah biomes, likely went extinct as tropical rainforest spread across its range. From Marine Isotope Stages 12–10 (roughly 500 to 340 thousand years ago), the Chinese archaeological record becomes dominated by "late-archaic" non-erectus fossils, potentially representing multiple species including the Denisovans. Peking Man's final stay at Zhoukoudian may have taken place sometime between 400,000 and 230,000 years ago, though a more exact interval is difficult to establish.
Occupation of the cave
Because human remains (encompassing males, females, and children), tools, and potential evidence of fire were found in so many layers, it has often been assumed Peking Man lived in the cave for hundreds of thousands of years.
In 1929, French archaeologist Henri Breuil noted the conspicuous overabundance of skulls compared to body remains, and hypothesised that the remains represented the trophies of cannibalistic headhunters, either a band of H. erectus or a more "advanced" species of human. In 1937, French palaeoanthropologist Marcellin Boule, judging from the brain size, believed the Peking Man brain was insufficiently evolved for such behaviour, and suggested the skulls belonged to a primitive species and the limbs to a more evolved one, the latter manufacturing stone tools and cannibalising the former. Weidenreich did not believe brain size could be a dependable measure of cultural complexity, but, in 1939, he detailed the pathology of the Peking Man fossils and likewise concluded cannibalism or headhunting. The majority of the remains bear scars or injuries which he ascribed to attacks with clubs or stone tools; all the skulls have broken-in bases, which he believed was done to extract the brain; and the femora have lengthwise splits, which he supposed was done to harvest the bone marrow.
Weidenreich's sentiments became widely popular. Another school of thought, proposed by Pei in 1929, held that the individuals were dragged in by hyenas. In 1939, a German palaeontologist, pioneering the field of taphonomy (the study of fossilisation), highlighted parallels between the Zhoukoudian fossils and hyena-gnawed cow bones he had studied at Vienna Zoo. Weidenreich subsequently conceded in 1941 that the breaking-off of the epiphyses of the long bones was most likely due to hyena activity, but he was unconvinced that hyenas could have broken open the skull bases or created the long splits in the robust femora, still ascribing those to stone-tool-wielding cannibals. In addition to carnivore damage, Skull V bears a lesion on the right brow consistent with non-fatal blunt force trauma, which could have been caused by a human attack or by some accidental bump or fall.
By the mid-20th century, the hypothesis that Peking Man inhabited the cave had once again become the mainstay, later epitomised by Jia's 1975 book The Cave Home of Peking Man. In 1985, Binford and Chinese palaeoanthropologist Ho Chuan Kun instead hypothesised that Zhoukoudian was a "trap" into which humans and animals fell. They further proposed that the deer remains, earlier assumed to have been Peking Man's prey, were instead predominantly carried in by the giant hyena Pachycrocuta, and that the ash was deposited by naturally occurring wildfires fuelled by bat guano, as they did not believe any human species had yet mastered hunting or fire at this time. In 2001, American geologist Paul Goldberg, Israeli archaeologist Steve Weiner, and colleagues determined that there is no evidence of any fire or ash at all at Zhoukoudian.
In 2000, American anthropologist Noel T. Boaz and colleagues argued the state of the bones is consistent with general hyena biting, gnawing, and bone-crunching, and suggested that Pachycrocuta — the largest known hyena to have ever lived — was more than capable of splitting robust bones, contrary to Weidenreich. They identified bite marks on 67% of the Peking Man fossils (28 specimens), and attributed this and all other perimortem (around the time of death) damage to hyenas. Boaz and colleagues conceded that stone tools must indicate human activity in (or at least near) the cave, but, with few exceptions, tools were randomly scattered across the layers (as mentioned by several previous scientists), which Goldberg and colleagues ascribed to bioturbation. This means that the distribution of the tools gives no indication of the duration of human habitation. In 2016, Shuangquan Zhang and colleagues were unable to detect significant evidence of animal, human, or water damage to the few deer bones collected from Layer 3, and concluded they simply fell into the cave from above. They noted taphonomic debates are nonetheless still ongoing. Indeed, the fire debate is still heated, with Chinese palaeoanthropologist Xing Gao and colleagues declaring "clear-cut evidence for intentional fire use" in Layer 4 in 2017, echoed by Chinese palaeoanthropologist Chao Huang and colleagues in 2022.
Society
During the Mao era, the dissemination of communist ideology among the general populace was imperative. The dictum that "labour created humanity", advanced by prominent communist Friedrich Engels in his 1876 essay "The Part Played by Labour in the Transition from Ape to Man", became central to Chinese anthropology, and was included in almost any discussion regarding human evolution — including educational media for laypersons. Engels supposed that walking upright instead of on all fours, as other apes do, freed the hands for labour, facilitating the evolution of all characteristically human traits, such as language, cooperation, and most importantly the growth of brain size to "perfection", stating that "the hand is not only the organ of labour, it is also the product of labour." Labour, in this view, stimulates intelligence, which is detected in the archaeological record through stone tools.
As for the society of these ancient humans, including Peking Man, Engels's 1884 book The Origin of the Family, Private Property and the State and his concept of primitive communism became the mainstay. Engels had largely based it on American ethnologist Lewis H. Morgan's 1877 book Ancient Society, which detailed Morgan's studies on "primitive" hunter-gatherer societies, namely the Iroquois. In the Mao era, Peking Man was consequently often painted as leading a dangerous life in the struggle against nature, organised into simple, peaceful tribes which foraged, hunted, and made stone tools in cooperative groups. As for gender roles, Peking Man society was most often described as "men hunt and women gather."
In the West, emphasis was usually placed on intelligence rather than labour, especially after English primatologist Jane Goodall discovered in 1960 that chimpanzees could make tools (i.e., the labour of tool manufacture is not unique to humans). Nonetheless, popular Western and Eastern interpretations of ancient humans at this time converged greatly. In China, the influence of "labour created humanity" and of Engels's rhetoric waned after the rise of Deng, with the dissemination throughout China of Western research and theories contradictory to Maoist ideology, particularly after 1985, though labour was still regarded as an important adaptation. By this time, the concept of labour had expanded from purely manual to also intellectual work; a sense of aesthetics was instead heralded as a uniquely human trait.
Consistent with other prehistoric human populations, Peking Man had a rather short average lifespan. Out of a sample of 38 individuals, 15 died under the age of 14 years (39.5%), 3 died around 30 years (7%), 3 died from 40 to 50 years (7%), and 1 at 50 to 60 years (2.6%). The ages of the remaining 16 individuals (43%) could not be determined.
Stone tools
Despite Zhoukoudian being one of the most productive sites for East Asian stone tools, the IVPP prioritised human and animal fossils, and archaeological research stalled. This strongly contrasts with the rest of the world, especially Europe, where tools and manufacturing techniques have been categorised even at regional levels. Consequently, China's Lower Palaeolithic record has generally been viewed as stagnant. Nonetheless, markers of broader periods in the West are conspicuously rare in the East, most notably the hand axes characteristic of the Acheulean culture (typically associated with western H. erectus and H. heidelbergensis) and the Levallois technique of the Mousterian culture (typically associated with Neanderthals). The apparent technological divide inspired American archaeologist Hallam L. Movius to draw the "Movius Line" in 1948, dividing the East into a "chopping-tool culture" and the West into a "hand axe culture".
Though this division is no longer well supported, given the discovery of some hand axe technology in Middle Pleistocene East Asia, hand axes there remain conspicuously rare and crude compared to western contemporaries. This has been variously explained as follows:
the Acheulean was invented in Africa after human dispersal through East Asia, but this would require that the two populations remained separated for nearly two million years;
East Asia had poorer quality raw materials — namely quartz and quartzite — but hand axes made of these materials have been found in some Chinese localities, and East Asia is not completely void of higher-quality rock;
East Asian H. erectus used biodegradable bamboo instead of stone for chopping tools, but this is difficult to test;
or East Asia had a lower population density, leaving few tools behind in general, but demography is difficult to approximate in the fossil record.
Locality 1 at Zhoukoudian has produced more than 100,000 lithic pieces, a large proportion of which appears to be knapping waste.
The tool assemblage is otherwise characterised mainly by large, dull choppers and simple, sharp flakes. Similarly, modified animal fossils at Zhoukoudian usually exhibit battering or cutting. Peking Man also rarely manufactured scrapers and (towards the later end of occupation) retouched tools such as points and potentially burins, as suggested by Breuil, though Pei and Movius believed his supposed burins were too crude to have been produced intentionally. Breuil also postulated that Peking Man predominantly relied on bone tools made of prey animals' antlers, jaws, and isolated teeth, but this idea did not receive wide support, as many of his supposed bone flakes could easily be ascribed to hyena activity.
In 1979, to highlight technological evolution, Pei and Zhang partitioned the Zhoukoudian industry into three stages:
the early stage, typified by the simple hammer-and-anvil technique (slamming the core against a rock), which produced large flakes, mainly from soft materials such as sandstone, known from Layer 11;
the middle stage, typified by the bipolar technique (smashing the core into several flakes with a hammerstone, out of which at least a few should be of the correct size and shape), which made smaller flakes;
and the late stage, above Layer 5, typified by even smaller flakes made from harder, higher-quality rock such as quartz and flint among other cobbles. Quartz had to be collected some distance from the cave, from local granite outcrops by the hills and the riverbed.
These techniques produced unstandardised tools, and Binford was sceptical that there was any evidence of cultural evolution at all.
The debate as to whether Peking Man was the first human species to manufacture tools played out in the early 1960s, in the period of relative stability between the Great Leap Forward and the Cultural Revolution. The argument centred on whether the Zhoukoudian tools were the most primitive and therefore the earliest tools (i.e., Peking Man was the most ancient human), championed by Pei, or whether there were even more primitive and as yet undiscovered tools (i.e., Peking Man was not the most ancient human), championed by Jia. In Western circles, Leakey had already reported an apparent pebble industry at Olduvai Gorge, Tanzania, in 1931, the first hard (albeit controversial) evidence of a culture more primitive than the Acheulean. Radiometric dating in the 1960s established the Oldowan as the oldest known culture, at 1.8 million years old.
Productive contemporaneous Chinese stone tool sites include Xiaochangliang (similar to Zhoukoudian), Mount Jigong, Bose Basin (which produced large tools often in excess of 10 cm, or 4 in), Jinniushan, Dingcun, and Panxian Dadong.
Fire
In 1929, Pei oversaw the excavation of Quartz Horizon 2 (Layer 7, Locus G) of Zhoukoudian, and reported burned bones and stones, ash, and redbud charcoal, which was interpreted as evidence of early fire usage by Peking Man. The evidence was widely accepted. Further excavation in 1935 of Layers 4–5 revealed more burned stones, ash, and hackberry seeds. Ash was deposited in horizontal and vertical patches, reminiscent of hearths.
In 1985, Binford and Ho doubted Peking Man actually inhabited Zhoukoudian, and asserted the material was burned by naturally occurring fires fuelled by guano; though, the next year, Binford interpreted burned horse teeth as evidence of horse-head roasting. In 1998, Weiner, Goldberg, and colleagues found no evidence of hearths or siliceous aggregates (silica-rich particles which form during combustion) in Layers 1 or 10; they therefore concluded the burned material had simply been washed into the cave rather than being burned within it. The IVPP immediately responded, and, in 1999, Wu Xinzhi argued that Weiner's data were too limited to reach such conclusions. In 2001, Goldberg, Weiner, and colleagues concluded the ash layers are reworked loessic silts, and that the blackened carbon-rich sediments traditionally interpreted as charcoal are instead deposits of organic matter left to decompose in standing water. That is, there is no evidence of ash or fire at all.
Nonetheless, in 2004, Shen and colleagues reported evidence of a massive fire at Layer 10 — ostensibly as old as 770,000 years ago, during a glacial period — and asserted Peking Man needed to control fire so far back in time in order to survive such cold conditions. In 2014, Chinese anthropologist Maohua Zhong and colleagues reported elements associated with siliceous aggregates in Layers 4 and 6, and they also doubted the validity of Weiner's analysis of Layer 10. Similarly, in 2017, Gao and colleagues reported "clear-cut evidence of fire usage" in Layer 4 with some evidence of manmade hearths which, based on magnetic susceptibility and colour, may have been heated to over . In 2022, Huang and colleagues also determined that at least 15 bones from Layer 4 (based on colour) were heated to above inside the cave, consistent with a campfire (or a prolonged wildfire, which they considered less likely inside a cave).
Elsewhere, evidence of fire usage is scarce in the archaeological record until 400 to 300 thousand years ago, which is generally interpreted as indicating that fire was not an integral part of human life until this time, either because humans could not yet reliably create fire or because they could not maintain it well.
| Biology and health sciences | Homo | Biology |
253350 | https://en.wikipedia.org/wiki/Old%20World%20porcupine | Old World porcupine | The Old World porcupines, or Hystricidae, are large terrestrial rodents, distinguished by the spiny covering from which they take their name. They range over the south of Europe and the Levant, most of Africa, India, and Southeast Asia as far east as Flores. Although both the Old World and New World porcupine families belong to the infraorder Hystricognathi of the vast order Rodentia, they are quite different and are not particularly closely related.
Characteristics
Old World porcupines are stout, heavily built animals, with blunt, rounded heads, fleshy, mobile snouts, and coats of thick cylindrical or flattened spines, which form the whole covering of their bodies, and are not intermingled with ordinary hairs. The habits of most species are strictly terrestrial. They vary in size from the relatively small long–tailed porcupine with body lengths of , and a weight of , to the much larger crested porcupines, which are long, discounting the tail, and weigh from .
The various species are typically herbivorous, eating fruit, roots, and bulbs. Some species also gnaw on dry bones, perhaps as a source of calcium. Like other rodents, they have powerful gnawing incisors and no canine teeth. Their dental formula is . The prominent diastema allows the lips to be drawn inwards while gnawing. As in other hystricomorphs, their chewing muscles are distinctive: an arm of the masseter muscle passes through the infraorbital foramen, making chewing movements very efficient.
One or two (or, rarely, three) young are born after a gestation period between 90 and 112 days, depending on the species. Females typically give birth only once a year, in a grass-lined underground chamber within a burrow system. The young are born more or less fully developed, and the spines, which are initially soft, harden within a few hours of birth. Although they begin to take solid food within two weeks, they are not fully weaned until 13 to 19 weeks after birth. The young remain with the colony until they reach sexual maturity at around two years of age, and share the burrow system with their parents and siblings from other litters. Males, in particular, help defend the colony from intruders, although both sexes are aggressive towards unrelated porcupines.
These rodents are also characterized by the imperfectly rooted cheek-teeth, imperfect clavicles or collar-bones, cleft upper lip, rudimentary first front-toes, smooth soles, six teats arranged on the side of the body, and many cranial characters.
Species
Of the three genera, Hystrix is characterized by an inflated skull, in which the nasal cavity is often considerably larger than the brain case, and a short tail, tipped with numerous slender-stalked open quills, which make a rattling noise whenever the animal moves. When threatened, most porcupines will wag their tails, making a louder rattling noise to scare off predators. The African brush-tailed porcupine (A. africanus) will simultaneously raise sharp quills, 40 cm (16 inches) in length, on its back and sides.
The crested porcupine (Hystrix cristata), a typical representative of the Old World porcupines, occurs throughout the south of Europe and North and West Africa. It is replaced in southern and central Africa by the Cape porcupine, H. africaeaustralis, and in India by the Malayan porcupine (H. brachyura) and Indian (crested) porcupine (H. indica). The latter also lives throughout the Middle East.
Besides these large-crested species, several smaller species without crests occur in northeast India, and the Malay region from Nepal to Borneo.
The genus Atherurus includes the brush-tailed porcupines which are much smaller animals, with long tails tipped with bundles of flattened spines. One species is found in the Malay region and one in Central and West Africa. The latter species, the African brush-tailed porcupine, is often hunted for its meat.
Trichys, the last genus, contains one species, the long-tailed porcupine (T. fasciculata) of Borneo. This species is externally very similar to Atherurus, but differs from the members of that genus in many cranial characteristics.
Fossil species are also known from Africa and Eurasia, with one of the oldest being Sivacanthion from the Miocene of present-day Pakistan. However, it was probably not a direct ancestor of modern porcupines.
Species list
The extant species and fossil genera are:
Family Hystricidae
Hystrix
Subgenus Acanthion
Malayan porcupine (H. brachyura)
Sunda porcupine (H. javanica)
Subgenus Hystrix
Cape porcupine (H. africaeaustralis)
Crested porcupine (H. cristata)
Indian porcupine (H. indica)
Subgenus Thecurus
Thick-spined porcupine (H. crassispinis)
Philippine porcupine (H. pumila)
Sumatran porcupine (H. sumatrae)
†Hystrix arayanensis
†Hystrix depereti
†Hystrix paukensis
†Hystrix primigenia
†Hystrix refossa
†Miohystrix
†Xenohystrix
†Sivacanthion
Atherurus
African brush-tailed porcupine (A. africanus)
Asiatic brush-tailed porcupine (A. macrourus)
Trichys
Long-tailed porcupine (T. fasciculata)
| Biology and health sciences | Rodents | Animals |
253410 | https://en.wikipedia.org/wiki/Dickinsonia | Dickinsonia | Dickinsonia is a genus of extinct organism, most likely an animal, that lived during the late Ediacaran period in what is now Australia, China, Russia, and Ukraine. It is one of the best known members of the Ediacaran biota. The individual Dickinsonia typically resembles a bilaterally symmetrical ribbed oval. Its affinities are presently unknown; its mode of growth has been considered consistent with a stem-group bilaterian affinity, though various other affinities have been proposed. It lived during the late Ediacaran (final part of Precambrian). The discovery of cholesterol molecules in fossils of Dickinsonia lends support to the idea that Dickinsonia was an animal, though these results have been questioned.
Description
Dickinsonia fossils are known only in the form of imprints and casts in sandstone beds. The specimens found range from a few millimetres to about in length, and from a fraction of a millimetre to a few millimetres thick. They are nearly bilaterally symmetric, segmented, round or oval in outline, slightly expanded to one end (i.e. egg-shaped outline). The rib-like segments are radially inclined towards the wide and narrow ends, and the width and length of the segments increases towards the wide end of the fossil. The body is divided into two by a midline ridge or groove, except for a single unpaired segment at one end, dubbed the "anterior most unit" suggested to represent the front of the organism. It is disputed whether the segments are offset from each other following glide reflection, and are thus isomers, or whether the segments are symmetric across the midline, and thus follow true bilateral symmetry, as the specimens displaying the offset may be the result of taphonomic distortion. The number of segments/isomer pairs varies from 12 in smaller individuals to 74 in the largest Australian specimens.
The body of Dickinsonia is suggested to have been sack-like, with the outer layer being made of a resistant but unmineralised material. Some specimens from Russia show the presence of branched internal structures. Some authors have suggested that the underside of the body bore cilia, as well as infolded pockets.
Dickinsonia is suggested to have grown by adding a new pair of segments/isomers at the end opposite the unpaired "anterior most unit". Dickinsonia probably exhibited indeterminate growth (having no maximum size), though it is suggested that the addition of new segments slowed down later in growth. Deformed specimens from Russia indicate that individuals of Dickinsonia could regenerate after being damaged.
Ecology
Dickinsonia is suggested to have been a mobile marine organism that lived on the seafloor and fed by consuming microbial mats growing on the seabed, using structures present on its underside. Dickinsonia-shaped trace fossils, presumed to represent feeding impressions and sometimes found in chains demonstrating this behaviour, have been observed; these trace fossils have been assigned to the genus Epibaion. A 2022 study suggested that Dickinsonia temporarily adhered itself to the seafloor using mucus, which may have been an adaptation to living in very shallow water environments.
Discovery
The first specimens of this fossil organism were discovered in the Ediacara Member of the Rawnsley Quartzite, Flinders Ranges, in South Australia. Reg Sprigg, the original discoverer of the Ediacaran biota in Australia, described Dickinsonia, naming it after Ben Dickinson, then Director of Mines for South Australia and head of the government department that employed Sprigg. Additional specimens of Dickinsonia are known from the Mogilev Formation in the Dniester River Basin of Podolia, Ukraine; the Lyamtsa, Verkhovka, Zimnegory and Yorga Formations in the White Sea area of the Arkhangelsk Region and the Chernokamen Formation of the Central Urals, Russia (deposits dated to 567–550 Myr); and the Dengying Formation in the Yangtze Gorges area, South China (ca. 551–543 Ma).
Taphonomy
As a rule, Dickinsonia fossils are preserved as negative impressions ("death masks") on the bases of sandstone beds. Such fossils are imprints of the upper sides of the benthic organisms that have been buried under the sand. The imprints formed as a result of cementation of the sand before complete decomposition of the body. The mechanism of cementation is not quite clear; among many possibilities, the process could have arisen from conditions which gave rise to pyrite "death masks" on the decaying body, or perhaps it was due to the carbonate cementation of the sand. The imprints of the bodies of organisms are often strongly compressed, distorted, and sometimes partly extend into the overlying rock. These deformations appear to show attempts by the organisms to escape from the falling sediment.
Rarely, Dickinsonia has been preserved as casts in massive sandstone lenses, where it occurs together with Pteridinium, Rangea and some others. Large beds containing many hundreds of Dickinsonia (along with many other species) are preserved in situ within Nilpena Ediacara National Park, with park rangers providing on-site guided tours in the cooler months of the year. These specimens are the products of events in which organisms were first stripped from the sea floor, then transported and deposited within a sand flow. In such cases, stretched and ripped Dickinsonia occur. The first such specimen was described as a separate genus and species, Chondroplon bilobatum, and later re-identified as Dickinsonia.
Taxonomy
Species
Since 1947, a total of nine species have been described, of which three are currently considered valid:
A claimed specimen of Dickinsonia from India was later determined to be the remains of a beehive.
External relationships
Dickinsonia is classified as part of the group Proarticulata or Dickinsoniomorpha. Proarticulata includes a number of morphologically similar organisms, such as Spriggina, Yorgia, Andiva and Cephalonega, which share the same segmented articulation. The affinities of Proarticulata to other organisms, including other members of the Ediacaran biota such as rangeomorphs, have long been contentious. It has historically been proposed that most Ediacaran organisms were closely related to each other as part of the grouping "Vendobionta", though recent authors argue that this grouping as a whole is likely to be polyphyletic. Gregory Retallack has proposed that the fossils of Dickinsonia and other Ediacaran biota represent lichens that grew in a terrestrial environment, but this has been broadly rejected by other authors, who argue that a marine environment of deposition better fits the available evidence. Other proposals have included giant protists, as suggested by Adolf Seilacher. Most modern research suggests that Dickinsonia and other proarticulatans are likely to be animals, possibly belonging to Eumetazoa. A chemical study of Russian specimens found that they were enriched with cholesterol, which is only produced by animals, supporting an animal affinity, though these results have been questioned by other authors, who consider the association between the cholesterol molecules and the Dickinsonia fossils not to be definitive. Within Animalia, a number of affinities have been proposed, including as stem-eumetazoans forming a clade with rangeomorphs, to Placozoa, and to Cnidaria. A number of researchers have proposed close affinities to Bilateria, based on the bilateral or nearly bilateral organisation of proarticulatans, though proarticulatans are not likely to be members of the bilaterian crown group.
| Biology and health sciences | Other | Animals |
253840 | https://en.wikipedia.org/wiki/Bus%20station | Bus station | A bus station or a bus interchange is a structure where city buses or intercity buses stop to pick up and drop off passengers. While the term bus depot can also be used to refer to a bus station, it can also refer to a bus garage. A bus station is larger than a bus stop, which is usually simply a place on the roadside, where buses can stop. It may be intended as a terminal station for a number of routes, or as a transfer station where the routes continue.
Bus station platforms may be assigned to fixed bus lines, or variable in combination with a dynamic passenger information system. The latter requires fewer platforms, but does not provide consistent locations for passengers.
Largest bus stations
Kilambakkam bus terminus in Chennai is spread over an area of , making it the largest bus station in the world.
The Woodlands Bus Interchange in Singapore is one of the busiest bus interchanges in the world, handling up to 400,000 passengers daily across 42 bus services. Other Singaporean bus interchanges, such as Bedok Bus Interchange, Tampines Bus Interchange and Yishun Bus Interchange, handle similar numbers of passengers daily.
The largest underground bus station in Europe is Kamppi Centre in Helsinki, Finland, completed in 2006. The terminal cost €100 million and took three years to design and build. Today, the bus terminal, which covers 25,000 square meters, is the busiest bus terminal in Finland, with around 700 bus departures every day, transporting approximately 170,000 passengers.
Preston Bus Station in Preston, England, built in 1969 and later heritage-listed, was described in 2014 as "depending on how you measure it, the largest bus station in the world, the second-biggest in Europe, and the longest in Europe". It was fully refurbished in 2018.
The largest bus terminal in North America is the Port Authority Bus Terminal located in New York City. The terminal is located in Midtown at 625 Eighth Avenue between 40th Street and 42nd Street, one block east of the Lincoln Tunnel and one block west of Times Square. The terminal is the largest in the Western Hemisphere and the busiest in the world by volume of traffic, serving about 8,000 buses and 225,000 people on an average weekday and more than 65 million people a year. It has 223 gates. It serves intercity bus routes all over the United States and some routes with international destinations, mostly in Canada, most of them operated by Greyhound Lines.
The largest bus terminal in the southern hemisphere is the Tietê Bus Terminal in São Paulo, Brazil. It is also the second-busiest in the world, serving about 90,000 people per weekday on 300 bus lines across its 89 platforms (72 for boarding and 17 for alighting), with services to over 1,000 cities across Brazil and South America. The terminal is also linked to Portuguesa-Tietê, an adjacent metro station.
| Technology | Concepts of ground transport | null |
254062 | https://en.wikipedia.org/wiki/Mast%20cell | Mast cell | A mast cell (also known as a mastocyte or a labrocyte) is a resident cell of connective tissue that contains many granules rich in histamine and heparin. Specifically, it is a type of granulocyte derived from the myeloid stem cell that is a part of the immune and neuroimmune systems. Mast cells were discovered by Friedrich von Recklinghausen and later rediscovered by Paul Ehrlich in 1877. Although best known for their role in allergy and anaphylaxis, mast cells play an important protective role as well, being intimately involved in wound healing, angiogenesis, immune tolerance, defense against pathogens, and vascular permeability in brain tumors.
The mast cell is very similar in both appearance and function to the basophil, another type of white blood cell. Although mast cells were once thought to be tissue-resident basophils, it has been shown that the two cells develop from different hematopoietic lineages and thus cannot be the same cells.
Structure
Mast cells are very similar to basophil granulocytes (a class of white blood cells) in blood, in the sense that both are granulated cells that contain histamine and heparin, an anticoagulant. Their nuclei differ in that the basophil nucleus is lobated while the mast cell nucleus is round. The Fc region of immunoglobulin E (IgE) becomes bound to mast cells and basophils, and when IgE's paratopes bind to an antigen, it causes the cells to release histamine and other inflammatory mediators. These similarities have led many to speculate that mast cells are basophils that have "homed in" on tissues. Furthermore, they share a common precursor in bone marrow expressing the CD34 molecule. Basophils leave the bone marrow already mature, whereas the mast cell circulates in an immature form, only maturing once in a tissue site. The site an immature mast cell settles in probably determines its precise characteristics. The first in vitro differentiation and growth of a pure population of mouse mast cells was carried out using conditioned medium derived from concanavalin A-stimulated splenocytes. Later, it was discovered that T cell-derived interleukin 3 was the component present in the conditioned media that was required for mast cell differentiation and growth.
Mast cells in rodents are classically divided into two subtypes: connective tissue-type mast cells and mucosal mast cells. The activities of the latter are dependent on T-cells.
Mast cells are present in most tissues characteristically surrounding blood vessels, nerves and lymphatic vessels, and are especially prominent near the boundaries between the outside world and the internal milieu, such as the skin, mucosa of the lungs, and digestive tract, as well as the mouth, conjunctiva, and nose.
Function
Mast cells play a key role in the inflammatory process. When activated, a mast cell can either selectively release (piecemeal degranulation) or rapidly release (anaphylactic degranulation) "mediators", or compounds that induce inflammation, from storage granules into the local microenvironment. Mast cells can be stimulated to degranulate by allergens through cross-linking with immunoglobulin E receptors (e.g., FcεRI), physical injury through pattern recognition receptors for damage-associated molecular patterns (DAMPs), microbial pathogens through pattern recognition receptors for pathogen-associated molecular patterns (PAMPs), and various compounds through their associated G-protein coupled receptors (e.g., morphine through opioid receptors) or ligand-gated ion channels. Complement proteins can activate membrane receptors on mast cells to exert various functions as well.
Mast cells express a high-affinity receptor (FcεRI) for the Fc region of IgE, the least-abundant member of the antibodies. This receptor is of such high affinity that binding of IgE molecules is in essence irreversible. As a result, mast cells are coated with IgE, which is produced by plasma cells (the antibody-producing cells of the immune system). IgE antibodies are typically specific to one particular antigen.
In allergic reactions, mast cells remain inactive until an allergen binds to IgE already coated upon the cell. Other membrane activation events can either prime mast cells for subsequent degranulation or act in synergy with FcεRI signal transduction. In general, allergens are proteins or polysaccharides. The allergen binds to the antigen-binding sites, which are situated on the variable regions of the IgE molecules bound to the mast cell surface. It appears that binding of two or more IgE molecules (cross-linking) is required to activate the mast cell. The clustering of the intracellular domains of the cell-bound Fc receptors, which are associated with the cross-linked IgE molecules, causes a complex sequence of reactions inside the mast cell that lead to its activation. Although this reaction is most well understood in terms of allergy, it appears to have evolved as a defense system against parasites and bacteria.
Mast cells (MCs) have been shown to release their nuclear DNA and subsequently form mast cell extracellular traps (MCETs) comparable to neutrophil extracellular traps, which are able to entrap and kill various microbes. https://pmc.ncbi.nlm.nih.gov/articles/PMC4947581/
Mast cell mediators
A unique, stimulus-specific set of mast cell mediators is released through degranulation following the activation of cell surface receptors on mast cells. Examples of mediators that are released into the extracellular environment during mast cell degranulation include:
serine proteases, such as tryptase and chymase
histamine (2–5 picograms per mast cell)
serotonin
proteoglycans, mainly heparin (active as anticoagulant) and some chondroitin sulfate proteoglycans
adenosine triphosphate (ATP)
lysosomal enzymes
β-hexosaminidase
β-glucuronidase
arylsulfatases
newly formed lipid mediators (eicosanoids):
thromboxane
prostaglandin D2
leukotriene C4
platelet-activating factor
cytokines
TNF-α
basic fibroblast growth factor
interleukin-4
stem cell factor
chemokines, such as eosinophil chemotactic factor
reactive oxygen species
Histamine dilates post-capillary venules, activates the endothelium, and increases blood vessel permeability. This leads to local edema (swelling), warmth, redness, and the attraction of other inflammatory cells to the site of release. It also depolarizes nerve endings (leading to itching or pain). Cutaneous signs of histamine release are the "flare and wheal"-reaction. The bump and redness immediately following a mosquito bite are a good example of this reaction, which occurs seconds after challenge of the mast cell by an allergen.
The other physiologic activities of mast cells are much less-understood. Several lines of evidence suggest that mast cells may have a fairly fundamental role in innate immunity: They are capable of elaborating a vast array of important cytokines and other inflammatory mediators such as TNF-α; they express multiple "pattern recognition receptors" thought to be involved in recognizing broad classes of pathogens; and mice without mast cells seem to be much more susceptible to a variety of infections.
Mast cell granules carry a variety of bioactive chemicals. These granules have been found to be transferred to adjacent cells of the immune system and neurons in a process of transgranulation via mast cell pseudopodia.
In the nervous system
Unlike other hematopoietic cells of the immune system, mast cells naturally occur in the human brain where they interact with the neuroimmune system. In the brain, mast cells are located in a number of structures that mediate visceral sensory (e.g. pain) or neuroendocrine functions or that are located along the blood–cerebrospinal fluid barrier, including the pituitary stalk, pineal gland, thalamus, and hypothalamus, area postrema, choroid plexus, and in the dural layer of the meninges near meningeal nociceptors. Mast cells serve the same general functions in the body and central nervous system, such as effecting or regulating allergic responses, innate and adaptive immunity, autoimmunity, and inflammation. Across systems, mast cells serve as the main effector cell through which pathogens can affect the gut–brain axis.
In the gut
In the gastrointestinal tract, mucosal mast cells are located in close proximity to sensory nerve fibres, which communicate bidirectionally. When these mast cells initially degranulate, they release mediators (e.g., histamine, tryptase, and serotonin) which activate, sensitize, and upregulate membrane expression of nociceptors (i.e., TRPV1) on visceral afferent neurons via their receptors (respectively, HRH1, HRH2, HRH3, PAR2, 5-HT3); in turn, neurogenic inflammation, visceral hypersensitivity, and intestinal dysmotility (i.e., impaired peristalsis) result. Neuronal activation induces neuropeptide (substance P and calcitonin gene-related peptide) signaling to mast cells where they bind to their associated receptors and trigger degranulation of a distinct set of mediators (β-Hexosaminidase, cytokines, chemokines, PGD2, leukotrienes, and eoxins).
Physiology
Structure of the high-affinity IgE receptor, FcεR1
FcεR1 is a high-affinity IgE receptor expressed on the surface of the mast cell. FcεR1 is a tetramer made of one alpha (α) chain, one beta (β) chain, and two identical, disulfide-linked gamma (γ) chains. The binding site for IgE is formed by the extracellular portion of the α chain, which contains two immunoglobulin-like domains. The α chain has one transmembrane domain containing an aspartic acid residue and a short cytoplasmic tail. The β chain contains a single immunoreceptor tyrosine-based activation motif (ITAM) in its cytoplasmic region, and each γ chain has one ITAM in its cytoplasmic region. The signaling cascade from the receptor is initiated when the ITAMs of the β and γ chains are phosphorylated by a tyrosine kinase; this signal is required for the activation of mast cells. Type 2 helper T cells (Th2) and many other cell types lack the β chain, so signaling is mediated only by the γ chain. This is because the α chain contains endoplasmic reticulum retention signals that cause the α chain to be retained and degraded in the ER. Assembly of the α chain with co-transfected β and γ chains masks the ER retention signal and allows the αβγ complex to be exported via the Golgi apparatus to the plasma membrane in rats. In humans, only the γ complex is needed to counterbalance the α chain ER retention.
Allergen process
Allergen-mediated FcεR1 cross-linking signals are very similar to the signaling events triggered by antigen binding to lymphocytes. The Lyn tyrosine kinase is associated with the cytoplasmic end of the FcεR1 β chain. The antigen cross-links the FcεR1 molecules, and Lyn tyrosine kinase phosphorylates the ITAMs of the FcεR1 β and γ chains in the cytoplasm. Upon phosphorylation, the Syk tyrosine kinase is recruited to the ITAMs located on the γ chains, which activates Syk and causes it to become phosphorylated. Syk acts as a signal-amplifying kinase because it targets multiple proteins and activates them. This antigen-stimulated phosphorylation activates other proteins in the FcεR1-mediated signaling cascade.
Degranulation and fusion
An important adaptor protein activated by the Syk phosphorylation step is the linker for activation of T cells (LAT). LAT can be modified by phosphorylation to create novel binding sites. Phospholipase C gamma (PLCγ) becomes phosphorylated once bound to LAT, and then catalyzes the breakdown of phosphatidylinositol bisphosphate to yield inositol trisphosphate (IP3) and diacylglycerol (DAG). IP3 elevates calcium levels, and DAG activates protein kinase C (PKC). This is not the only way that PKC is activated: the tyrosine kinase FYN phosphorylates Grb2-associated-binding protein 2 (Gab2), which binds to phosphoinositide 3-kinase, which in turn activates PKC. PKC leads to phosphorylation of the myosin light chain, which disassembles the actin–myosin complexes and allows granules to come into contact with the plasma membrane. The mast cell granule can then fuse with the plasma membrane, a process mediated by the soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) complex. Different SNARE proteins interact to form different complexes that catalyze fusion. Rab3 guanosine triphosphatases and Rab-associated kinases and phosphatases regulate granule membrane fusion in resting mast cells.
MRGPRX2 mast cell receptor
The human mast-cell-specific G-protein-coupled receptor MRGPRX2 plays a key role in recognizing pathogen-associated molecular patterns (PAMPs) and initiating an antibacterial response. MRGPRX2 is able to bind competence-stimulating peptide (CSP) 1, a quorum sensing molecule (QSM) produced by Gram-positive bacteria. This leads to signal transduction via a G protein and activation of the mast cell. Mast cell activation induces the release of antibacterial mediators, including ROS, TNF-α and PGD2, which initiate the recruitment of other immune cells to inhibit bacterial growth and biofilm formation.
The MRGPRX2 receptor is a possible therapeutic target and can be pharmacologically activated using the agonist compound 48/80 to control bacterial infection. It has also been hypothesised that other QSMs, and even signals from Gram-negative bacteria, can activate this receptor. This has been proposed in particular for chronic Bartonella infections, in which patients have been reported to show a mast cell activation syndrome attributed to an as yet unidentified quorum sensing molecule (possibly basal histamine itself). Such patients are said to be prone to food intolerance driven by a pathway less specific than the IgE receptor pathway, presumed to be the MRGPRX2 route, and to show cyclical skin pathergy and dermographism whenever the bacteria exit their hidden intracellular location.
Enzymes
Clinical significance
Parasitic infections
Mast cells are activated in response to infection by pathogenic parasites, such as certain helminths and protozoa, through IgE signaling. Various species known to be affected include T. spiralis, S. ratti, and S. venezuelensis. This is accomplished via Type 2 cell-mediated effector immunity, which is characterized by signaling from IL-4, IL-5, and IL-13. It is the same immune response that is responsible for allergic inflammation more generally, and includes effectors beyond mast cells. In this response, mast cells are known to release significant quantities of IL-4 and IL-13 along with mast cell chymase 1 (CMA1), which is considered to help expel some worms by increasing vascular permeability.
Mast cell activation disorders
Mast cell activation disorders (MCAD) are a spectrum of immune disorders that are unrelated to pathogenic infection and involve similar symptoms that arise from secreted mast cell intermediates, but differ slightly in their pathophysiology, treatment approach, and distinguishing symptoms. The classification of mast cell activation disorders was laid out in 2010.
Allergic disease
Allergies are mediated through IgE signaling which triggers mast cell degranulation. Recently, IgE-independent "pseudo-allergic" reactions are thought to also be mediated via the MRGPRX2 receptor activation of mast cells (e.g. drugs such as muscle relaxants, opioids, Icatibant and fluoroquinolones).
Many forms of cutaneous and mucosal allergy are mediated in large part by mast cells; they play a central role in asthma, eczema, itch (from various causes), allergic rhinitis and allergic conjunctivitis. Antihistamine drugs act by blocking histamine action on nerve endings. Cromoglicate-based drugs (sodium cromoglicate, nedocromil) block a calcium channel essential for mast cell degranulation, stabilizing the cell and preventing release of histamine and related mediators. Leukotriene antagonists (such as montelukast and zafirlukast) block the action of leukotriene mediators and are being used increasingly in allergic diseases.
Calcium triggers the secretion of histamine from mast cells after previous exposure to sodium fluoride. The secretory process can be divided into a fluoride-activation step and a calcium-induced secretory step. It was observed that the fluoride-activation step is accompanied by an elevation of cyclic adenosine monophosphate (cAMP) levels within the cells. The attained high levels of cAMP persist during histamine release. It was further found that catecholamines do not markedly alter the fluoride-induced histamine release. It was also confirmed that the second, but not the first, step in sodium fluoride-induced histamine secretion is inhibited by theophylline. Vasodilation and increased permeability of capillaries are a result of both H1 and H2 receptor types.
Stimulation of histamine activates a histamine (H2)-sensitive adenylate cyclase of oxyntic cells, and there is a rapid increase in cellular [cAMP] that is involved in activation of H+ transport and other associated changes of oxyntic cells.
Anaphylaxis
In anaphylaxis (a severe systemic reaction to allergens, such as nuts, bee stings, or drugs), the body-wide degranulation of mast cells leads to vasodilation and, if severe, symptoms of life-threatening shock. Products released from these granules include histamine, serotonin, heparin, chondroitin sulphate, tryptase, chymase, carboxypeptidase, and TNF-α. These can vary in their quantities and proportions between individuals, which may explain some of the differences in symptoms seen across patients.
Histamine is a vasodilatory substance released during anaphylaxis.
Autoimmunity
Mast cells may be implicated in the pathology associated with autoimmune, inflammatory disorders of the joints. They have been shown to be involved in the recruitment of inflammatory cells to the joints (e.g., rheumatoid arthritis) and skin (e.g., bullous pemphigoid), and this activity is dependent on antibodies and complement components.
Mastocytosis and clonal disorders
Mastocytosis is a rare clonal mast cell disorder involving the presence of too many mast cells (mastocytes) and CD34+ mast cell precursors. Mutations in c-Kit are associated with mastocytosis. More specifically, the majority (>80%) of patients with mastocytosis have a mutation at codon 816 in the kinase domain of KIT, known as the KIT D816V mutation. This mutation, as well as expression of either CD2 or CD25 (confirmed by immunostaining or flow cytometry), are characteristic of primary clonal/monoclonal mast cell activation syndrome (CMCAS/MMAS). The most commonly affected organs in mastocytosis are the skin and bone marrow.
Monoclonal disorders
Neoplastic disorders
Mastocytomas, or mast cell tumors, can secrete excessive quantities of degranulation products. They are often seen in dogs and cats. Other neoplastic disorders associated with mast cells include mast cell sarcoma and mast cell leukemia.
Mast cell activation syndrome
Mast cell activation syndrome (MCAS) is an idiopathic immune disorder that involves recurrent and excessive mast cell degranulation and which produces symptoms that are similar to other mast cell activation disorders. The syndrome is diagnosed based upon four sets of criteria involving treatment response, symptoms, a differential diagnosis, and biomarkers of mast cell degranulation.
History
Mast cells were first described by Paul Ehrlich in his 1878 doctoral thesis on the basis of their unique staining characteristics and large granules. These granules also led him to the incorrect belief that they existed to nourish the surrounding tissue, so he named them Mastzellen (from the German Mast, 'fattening', as of animals). They are now considered to be part of the immune system.
Research
Autism
Research into an immunological contribution to autism suggests that autism spectrum disorder (ASD) children may present with "allergic-like" problems in the absence of elevated serum IgE and chronic urticaria, suggesting non-allergic mast cell activation in response to environmental and stress triggers. This mast cell activation could contribute to brain inflammation and neurodevelopmental problems.
Histological staining
Toluidine blue: one of the most common stains for acid mucopolysaccharides and glycosaminoglycans, components of mast cell granules.
Bismarck brown: stains mast cell granules brown.
Surface markers: cell surface markers of mast cells were discussed in detail by Heneberg, who noted that mast cells may be inadvertently included in stem or progenitor cell isolates, since some of them are positive for the CD34 antigen. The classical mast cell markers include the high-affinity IgE receptor, CD117 (c-Kit), and CD203c (for most mast cell populations). Expression of some molecules may change in the course of mast cell activation.
Heterogeneity
Mast cell heterogeneity significantly affects the efficacy of the mast cell stabilizing drugs disodium cromoglycate and ketotifen in preventing mediator release. In experiments, ketotifen inhibits mast cells from lung and tonsillar tissues when they are stimulated via an IgE-dependent histamine release mechanism, while disodium cromoglycate is less effective but still inhibits these mast cells. However, both agents fail to inhibit mediator release from skin mast cells, indicating that these cells are unresponsive to these stabilizers. Such differences in mast cell activation suggest the existence of different mast cell types across various tissues, a topic of ongoing research.
Other organisms
Mast cells and enterochromaffin cells are the source of most serotonin in the stomach in rodents.
| Biology and health sciences | Circulatory system | Biology |
254127 | https://en.wikipedia.org/wiki/Double%20bond | Double bond | In chemistry, a double bond is a covalent bond between two atoms involving four bonding electrons as opposed to two in a single bond. Double bonds occur most commonly between two carbon atoms, for example in alkenes. Many double bonds exist between two different elements: for example, in a carbonyl group between a carbon atom and an oxygen atom. Other common double bonds are found in azo compounds (N=N), imines (C=N), and sulfoxides (S=O). In a skeletal formula, a double bond is drawn as two parallel lines (=) between the two connected atoms; typographically, the equals sign is used for this. Double bonds were introduced in chemical notation by Russian chemist Alexander Butlerov.
Double bonds involving carbon are stronger and shorter than single bonds. The bond order is two. Double bonds are also electron-rich, which makes them potentially more reactive in the presence of a strong electron acceptor (as in addition reactions of the halogens).
Double bonds in alkenes
The type of bonding can be explained in terms of orbital hybridisation. In ethylene each carbon atom has three sp2 orbitals and one p-orbital. The three sp2 orbitals lie in a plane with ~120° angles. The p-orbital is perpendicular to this plane. When the carbon atoms approach each other, two of the sp2 orbitals overlap to form a sigma bond. At the same time, the two p-orbitals approach (again in the same plane) and together they form a pi bond. For maximum overlap, the p-orbitals have to remain parallel, and, therefore, rotation around the central bond is not possible. This property gives rise to cis-trans isomerism. Double bonds are shorter than single bonds because p-orbital overlap is maximized.
At 133 pm, the ethylene C=C bond is shorter than the 154 pm C−C bond in ethane. The double bond is also stronger (636 kJ mol−1 versus 368 kJ mol−1), but not twice as strong, because the pi bond is weaker than the sigma bond due to less effective pi overlap.
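As a rough back-of-the-envelope check (treating the sigma and pi contributions as simply additive, which is only an approximation), the strength attributable to the pi bond can be estimated from the figures above:

D_{\pi} \approx D_{\mathrm{C{=}C}} - D_{\mathrm{C{-}C}} = 636 - 368 = 268\ \mathrm{kJ\,mol^{-1}}

This estimate is well below the roughly 368 kJ mol−1 sigma contribution, consistent with the less effective pi overlap described above.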
In an alternative representation, the double bond results from two overlapping sp3 orbitals as in a bent bond.
Variations
In molecules with alternating double bonds and single bonds, p-orbital overlap can exist over multiple atoms in a chain, giving rise to a conjugated system. Conjugation can be found in systems such as dienes and enones. In cyclic molecules, conjugation can lead to aromaticity. In cumulenes, two double bonds are adjacent.
Double bonds are common for period 2 elements carbon, nitrogen, and oxygen, and less common with elements of higher periods. Metals, too, can engage in multiple bonding in a metal ligand multiple bond.
Group 14 alkene homologs
Double-bonded compounds, alkene homologs R2E=ER2, are now known for all of the heavier group 14 elements. Unlike the alkenes, these compounds are not planar but adopt twisted and/or trans-bent structures, and these effects become more pronounced for the heavier elements. The distannene (Me3Si)2CHSn=SnCH(SiMe3)2 has a tin-tin bond length just a little shorter than a single bond, a trans-bent structure with pyramidal coordination at each tin atom, and readily dissociates in solution to form (Me3Si)2CHSn: (a stannanediyl, a carbene analog). The bonding comprises two weak donor-acceptor bonds, the lone pair on each tin atom overlapping with the empty p orbital on the other. In contrast, in disilenes each silicon atom has planar coordination but the substituents are twisted so that the molecule as a whole is not planar. In diplumbenes the Pb=Pb bond length can be longer than that of many corresponding single bonds. Plumbenes and stannenes generally dissociate in solution into monomers with bond enthalpies that are just a fraction of those of the corresponding single bonds, and some plumbene and stannene double bonds are similar in strength to hydrogen bonds. The Carter-Goddard-Malrieu-Trinquier model can be used to predict the nature of the bonding.
Types of double bonds between atoms
| Physical sciences | Bonding | Chemistry |
254443 | https://en.wikipedia.org/wiki/Convergent%20boundary | Convergent boundary | A convergent boundary (also known as a destructive boundary) is an area on Earth where two or more lithospheric plates collide. One plate eventually slides beneath the other, a process known as subduction. The subduction zone can be defined by a plane where many earthquakes occur, called the Wadati–Benioff zone. These collisions happen on scales of millions to tens of millions of years and can lead to volcanism, earthquakes, orogenesis, destruction of lithosphere, and deformation. Convergent boundaries occur between oceanic-oceanic lithosphere, oceanic-continental lithosphere, and continental-continental lithosphere. The geologic features related to convergent boundaries vary depending on crust types.
Plate tectonics is driven by convection cells in the mantle. Convection cells are the result of heat generated by the radioactive decay of elements in the mantle escaping to the surface and the return of cool materials from the surface to the mantle. These convection cells bring hot mantle material to the surface along spreading centers creating new crust. As this new crust is pushed away from the spreading center by the formation of newer crust, it cools, thins, and becomes denser. Subduction begins when this dense crust converges with a less dense crust. The force of gravity helps drive the subducting slab into the mantle. As the relatively cool subducting slab sinks deeper into the mantle, it is heated, causing hydrous minerals to break down. This releases water into the hotter asthenosphere, which leads to partial melting of the asthenosphere and volcanism. Both dehydration and partial melting occur along the isotherm, generally at depths of .
Some lithospheric plates consist of both continental and oceanic lithosphere. In some instances, initial convergence with another plate will destroy oceanic lithosphere, leading to convergence of two continental plates. Neither continental plate will subduct. It is likely that the plate may break along the boundary of continental and oceanic crust. Seismic tomography reveals pieces of lithosphere that have broken off during convergence.
Subduction zones
Subduction zones are areas where one lithospheric plate slides beneath another at a convergent boundary due to lithospheric density differences. These plates dip at an average of 45° but can vary. Subduction zones are often marked by an abundance of earthquakes, the result of internal deformation of the plate, convergence with the opposing plate, and bending at the oceanic trench. Earthquakes have been detected to a depth of 670 km (416 mi). The relatively cold and dense subducting plates are pulled into the mantle and help drive mantle convection.
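As a purely geometric illustration (assuming an idealized planar slab dipping at a constant 45°, which real subduction zones only approximate), the depth d of the Wadati–Benioff zone beneath a point a horizontal distance x landward of the trench is:

d = x \tan\theta \approx x \quad (\theta = 45^{\circ})

On this idealization, earthquakes detected at 670 km depth would lie roughly 670 km landward of the trench; actual slab dips vary considerably, so this serves only as a rough sketch.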
Oceanic – oceanic convergence
In collisions between two oceanic plates, the cooler, denser oceanic lithosphere sinks beneath the warmer, less dense oceanic lithosphere. As the slab sinks deeper into the mantle, it releases water from dehydration of hydrous minerals in the oceanic crust. This water reduces the melting temperature of rocks in the asthenosphere and causes partial melting. Partial melt travels up through the asthenosphere, eventually reaching the surface and forming volcanic island arcs.
Continental – oceanic convergence
When oceanic lithosphere and continental lithosphere collide, the dense oceanic lithosphere subducts beneath the less dense continental lithosphere. An accretionary wedge forms on the continental crust as deep-sea sediments and oceanic crust are scraped from the oceanic plate. Volcanic arcs form on continental lithosphere as the result of partial melting due to dehydration of the hydrous minerals of the subducting slab.
Continental – continental convergence
Some lithospheric plates consist of both continental and oceanic crust. Subduction initiates as oceanic lithosphere slides beneath continental crust. As the oceanic lithosphere subducts to greater depths, the attached continental crust is pulled closer to the subduction zone. Once the continental lithosphere reaches the subduction zone, subduction processes are altered, since continental lithosphere is more buoyant and resists subduction beneath other continental lithosphere. A small portion of the continental crust may be subducted until the slab breaks, allowing the oceanic lithosphere to continue subducting, hot asthenosphere to rise and fill the void, and the continental lithosphere to rebound. Evidence of this continental rebound includes ultrahigh pressure metamorphic rocks, which form at depths of , that are exposed at the surface. Seismic records have been used to map the torn slabs beneath the Caucasus continental – continental convergence zone, and seismic tomography has mapped detached slabs beneath the Tethyan suture zone (the Alps – Zagros – Himalaya mountain belt).
Volcanism and volcanic arcs
The oceanic crust contains hydrated minerals such as the amphibole and mica groups. During subduction, oceanic lithosphere is heated and metamorphosed, causing breakdown of these hydrous minerals, which releases water into the asthenosphere. The release of water into the asthenosphere leads to partial melting. Partial melting allows the rise of more buoyant, hot material and can lead to volcanism at the surface and emplacement of plutons in the subsurface. These processes which generate magma are not entirely understood.
Where these magmas reach the surface they create volcanic arcs. Volcanic arcs can form as island arc chains or as arcs on continental crust. Three magma series of volcanic rocks are found in association with arcs. The chemically reduced tholeiitic magma series is most characteristic of oceanic volcanic arcs, though this is also found in continental volcanic arcs above rapid subduction (>7 cm/year). This series is relatively low in potassium. The more oxidized calc-alkaline series, which is moderately enriched in potassium and incompatible elements, is characteristic of continental volcanic arcs. The alkaline magma series (highly enriched in potassium) is sometimes present in the deeper continental interior. The shoshonite series, which is extremely high in potassium, is rare but sometimes is found in volcanic arcs. The andesite member of each series is typically most abundant, and the transition from basaltic volcanism of the deep Pacific basin to andesitic volcanism in the surrounding volcanic arcs has been called the andesite line.
Back-arc basins
Back-arc basins form behind a volcanic arc and are associated with extensional tectonics and high heat flow, often being home to seafloor spreading centers. These spreading centers are like mid-ocean ridges, though the magma composition of back-arc basins is generally more varied and contains a higher water content than mid-ocean ridge magmas. Back-arc basins are often characterized by thin, hot lithosphere. Opening of back-arc basins may arise from movement of hot asthenosphere into lithosphere, causing extension.
Oceanic trenches
Oceanic trenches are narrow topographic lows that mark convergent boundaries or subduction zones. Oceanic trenches average wide and can be several thousand kilometers long. Oceanic trenches form as a result of bending of the subducting slab. Depth of oceanic trenches seems to be controlled by age of the oceanic lithosphere being subducted. Sediment fill in oceanic trenches varies and generally depends on abundance of sediment input from surrounding areas. An oceanic trench, the Mariana Trench, is the deepest point of the ocean at a depth of approximately .
Earthquakes and tsunamis
Earthquakes are common along convergent boundaries. A region of high earthquake activity, the Wadati–Benioff zone, generally dips 45° and marks the subducting plate. Earthquakes will occur to a depth of along the Wadati-Benioff margin.
Both compressional and extensional forces act along convergent boundaries. On the inner walls of trenches, compressional faulting or reverse faulting occurs due to the relative motion of the two plates. Reverse faulting scrapes off ocean sediment and leads to the formation of an accretionary wedge. Reverse faulting can lead to megathrust earthquakes. Tensional or normal faulting occurs on the outer wall of the trench, likely due to bending of the downgoing slab.
A megathrust earthquake can produce sudden vertical displacement of a large area of ocean floor. This in turn generates a tsunami.
Some of the deadliest natural disasters have occurred due to convergent boundary processes. The 2004 Indian Ocean earthquake and tsunami was triggered by a megathrust earthquake along the convergent boundary of the Indian plate and Burma microplate and killed over 200,000 people. The 2011 tsunami off the coast of Japan, which caused 16,000 deaths and did US$360 billion in damage, was caused by a magnitude 9 megathrust earthquake along the convergent boundary of the Eurasian plate and Pacific plate.
Accretionary wedge
Accretionary wedges (also called accretionary prisms) form as sediment is scraped from the subducting lithosphere and emplaced against the overriding lithosphere. These sediments include igneous crust, turbidite sediments, and pelagic sediments. Imbricate thrust faulting along a basal decollement surface occurs in accretionary wedges as forces continue to compress and fault these newly added sediments. The continued faulting of the accretionary wedge leads to overall thickening of the wedge. Seafloor topography plays some role in accretion, especially emplacement of igneous crust.
Examples
The collision between the Eurasian plate and the Indian plate that is forming the Himalayas.
The collision between the Australian plate and the Pacific plate that formed the Southern Alps in New Zealand
Subduction of the northern part of the Pacific plate and the NW North American plate that is forming the Aleutian Islands.
Subduction of the Nazca plate beneath the South American plate to form the Andes.
Subduction of the Pacific plate beneath the Australian plate and Tonga plate, forming the complex New Zealand to New Guinea subduction/transform boundaries.
Collision of the Eurasian plate and the African plate formed the Pontic Mountains in Turkey.
Subduction of the Pacific plate beneath the Mariana plate formed the Mariana Trench.
Subduction of the Juan de Fuca plate beneath the North American plate to form the Cascade Range.
| Physical sciences | Tectonics | Earth science |
254452 | https://en.wikipedia.org/wiki/Planetary%20differentiation | Planetary differentiation | In planetary science, planetary differentiation is the process by which the chemical elements of a planetary body accumulate in different areas of that body, due to their physical or chemical behavior (e.g. density and chemical affinities). The process of planetary differentiation is mediated by partial melting with heat from radioactive isotope decay and planetary accretion. Planetary differentiation has occurred on planets, dwarf planets, the asteroid 4 Vesta, and natural satellites (such as the Moon).
Physical differentiation
Gravitational separation
High-density materials tend to sink through lighter materials. This tendency is affected by the relative structural strengths, but such strength is reduced at temperatures where both materials are plastic or molten. Iron, the most common element that is likely to form a very dense molten metal phase, tends to congregate towards planetary interiors. With it, many siderophile elements (i.e. materials that readily alloy with iron) also travel downward. However, not all heavy elements make this transition as some chalcophilic heavy elements bind into low-density silicate and oxide compounds, which differentiate in the opposite direction.
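As a highly simplified illustrative sketch (not taken from the article), the settling of a small dense metal droplet through molten silicate can be estimated with Stokes' law, where \Delta\rho is the density contrast, g the gravitational acceleration, r the droplet radius, and \eta the melt viscosity:

v = \frac{2\,\Delta\rho\, g\, r^{2}}{9\,\eta}

With assumed illustrative values of \Delta\rho \approx 4000 kg m−3, g \approx 10 m s−2, r = 1 cm and \eta \approx 1 Pa s, this gives a settling speed on the order of 1 m s−1; the actual values for early planetary interiors are uncertain, and larger metal bodies sink by other mechanisms such as diapirism.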
The main compositionally differentiated zones in the solid Earth are the very dense iron-rich metallic core, the less dense magnesium-silicate-rich mantle and the relatively thin, light crust composed mainly of silicates of aluminium, sodium, calcium and potassium. Even lighter still are the watery liquid hydrosphere and the gaseous, nitrogen-rich atmosphere.
Lighter materials tend to rise through material with a higher density. A light mineral such as plagioclase would rise. They may take on dome-shaped forms called diapirs when doing so. On Earth, salt domes are salt diapirs in the crust which rise through surrounding rock. Diapirs of molten low-density silicate rocks such as granite are abundant in the Earth's upper crust. The hydrated, low-density serpentinite formed by alteration of mantle material at subduction zones can also rise to the surface as diapirs. Other materials do likewise: a low-temperature, near-surface example is provided by mud volcanoes.
Chemical differentiation
Although bulk materials differentiate outward or inward according to their density, the elements that are chemically bound in them fractionate according to their chemical affinities, "carried along" by more abundant materials with which they are associated. For instance, although the rare element uranium is very dense as a pure element, it is chemically more compatible as a trace element in the Earth's light, silicate-rich crust than in the dense metallic core.
Heating
When the Sun ignited in the solar nebula, hydrogen, helium and other volatile materials were evaporated in the region around it. The solar wind and radiation pressure forced these low-density materials away from the Sun. Rocks, and the elements comprising them, were stripped of their early atmospheres, but themselves remained, to accumulate into protoplanets.
Protoplanets had higher concentrations of radioactive elements early in their history, the quantity of which has reduced over time due to radioactive decay. For example, the hafnium-tungsten system demonstrates the decay of two unstable isotopes and possibly forms a timeline for accretion. Heating due to radioactivity, impacts, and gravitational pressure melted parts of protoplanets as they grew toward being planets. In melted zones, it was possible for denser materials to sink towards the center, while lighter materials rose to the surface. The compositions of some meteorites (achondrites) show that differentiation also took place in some asteroids (e.g. Vesta), that are parental bodies for meteoroids. The short-lived radioactive isotope 26Al was probably the main source of heat.
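A minimal sketch of the underlying decay arithmetic (the half-life value quoted here is approximate and given only for illustration):

N(t) = N_{0}\, 2^{-t/t_{1/2}}, \qquad t_{1/2}(^{26}\mathrm{Al}) \approx 0.72\ \mathrm{Myr}

So after roughly 5 Myr less than 1% of the original 26Al remains, which is why this heat source mattered mainly for bodies that accreted very early.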
When protoplanets accrete more material, the energy of impact causes local heating. In addition to this temporary heating, the gravitational force in a sufficiently large body creates pressures and temperatures which are sufficient to melt some of the materials. This allows chemical reactions and density differences to mix and separate materials, and soft materials to spread out over the surface. Another external heat source is tidal heating.
On Earth, a large piece of molten iron is sufficiently denser than continental crust material to force its way down through the crust to the mantle.
In the outer Solar System, a similar process may take place but with lighter materials: they may be hydrocarbons such as methane, water as liquid or ice, or frozen carbon dioxide.
Fractional melting and crystallization
Magma in the Earth is produced by partial melting of a source rock, ultimately in the mantle. The melt extracts a large portion of the "incompatible elements" from its source that are not stable in the major minerals. When magma rises above a certain depth the dissolved minerals start to crystallize at particular pressures and temperatures. The resulting solids remove various elements from the melt, and melt is thus depleted of those elements. Study of trace elements in igneous rocks thus gives us information about what source melted by how much to produce a magma, and which minerals have been lost from the melt.
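One common way to quantify this enrichment is the standard batch melting relation, included here as an illustrative sketch rather than a formula from the article. It relates the concentration of a trace element in the melt, C_L, to its concentration in the source, C_0, the bulk solid–melt partition coefficient D, and the melt fraction F:

C_{L} = \frac{C_{0}}{D + F\,(1 - D)}

For strongly incompatible elements (D much less than 1), small melt fractions give C_L \approx C_0 / F, so a small degree of partial melting strongly concentrates incompatible elements in the melt.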
Thermal diffusion
When material is unevenly heated, lighter material migrates toward hotter zones and heavier material migrates toward colder areas, a process known as thermophoresis, thermomigration, or the Soret effect. This process can affect differentiation in magma chambers. A deeper understanding of it comes from a study of the Hawaiian lava lakes, where drilling led to the discovery of crystals formed within magma fronts. Magma containing concentrations of these large crystals, or phenocrysts, demonstrated differentiation through chemical changes in the melt around the crystals.
Lunar KREEP
On the Moon, a distinctive basaltic material has been found that is high in "incompatible elements" such as potassium, rare earth elements, and phosphorus and is often referred to by the abbreviation KREEP. It is also high in uranium and thorium. These elements are excluded from the major minerals of the lunar crust which crystallized out from its primeval magma ocean, and the KREEP basalt may have been trapped as a chemical differentiate between the crust and the mantle, with occasional eruptions to the surface.
Differentiation through collision
Earth's Moon probably formed out of material splashed into orbit by the impact of a large body into the early Earth. Differentiation on Earth had probably already separated many lighter materials toward the surface, so that the impact removed a disproportionate amount of silicate material from Earth, and left the majority of the dense metal behind. The Moon's density is substantially less than that of Earth, due to its lack of a large iron core. On Earth, physical and chemical differentiation processes led to a crustal density of approximately 2700 kg/m3 compared to the 3400 kg/m3 density of the compositionally different mantle just below, and the average density of the planet as a whole is 5515 kg/m3.
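For reference, the planet-wide figure follows directly from Earth's mass and radius; a quick check using rounded textbook values (M \approx 5.97 \times 10^{24} kg, R \approx 6.371 \times 10^{6} m):

\bar{\rho} = \frac{M}{\tfrac{4}{3}\pi R^{3}} = \frac{5.97\times10^{24}\ \mathrm{kg}}{\tfrac{4}{3}\pi\,(6.371\times10^{6}\ \mathrm{m})^{3}} \approx 5.5\times10^{3}\ \mathrm{kg\,m^{-3}}

This bulk value, well above the quoted crust and mantle densities, is the basic argument that a much denser core must lie beneath.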
Core formation mechanisms
Core formation involves several mechanisms that control the movement of metal into the interior of a planetary body. Percolation, diking, diapirism, and the direct delivery of impactor material are mechanisms involved in this process. The density difference between metal and silicate drives percolation, the downward movement of metal. Diking is a process in which a new rock formation forms within a fracture of a pre-existing rock body; if the surrounding minerals are cold and brittle, for example, transport can occur through fluid-filled cracks. Sufficient pressure must build up for the metal to overcome the fracture toughness of the surrounding material. The size of the intruding metal body and the viscosity of the surrounding material determine the rate of sinking. Direct delivery by impact occurs when an impactor of similar proportions strikes the target planetary body; during the impact, the pre-existing metallic cores of the two bodies are exchanged.
Planetary differentiation most likely occurred after the accretion of the asteroid or planetary body. Terrestrial bodies and iron meteorites consist of Fe-Ni alloys, and the Earth's core is primarily composed of Fe-Ni alloys. Studies of short-lived radionuclides suggest that core formation occurred during an early stage of the solar system. Siderophile elements such as sulfur, nickel, and cobalt can dissolve in molten iron; these elements help the differentiation of iron alloys.
The first stages of accretion set up the groundwork for core formation. First, terrestrial planetary bodies enter a neighboring planet's orbit. Next, a collision takes place and the terrestrial body can either grow or shrink; in most cases, however, accretion requires multiple collisions of similarly sized objects to change a planet's growth significantly. Feeding zones and hit-and-run events can result after accretion.
| Physical sciences | Planetary science | Astronomy |
254510 | https://en.wikipedia.org/wiki/Galvanic%20cell | Galvanic cell | A galvanic cell or voltaic cell, named after the scientists Luigi Galvani and Alessandro Volta, respectively, is an electrochemical cell in which an electric current is generated from spontaneous oxidation–reduction reactions. An example of a galvanic cell consists of two different metals, each immersed in separate beakers containing their respective metal ions in solution that are connected by a salt bridge or separated by a porous membrane.
Volta was the inventor of the voltaic pile, the first electrical battery. Common usage of the word battery has evolved to include a single Galvanic cell, but the first batteries had many Galvanic cells.
History
In 1780, Luigi Galvani discovered that when two different metals (e.g., copper and zinc) are in contact and then both are touched at the same time to two different parts of a muscle of a frog leg, to close the circuit, the frog's leg contracts. He called this "animal electricity". The frog's leg, as well as being a detector of electrical current, was also the electrolyte (to use the language of modern chemistry).
A year after Galvani published his work (1790), Alessandro Volta showed that the frog was not necessary, using instead a force-based detector and brine-soaked paper (as electrolyte). (Earlier Volta had established the law of capacitance with force-based detectors). In 1799 Volta invented the voltaic pile, which is a stack of galvanic cells each consisting of a metal disk, an electrolyte layer, and a disk of a different metal. He built it entirely out of non-biological material to challenge Galvani's (and the later experimenter Leopoldo Nobili)'s animal electricity theory in favor of his own metal-metal contact electricity theory. Carlo Matteucci in his turn constructed a battery entirely out of biological material in answer to Volta. Volta's contact electricity view characterized each electrode with a number that we would now call the work function of the electrode. This view ignored the chemical reactions at the electrode-electrolyte interfaces, which include H2 formation on the more noble metal in Volta's pile.
Although Volta did not understand the operation of the battery or the galvanic cell, these discoveries paved the way for electrical batteries; Volta's cell was named an IEEE Milestone in 1999.
Some forty years later, Faraday (see Faraday's laws of electrolysis) showed that the galvanic cell—now often called a voltaic cell—was chemical in nature. Faraday introduced new terminology to the language of chemistry: electrode (cathode and anode), electrolyte, and ion (cation and anion). Thus Galvani incorrectly thought the source of electricity (or source of electromotive force (emf), or seat of emf) was in the animal, Volta incorrectly thought it was in the physical properties of the isolated electrodes, but Faraday correctly identified the source of emf as the chemical reactions at the two electrode-electrolyte interfaces. The authoritative work on the intellectual history of the voltaic cell remains that by Ostwald.
It was suggested by Wilhelm König in 1940 that the object known as the Baghdad battery might represent galvanic cell technology from ancient Parthia. Replicas filled with citric acid or grape juice have been shown to produce a voltage. However, it is far from certain that this was its purpose—other scholars have pointed out that it is very similar to vessels known to have been used for storing parchment scrolls.
Principles
A galvanic cell is built around a spontaneous redox reaction and is designed to harness the energy that the reaction produces. For example, when one immerses a strip of zinc metal (Zn) in an aqueous solution of copper sulfate (CuSO4), dark-colored solid deposits collect on the surface of the zinc metal and the blue color characteristic of the Cu++ ion disappears from the solution. The deposits on the surface of the zinc metal consist of copper metal, and the solution now contains zinc ions. This reaction is represented by
Zn + Cu++ → Zn++ + Cu
In this redox reaction, Zn is oxidized to Zn++ and Cu++ is reduced to Cu. When electrons are transferred directly from Zn to Cu++, the enthalpy of reaction is lost to the surroundings as heat. However, the same reaction can be carried out in a galvanic cell, allowing some of the chemical energy released to be converted into electrical energy. In its simplest form, a half-cell consists of a solid metal (called an electrode) that is submerged in a solution; the solution contains cations (+) of the electrode metal and anions (−) to balance the charge of the cations. The full cell consists of two half-cells, usually connected by a semi-permeable membrane or by a salt bridge that prevents the ions of the more noble metal from plating out at the other electrode.
A specific example is the Daniell cell (see figure), with a zinc (Zn) half-cell containing a solution of ZnSO4 (zinc sulfate) and a copper (Cu) half-cell containing a solution of CuSO4 (copper sulfate). A salt bridge is used here to complete the electric circuit.
If an external electrical conductor connects the copper and zinc electrodes, zinc from the zinc electrode dissolves into the solution as Zn++ ions (oxidation), releasing electrons that enter the external conductor. To compensate for the increased zinc ion concentration, via the salt bridge zinc ions (cations) leave and sulfate ions (anions) enter the zinc half-cell. In the copper half-cell, the copper ions plate onto the copper electrode (reduction), taking up electrons that leave the external conductor. Since the Cu++ ions (cations) plate onto the copper electrode, the latter is called the cathode. Correspondingly the zinc electrode is the anode. The electrochemical reaction is
Zn(s) + Cu++(aq) → Zn++(aq) + Cu(s)
This is the same reaction as given in the previous example. In addition, electrons flow through the external conductor, which is the primary application of the galvanic cell.
As discussed under cell voltage, the electromotive force of the cell is the difference of the half-cell potentials, a measure of the relative ease of dissolution of the two electrodes into the electrolyte. The emf depends on both the electrodes and on the electrolyte, an indication that the emf is chemical in nature.
Half reactions and conventions
A half-cell contains a metal in two oxidation states. Inside an isolated half-cell, there is an oxidation-reduction (redox) reaction that is in chemical equilibrium, a condition written symbolically as follows (here, M+ represents a cation of the metal M, an atom with a charge imbalance due to the loss of an electron):
M+ + e− ⇌ M
A galvanic cell consists of two half-cells, such that the electrode of one half-cell is composed of metal A, and the electrode of the other half-cell is composed of metal B; the redox reactions for the two separate half-cells are thus:
A+ + e− ⇌ A
B+ + e− ⇌ B
The overall balanced reaction is:
A + B+ → B + A+
In other words, the metal atoms of one half-cell are oxidized while the metal cations of the other half-cell are reduced. By separating the metals in two half-cells, their reaction can be controlled in a way that forces transfer of electrons through the external circuit where they can do useful work.
The electrodes are connected with a metal wire in order to conduct the electrons that participate in the reaction.
In one half-cell, dissolved metal B cations combine with the free electrons that are available at the interface between the solution and the metal B electrode; these cations are thereby neutralized, causing them to precipitate from solution as deposits on the metal B electrode, a process known as plating.
This reduction reaction causes the free electrons throughout the metal B electrode, the wire, and the metal A electrode to be pulled into the metal B electrode. Consequently, electrons are wrestled away from some of the atoms of the metal A electrode, as though the metal B cations were reacting directly with them; those metal A atoms become cations that dissolve into the surrounding solution.
As this reaction continues, the half-cell with the metal A electrode develops a positively charged solution (because the metal A cations dissolve into it), while the other half-cell develops a negatively charged solution (because the metal B cations precipitate out of it, leaving behind the anions); unabated, this imbalance in charge would stop the reaction. The solutions of the half-cells are connected by a salt bridge or a porous plate that allows ions to pass from one solution to the other, which balances the charges of the solutions and allows the reaction to continue.
By definition:
The anode is the electrode where oxidation (loss of electrons) takes place (metal A electrode); in a galvanic cell, it is the negative electrode, because when oxidation occurs, electrons are left behind on the electrode. These electrons then flow through the external circuit to the cathode (positive electrode) (while in electrolysis, an electric current drives electron flow in the opposite direction and the anode is the positive electrode).
The cathode is the electrode where reduction (gain of electrons) takes place (metal B electrode); in a galvanic cell, it is the positive electrode, as ions get reduced by taking up electrons from the electrode and plate out (while in electrolysis, the cathode is the negative terminal and attracts positive ions from the solution). In both cases, the statement 'the cathode attracts cations' is true.
By their nature, galvanic cells produce direct current.
The Weston cell has an anode composed of cadmium mercury amalgam, and a cathode composed of pure mercury. The electrolyte is a (saturated) solution of cadmium sulfate. The depolarizer is a paste of mercurous sulfate. When the electrolyte solution is saturated, the voltage of the cell is very reproducible; hence, in 1911, it was adopted as an international standard for voltage.
In the strictest sense, a battery is a set of two or more galvanic cells that are connected in series to form a single source of voltage.
For instance, a typical 12 V lead–acid battery has six galvanic cells connected in series, with the anodes composed of lead and cathodes composed of lead dioxide, both immersed in sulfuric acid.
Large central office battery rooms – in a telephone exchange to provide power for subscribers' land-line telephones, for instance – may have many cells, connected both in series and parallel: individual cells are connected in series as a battery of cells with some standard voltage, and banks of such serial batteries are themselves connected in parallel to provide adequate amperage to supply a typical peak demand for telephone connections.
Cell voltage
The voltage (electromotive force E°cell) produced by a galvanic cell can be estimated from the standard Gibbs free energy change ΔG° in the electrochemical reaction according to:
E°cell = −ΔG° / (nF)
where n is the number of electrons transferred in the balanced half reactions, and F is Faraday's constant. However, it can be determined more conveniently by the use of a standard potential table for the two half cells involved. The first step is to identify the two metals and their ions reacting in the cell. Then one looks up the standard electrode potential, E°, in volts, for each of the two half reactions. The standard potential of the cell is equal to the more positive E° value minus the more negative E° value.
For example, in the figure above the solutions are CuSO4 and ZnSO4. Each solution has a corresponding metal strip in it, and a salt bridge or porous disk connecting the two solutions and allowing ions to flow freely between the copper and zinc solutions. To calculate the standard potential one looks up copper and zinc's half reactions and finds:
Cu++ + 2 e− ⇌ Cu :  E° = +0.34 V
Zn++ + 2 e− ⇌ Zn :  E° = −0.76 V
Thus the overall reaction is:
Cu++ + Zn → Cu + Zn++
The standard potential for the reaction is then +0.34 V − (−0.76 V) = 1.10 V. The polarity of the cell is determined as follows. Zinc metal is more strongly reducing than copper metal because the standard (reduction) potential for zinc is more negative than that of copper. Thus, zinc metal will lose electrons to copper ions and develop a positive electrical charge. The equilibrium constant, K, for the cell is given by:
ln K = nFE°cell / (RT)
where
F is the Faraday constant,
R is the gas constant, and
T is the absolute temperature in kelvins.
For the Daniell cell, K is approximately 1.5 × 10^37. Thus, at equilibrium, a few electrons are transferred, enough to cause the electrodes to be charged.
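The two relations above can be checked numerically. The short sketch below recomputes the standard cell potential and the equilibrium constant for the Daniell cell from the tabulated half-cell potentials; it uses only standard physical constants, so nothing here depends on a particular library.

```python
import math

# Standard half-cell (reduction) potentials, in volts, from the table above.
E_cu = +0.34   # Cu++ + 2 e- -> Cu
E_zn = -0.76   # Zn++ + 2 e- -> Zn

# Standard cell potential: more positive value minus more negative value.
E_cell = E_cu - E_zn            # 1.10 V

# Equilibrium constant from ln K = n F E°cell / (R T).
n = 2                           # electrons transferred
F = 96485.0                     # Faraday constant, C/mol
R = 8.314                       # gas constant, J/(mol K)
T = 298.15                      # 25 °C in kelvins

K = math.exp(n * F * E_cell / (R * T))
print(f"E°cell = {E_cell:.2f} V, K = {K:.2e}")   # about 1.10 V and 1.6e37
```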
Actual half-cell potentials must be calculated by using the Nernst equation, as the solutes are unlikely to be in their standard states:
E = E° − (RT/nF) ln Q
where Q is the reaction quotient. When the charges of the ions in the reaction are equal, this simplifies to
E = E° + (RT/nF) ln {M}
where {M} is the activity of the metal ion in solution. In practice, concentration in mol/L is used in place of activity. The metal electrode is in its standard state, so by definition it has unit activity. The potential of the whole cell is obtained as the difference between the potentials for the two half-cells, so it depends on the concentrations of both dissolved metal ions. If the concentrations are the same, the Nernst equation is not needed and the cell potential equals the standard potential under the conditions assumed here.
The value of 2.303R/F is about 1.98 × 10^−4 V/K, so at 25 °C (298.15 K) the half-cell potential will change by only about 0.0592/n V if the concentration of a metal ion is increased or decreased by a factor of ten.
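The concentration dependence can likewise be illustrated with a small sketch of the Nernst relation for a single half-cell; the copper concentrations used below are arbitrary illustrative values, and concentration stands in for activity as described above.

```python
import math

R, F, T = 8.314, 96485.0, 298.15   # J/(mol K), C/mol, kelvins (25 °C)

def half_cell_potential(E_standard, conc, n):
    """Nernst equation for M++ + n e- -> M, with concentration in place of activity."""
    return E_standard + (R * T / (n * F)) * math.log(conc)

# Copper half-cell (E° = +0.34 V, n = 2) at two illustrative concentrations:
for c in (1.0, 0.1):
    print(f"[Cu++] = {c:>4} mol/L -> E = {half_cell_potential(0.34, c, 2):.4f} V")
# A tenfold dilution lowers the potential by about 0.0592/2, i.e. roughly 0.030 V.
```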
These calculations are based on the assumption that all chemical reactions are in equilibrium. When a current flows in the circuit, equilibrium conditions are not achieved and the cell voltage will usually be reduced by various mechanisms, such as the development of overpotentials. Also, since chemical reactions occur when the cell is producing power, the electrolyte concentrations change and the cell voltage is reduced. A consequence of the temperature dependency of standard potentials is that the voltage produced by a galvanic cell is also temperature dependent.
Galvanic corrosion
Galvanic corrosion is the electrochemical erosion of metals. Corrosion occurs when two dissimilar metals are in contact with each other in the presence of an electrolyte, such as salt water. This forms a galvanic cell, with hydrogen gas forming on the more noble (less active) metal. The resulting electrochemical potential then develops an electric current that electrolytically dissolves the less noble material. A concentration cell can be formed if the same metal is exposed to two different concentrations of electrolyte.
Types
Concentration cell
Electrolytic cell
Electrochemical cell
Lemon battery
Thermogalvanic cell
| Physical sciences | Electrochemistry | Chemistry |
254769 | https://en.wikipedia.org/wiki/Research%20and%20development | Research and development | Research and development (R&D or R+D), known in some countries as experiment and design, is the set of innovative activities undertaken by corporations or governments in developing new services or products. R&D constitutes the first stage of development of a potential new service or the production process.
Although R&D activities may differ across businesses, the primary goal of an R&D department is to develop new products and services. R&D differs from the vast majority of corporate activities in that it is not intended to yield immediate profit, and generally carries greater risk and an uncertain return on investment. R&D is crucial for acquiring larger shares of the market through new products. R&D&I represents R&D with innovation.
Background
New product design and development is often a crucial factor in the survival of a company. In a global industrial landscape that is changing fast, firms must continually revise their design and range of products. This is necessary as well due to the fierce competition and the evolving preferences of consumers. Without an R&D program, a firm must rely on strategic alliances, acquisitions, and networks to tap into the innovations of others.
A system driven by marketing is one that puts the customer needs first, and produces goods that are known to sell. Market research is carried out, which establishes the needs of consumers and the potential niche market of a new product. If the development is technology driven, R&D is directed toward developing products to meet the unmet needs.
In general, research and development activities are conducted by specialized units or centers belonging to a company, or can be out-sourced to a contract research organization, universities, or state agencies. In the context of commerce, "research and development" normally refers to future-oriented, longer-term activities in science or technology, using similar techniques to scientific research but directed toward desired outcomes and with broad forecasts of commercial yield.
Statistics on organizations devoted to "R&D" may express the state of an industry, the degree of competition or the lure of progress. Some common measures include budgets, numbers of patents, or rates of peer-reviewed publications. Bank ratios are one of the best measures, because they are continuously maintained, public and reflect risk.
In the United States, a typical ratio of research and development spending to revenues for an industrial company is about 3.5%; this measure is called "R&D intensity". A high technology company, such as a computer manufacturer, might spend 7%, while pharmaceutical companies such as Merck & Co. (14.1%) or Novartis (15.1%) spend considerably more. Anything over 15% is remarkable and usually gains a reputation for being a high technology company, such as the engineering company Ericsson (24.9%) or the biotech company Allergan, which tops the spending table with 43.4%. Such companies are often seen as credit risks because their spending ratios are so unusual.
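Since R&D intensity is a simple ratio of spending to revenue, the figures above are easy to reproduce; the numbers in the sketch below are hypothetical and do not describe any real company.

```python
# R&D intensity = R&D spending / revenue, expressed as a percentage.
def rd_intensity(rd_spending, revenue):
    return 100.0 * rd_spending / revenue

# Hypothetical example: a firm with $2.0 bn revenue spending $0.28 bn on R&D.
print(f"R&D intensity: {rd_intensity(0.28, 2.0):.1f}%")   # 14.0%, pharma-like territory
```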
Generally such firms prosper only in markets whose customers have extremely high technology needs, such as certain prescription drugs or special chemicals, scientific instruments, and safety-critical systems in medicine, aeronautics or military weapons. The extreme needs justify the high risk of failure and consequently high gross margins of 60% to 90% of revenues. That is, gross profits may be as much as 90% of the sales cost, with manufacturing costing only 10% of the product price, because so many individual projects yield no exploitable product. By contrast, most industrial companies achieve gross margins of only about 40% of revenues.
On a technical level, high tech organizations explore ways to re-purpose and repackage advanced technologies as a way of amortizing the high overhead. They often reuse advanced manufacturing processes, expensive safety certifications, specialized embedded software, computer-aided design software, electronic designs and mechanical subsystems.
Research from 2000 has shown that firms with a persistent R&D strategy outperform those with an irregular or no R&D investment program.
Business R&D
Research and development are very difficult to manage, since the defining feature of research is that the researchers do not know in advance exactly how to accomplish the desired result. As a result, "higher R&D spending does not guarantee more creativity, higher profit or a greater market share". Research is the riskiest area to finance, because both the development of an invention and its successful realization carry uncertainty, including uncertainty about the profitability of the invention. One way entrepreneurs can reduce these uncertainties is to buy the licence for a franchise, so that the know-how is already incorporated in the licence.
Benefit by sector
In general, it has been found that there is a positive correlation between the research and development and firm productivity across all sectors, but that this positive correlation is much stronger in high-tech firms than in low-tech firms. In research done by Francesco Crespi and Cristiano Antonelli, high-tech firms were found to have "virtuous" Matthew effects while low-tech firms experienced "vicious" Matthew effects, meaning that high-tech firms were awarded subsidies on merit while low-tech firms most often were given subsidies based on name recognition, even if not put to good use. While the strength of the correlation between R&D spending and productivity in low-tech industries is less than in high-tech industries, studies have been done showing non-trivial carryover effects to other parts of the marketplace by low-tech R&D.
Risks
Business R&D is risky for at least two reasons. The first source of risks comes from R&D nature, where R&D project could fail without residual values. The second source of risks comes from takeover risks, which means R&D is appealing to bidders because they could gain technologies from acquisition targets. Therefore, firms may gain R&D profit that co-moves with takeover waves, causing risks to the company which engages in R&D activity.
Global
Global R&D management is the discipline of designing and leading R&D processes globally, across cultural and lingual settings, and the transfer of knowledge across international corporate networks.
Government expenditures
United States
Former President Barack Obama requested $147.696 billion for research and development in FY2012, 21% of which was destined to fund basic research. According to the National Science Foundation, in 2015 R&D expenditures performed by the U.S. federal government and by local governments were about $54 billion and $0.6 billion, respectively. The federal research and development budget for fiscal year 2020 was $156 billion, 41.4% of which was for the Department of Defense (DOD). DOD's total research, development, test, and evaluation budget was roughly $108.5 billion.
Israel
Israel is the world leader in spending on R&D as a percentage of GDP as of 2022, spending 6.02%. According to CSIS, during the 1970s and 1980s Israel built up its research infrastructure through various programs, often in the defence industry. In 1984, a law for the Encouragement of Research and Development in Industry encouraged the commercial sector to invest in R&D in Israel and empowered the Office of the Chief Scientist. From the 1980s to 1992, the Chief Scientist of Israel significantly expanded R&D subsidies in the Israeli industrial sector. Israel invested in the creation of clusters of startups in the high-tech sector as well as venture capital investments. In 1993, Israel initiated the Yozma program, which led to the doubling of the value of Israel's 10 new venture capital funds in 3 years. In the late 1990s, Israel was second only to the US in private equity as a share of the general economy. The high-tech sector in Israel, known as Silicon Wadi, which earned Israel the nickname "Start-up Nation", was ranked the 4th leading startup ecosystem in the world by Startup Genome, with a value of $253 billion in 2023.
European Union
Europe has lagged behind in R&D investment over the past two decades. The target of spending 3% of gross domestic product (GDP) was meant to be reached by 2020, but the current amount remains below this target. This also contributes to a digital divide among countries, since substantial R&D spending is concentrated in only a few EU Member States.
Research and innovation in Europe are financially supported by the programme Horizon 2020, which is open to participation worldwide.
A notable example is the European environmental research and innovation policy, based on the Europe 2020 strategy, which ran from 2014 to 2020: a multidisciplinary effort to provide safe, economically feasible, environmentally sound and socially acceptable solutions along the entire value chain of human activities.
Firms that have embraced advanced digital technology devote a greater proportion of their investment to R&D. Firms that engaged in digitisation during the pandemic report spending a large portion of their 2020 expenditure on software, data, IT infrastructure, and website operations. A 2021/2022 survey found that one in every seven enterprises in the Central, Eastern and South Eastern regions (14%) may be classed as active innovators — that is, firms that spent heavily on research and development and developed a new product, process, or service — though this figure is lower than the EU average of 18%. In 2022, 67% of enterprises in the same region deployed at least one sophisticated digital technology, compared with 69% of EU firms.
As of 2023, European enterprises account for 18% of the world's top 2 500 R&D corporations, but just 10% of new entrants, compared to 45% in the United States and 32% in China.
As of 2024, the electronics sector leads in R&D investment, with 28% of its total investment dedicated to it. This is followed by textiles (19%), digital (18%), and aerospace (15%). Other sectors allocate less than 10% of their total investment to R&D.
While 17% of the world’s top R&D investors are based in the European Union, they accounted for only 1% of acquisitions involving EU-based companies between 2013 and 2023.
Worldwide
In 2015, research and development constituted an average 2.2% of the global GDP according to the UNESCO Institute for Statistics.
By 2018, research and development constituted an average 1.79% of the global GDP according to the UNESCO Institute for Statistics. Countries agreed in 2015 to monitor their progress in raising research intensity (SDG 9.5.1), as well as researcher density (SDG 9.5.2), as part of their commitment to reaching the Sustainable Development Goals by 2030. However, this undertaking has not spurred an increase in reporting of data. On the contrary, a total of 99 countries reported data on domestic investment in research in 2015 but only 69 countries in 2018. Similarly, 59 countries recorded the number of researchers (in full-time equivalents) in 2018, down from 90 countries in 2015.
| Technology | General | null |
254777 | https://en.wikipedia.org/wiki/Isometry | Isometry | In mathematics, an isometry (or congruence, or congruent transformation) is a distance-preserving transformation between metric spaces, usually assumed to be bijective. The word isometry is derived from the Ancient Greek: ἴσος isos meaning "equal", and μέτρον metron meaning "measure". If the transformation is from a metric space to itself, it is a kind of geometric transformation known as a motion.
Introduction
Given a metric space (loosely, a set and a scheme for assigning distances between elements of the set), an isometry is a transformation which maps elements to the same or another metric space such that the distance between the image elements in the new metric space is equal to the distance between the elements in the original metric space.
In a two-dimensional or three-dimensional Euclidean space, two geometric figures are congruent if they are related by an isometry;
the isometry that relates them is either a rigid motion (translation or rotation), or a composition of a rigid motion and a reflection.
Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space M involves an isometry from M into a quotient set of the space of Cauchy sequences on M.
The original space is thus isometrically isomorphic to a subspace of a complete metric space, and it is usually identified with this subspace.
Other embedding constructions show that every metric space is isometrically isomorphic to a closed subset of some normed vector space and that every complete metric space is isometrically isomorphic to a closed subset of some Banach space.
An isometric surjective linear operator on a Hilbert space is called a unitary operator.
Definition
Let X and Y be metric spaces with metrics (e.g., distances) d_X and d_Y. A map f : X → Y is called an isometry or distance-preserving map if for any a, b in X,
d_Y(f(a), f(b)) = d_X(a, b).
An isometry is automatically injective; otherwise two distinct points, a and b, could be mapped to the same point, thereby contradicting the coincidence axiom of the metric d, i.e., d(a, b) = 0 if and only if a = b. This proof is similar to the proof that an order embedding between partially ordered sets is injective. Clearly, every isometry between metric spaces is a topological embedding.
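As a concrete numerical illustration (not part of the formal development), the sketch below checks that a rigid motion of the Euclidean plane, a rotation followed by a translation, leaves the distance between two sample points unchanged; the particular angle, translation, and points are arbitrary.

```python
import numpy as np

def rigid_motion(p, theta=0.7, t=(2.0, -1.0)):
    """Rotation by theta followed by a translation: an isometry of the plane."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ p + np.asarray(t)

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])

d_before = np.linalg.norm(a - b)                             # distance of the originals
d_after = np.linalg.norm(rigid_motion(a) - rigid_motion(b))  # distance of the images
print(d_before, d_after)                                     # both 5.0, as required
```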
A global isometry, isometric isomorphism or congruence mapping is a bijective isometry. Like any other bijection, a global isometry has a function inverse.
The inverse of a global isometry is also a global isometry.
Two metric spaces X and Y are called isometric if there is a bijective isometry from X to Y.
The set of bijective isometries from a metric space to itself forms a group with respect to function composition, called the isometry group.
There is also the weaker notion of path isometry or arcwise isometry:
A path isometry or arcwise isometry is a map which preserves the lengths of curves; such a map is not necessarily an isometry in the distance preserving sense, and it need not necessarily be bijective, or even injective. This term is often abridged to simply isometry, so one should take care to determine from context which type is intended.
Examples
Any reflection, translation and rotation is a global isometry on Euclidean spaces. | Mathematics | Geometry: General | null |
254930 | https://en.wikipedia.org/wiki/Band-pass%20filter | Band-pass filter | A band-pass filter or bandpass filter (BPF) is a device that passes frequencies within a certain range and rejects (attenuates) frequencies outside that range.
It is the inverse of a band-stop filter.
Description
In electronics and signal processing, a filter is usually a two-port circuit or device which removes frequency components of a signal (an alternating voltage or current). A band-pass filter allows through components in a specified band of frequencies, called its passband but blocks components with frequencies above or below this band. This contrasts with a high-pass filter, which allows through components with frequencies above a specific frequency, and a low-pass filter, which allows through components with frequencies below a specific frequency. In digital signal processing, in which signals represented by digital numbers are processed by computer programs, a band-pass filter is a computer algorithm that performs the same function. The term band-pass filter is also used for optical filters, sheets of colored material which allow through a specific band of light frequencies, commonly used in photography and theatre lighting, and acoustic filters which allow through sound waves of a specific band of frequencies.
An example of an analogue electronic band-pass filter is an RLC circuit (a resistor–inductor–capacitor circuit). These filters can also be created by combining a low-pass filter with a high-pass filter.
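As a rough sketch of the second approach, the example below builds a digital band-pass response by cascading a high-pass and a low-pass Butterworth filter in SciPy; the 300 Hz and 3 kHz edges and the 48 kHz sample rate are illustrative choices, not values from the text.

```python
import numpy as np
from scipy import signal

fs = 48_000                      # sample rate, Hz (illustrative)
f_low, f_high = 300.0, 3000.0    # passband edges, Hz (illustrative)

# Cascade a high-pass (removes components below f_low)
# with a low-pass (removes components above f_high).
sos_hp = signal.butter(4, f_low, btype="highpass", fs=fs, output="sos")
sos_lp = signal.butter(4, f_high, btype="lowpass", fs=fs, output="sos")

x = np.random.default_rng(0).standard_normal(fs)   # one second of white noise
y = signal.sosfilt(sos_lp, signal.sosfilt(sos_hp, x))

# The same passband can also be requested directly as a band-pass design:
sos_bp = signal.butter(4, [f_low, f_high], btype="bandpass", fs=fs, output="sos")
```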
A bandpass signal is a signal containing a band of frequencies not adjacent to zero frequency, such as a signal that comes out of a bandpass filter.
An ideal bandpass filter would have a completely flat passband: all frequencies within the passband would be passed to the output without amplification or attenuation, and would completely attenuate all frequencies outside the passband.
In practice, no bandpass filter is ideal. The filter does not attenuate all frequencies outside the desired frequency range completely; in particular, there is a region just outside the intended passband where frequencies are attenuated, but not rejected. This is known as the filter roll-off, and it is usually expressed in dB of attenuation per octave or decade of frequency. Generally, the design of a filter seeks to make the roll-off as narrow as possible, thus allowing the filter to perform as close as possible to its intended design. Often, this is achieved at the expense of pass-band or stop-band ripple.
The bandwidth of the filter is simply the difference between the upper and lower cutoff frequencies. The shape factor is the ratio of bandwidths measured using two different attenuation values to determine the cutoff frequency, e.g., a shape factor of 2:1 at 30/3 dB means the bandwidth measured between frequencies at 30 dB attenuation is twice that measured between frequencies at 3 dB attenuation.
Q factor
A band-pass filter can be characterized by its factor. The -factor is the reciprocal of the fractional bandwidth. A high- filter will have a narrow passband and a low- filter will have a wide passband. These are respectively referred to as narrow-band and wide-band filters.
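A minimal numerical illustration of this definition, using made-up cutoff frequencies:

```python
# Q = center frequency / bandwidth (reciprocal of the fractional bandwidth).
f_lower, f_upper = 950.0, 1050.0          # illustrative -3 dB cutoff frequencies, Hz
bandwidth = f_upper - f_lower             # 100 Hz
f_center = (f_lower * f_upper) ** 0.5     # geometric center, about 998.7 Hz
print(f"Q = {f_center / bandwidth:.1f}")  # about 10: a fairly narrow-band filter
```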
Applications
Bandpass filters are widely used in wireless transmitters and receivers. The main function of such a filter in a transmitter is to limit the bandwidth of the output signal to the band allocated for the transmission. This prevents the transmitter from interfering with other stations. In a receiver, a bandpass filter allows signals within a selected range of frequencies to be heard or decoded, while preventing signals at unwanted frequencies from getting through. Signals at frequencies outside the band to which the receiver is tuned can saturate or even damage the receiver. Additionally, they can create unwanted mixing products that fall in band and interfere with the signal of interest. Wideband receivers are particularly susceptible to such interference. A bandpass filter also optimizes the signal-to-noise ratio and sensitivity of a receiver.
In both transmitting and receiving applications, well-designed bandpass filters, having the optimum bandwidth for the mode and speed of communication being used, maximize the number of signal transmitters that can exist in a system, while minimizing the interference or competition among signals.
Outside of electronics and signal processing, one example of the use of band-pass filters is in the atmospheric sciences. It is common to band-pass filter recent meteorological data with a period range of, for example, 3 to 10 days, so that only cyclones remain as fluctuations in the data fields.
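A sketch of what such filtering looks like in practice: synthetic daily data are band-pass filtered so that only fluctuations with periods between roughly 3 and 10 days survive. The synthetic series and the Butterworth design are illustrative assumptions rather than a standard method from the meteorological literature.

```python
import numpy as np
from scipy import signal

fs = 1.0                                   # one sample per day
t = np.arange(0, 365)                      # one year of daily data
# Synthetic "weather": a slow seasonal swing, a 6-day fluctuation, and noise.
x = (10 * np.sin(2 * np.pi * t / 365)
     + 2 * np.sin(2 * np.pi * t / 6)
     + np.random.default_rng(1).normal(0, 0.5, t.size))

# Pass periods of 3-10 days, i.e. frequencies between 1/10 and 1/3 cycles per day.
sos = signal.butter(3, [1 / 10, 1 / 3], btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, x)      # zero-phase filtering of the series

print(filtered.std(), x.std())             # the slow seasonal swing is largely removed
```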
Loudspeaker enclosures
Compound or band-pass
A 4th order electrical bandpass filter can be simulated by a vented box in which the contribution from the rear face of the driver cone is trapped in a sealed box, and the radiation from the front surface of the cone is into a ported chamber. This modifies the resonance of the driver. In its simplest form a compound enclosure has two chambers. The dividing wall between the chambers holds the driver; typically only one chamber is ported.
If the enclosure on each side of the woofer has a port in it then the enclosure yields a 6th order band-pass response. These are considerably harder to design and tend to be very sensitive to driver characteristics. As in other reflex enclosures, the ports may generally be replaced by passive radiators if desired.
An eighth order bandpass box is another variation which also has a narrow frequency range. They are often used in sound pressure level competitions, in which case a bass tone of a specific frequency would be used versus anything musical. They are complicated to build and must be done quite precisely in order to perform nearly as intended.
Economics
Bandpass filters can also be used outside of engineering-related disciplines. A leading example is the use of bandpass filters to extract the business cycle component in economic time series. This reveals more clearly the expansions and contractions in economic activity that dominate the lives of the public and the performance of diverse firms, and therefore is of interest to a wide audience of economists and policy-makers, among others.
Economic data usually have quite different statistical properties from data in, say, electrical engineering. It is very common for a researcher to carry over directly traditional methods such as the "ideal" filter, which has a perfectly sharp gain function in the frequency domain. However, in doing so, substantial problems can arise that can cause distortions and make the filter output extremely misleading. As a simple but telling case, the use of an "ideal" filter on white noise (which could represent, for example, stock price changes) creates a false cycle. The label "ideal" therefore rests on an assumption that is rarely justified in practice. Nevertheless, the use of the "ideal" filter remains common despite its limitations.
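The false-cycle effect is easy to reproduce. In the sketch below, a brick-wall ("ideal") frequency-domain band-pass is applied to pure white noise; the output looks convincingly periodic even though the input contains no cycle at all. The particular pass band (periods of 6 to 32 observations) is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(400)               # white noise: no true cycle at all

# Brick-wall band-pass in the frequency domain: keep only periods of 6-32 samples.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size)            # cycles per sample
keep = (freqs >= 1 / 32) & (freqs <= 1 / 6)
X[~keep] = 0.0
cycle = np.fft.irfft(X, n=x.size)

# The output oscillates smoothly, suggesting a "cycle" that was never there.
print(cycle[:10])
```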
Fortunately, band-pass filters are available that steer clear of such errors, adapt to the data series at hand, and yield more accurate assessments of the business cycle fluctuations in major economic series like Real GDP, Investment, and Consumption - as well as their sub-components. An early work, published in the Review of Economics and Statistics in 2003, more effectively handles the kind of data (stochastic rather than deterministic) arising in macroeconomics. In this paper entitled "General Model-Based Filters for Extracting Trends and Cycles in Economic Time Series", Andrew Harvey and Thomas Trimbur develop a class of adaptive band pass filters. These have been successfully applied in various situations involving business cycle movements in myriad nations in the international economy.
4G and 5G wireless communications
Band-pass filters can be implemented in 4G and 5G wireless communication systems. Hussaini et al. (2015) stated that, in wireless communication applications, radio frequency noise is a major concern. In the current development of 5G technology, planar band-pass filters are used to suppress RF noise and remove unwanted signals.
Combline, hairpin, parallel-coupled line, stepped-impedance and stub-impedance designs have been explored for band-pass filters to achieve low insertion loss in a compact size. Adopting an asymmetric frequency response helps reduce the number of resonators as well as the insertion loss, size and cost of circuit production.
A 4-pole cross-coupled band-pass filter was designed by Hussaini et al. (2015). This filter is designed to cover the 2.5–2.6 GHz and 3.4–3.7 GHz spectrum for 4G and 5G wireless communication applications respectively. It is developed and extended from a 3-pole single-band band-pass filter, with an additional resonator added to the 3-pole design. The resulting filter has a compact size and a simple structure, which is convenient for implementation. Moreover, its stop-band rejection and selectivity perform well in RF noise suppression. Insertion loss is very low across the 4G and 5G bands, while the filter also provides good return loss and group delay.
Energy scavengers
Energy scavengers are devices that efficiently harvest energy from the environment. Band-pass filters can be implemented in energy scavengers that convert energy generated by vibration into electrical energy. The band-pass filter designed by Shahruz (2005) is an ensemble of cantilever beams, called a beam-mass system. An ensemble of beam-mass systems can be turned into a band-pass filter when appropriate dimensions of the beams and masses are chosen. Although the design of such mechanical band-pass filters is well developed, further work is still required to design more flexible band-pass filters suited to large frequency intervals. This mechanical band-pass filter could be used on vibration sources with distinct peak-power frequencies.
Other fields
In neuroscience, visual cortical simple cells were first shown by David Hubel and Torsten Wiesel to have response properties that resemble Gabor filters, which are band-pass.
In astronomy, band-pass filters are used to allow only a single portion of the light spectrum into an instrument. Band-pass filters can help with finding where stars lie on the main sequence, identifying redshifts, and many other applications.
| Technology | Signal processing | null |
254938 | https://en.wikipedia.org/wiki/Scissors | Scissors | Scissors are hand-operated shearing tools. A pair of scissors consists of a pair of blades pivoted so that the sharpened edges slide against each other when the handles (bows) opposite to the pivot are closed. Scissors are used for cutting various thin materials, such as paper, cardboard, metal foil, cloth, rope, and wire. A large variety of scissors and shears all exist for specialized purposes. Hair-cutting shears and kitchen shears are functionally equivalent to scissors, but the larger implements tend to be called shears. Hair-cutting shears have specific blade angles ideal for cutting hair. Using the incorrect type of scissors to cut hair will result in increased damage or split ends, or both, by breaking the hair. Kitchen shears, also known as kitchen scissors, are intended for cutting and trimming foods such as meats.
Inexpensive, mass-produced modern scissors are often designed ergonomically with composite thermoplastic and rubber handles.
Terminology
The noun scissors is treated as a plural noun, and therefore takes a plural verb (e.g., these scissors are). Alternatively, the tool is referred to by the singular phrase a pair of scissors. The word shears is used to describe similar instruments that are larger in size and for heavier cutting.
History
The earliest known scissors appeared in Mesopotamia 3,000 to 4,000 years ago. These were of the 'spring scissor' type comprising two bronze blades connected at the handles by a thin, flexible strip of curved bronze which served to hold the blades in alignment, to allow them to be squeezed together, and to pull them apart when released.
Spring scissors continued to be used in Europe until the 16th century. However, pivoted scissors of bronze or iron, in which the blades were pivoted at a point between the tips and the handles, the direct ancestor of modern scissors, were invented by the Romans around 100 AD. They entered common use in not only ancient Rome, but also China, Japan, and Korea, and the idea is still used in almost all modern scissors.
Early manufacture
During the Middle Ages and Renaissance, spring scissors were made by heating a bar of iron or steel, then flattening and shaping its ends into blades on an anvil. The center of the bar was heated, bent to form the spring, then cooled and reheated to make it flexible.
The Hangzhou Zhang Xiaoquan Company in Hangzhou, China, has been manufacturing scissors since 1663.
William Whiteley & Sons (Sheffield) Ltd. was producing scissors by 1760, although it is believed the business began trading even earlier. The first trade-mark, 332, was granted in 1791. The company is still manufacturing scissors today, and is the oldest company in the West to do so.
Pivoted scissors were not manufactured in large numbers until 1761, when Robert Hinchliffe of Sheffield produced the first pair of modern-day scissors made of hardened and polished cast steel. His major challenge was to form the bows; first, he made them solid, then drilled a hole, and then filed away metal to make this large enough to admit the user's fingers. This process was laborious, and apparently Hinchliffe improved upon it in order to increase production. Hinchliffe lived in Cheney Square (now the site of Sheffield Town Hall), and set up a sign identifying himself as a "fine scissor manufacturer". He achieved strong sales in London and elsewhere.
During the 19th century, scissors were hand-forged with elaborately decorated handles. They were made by hammering steel on indented surfaces known as 'bosses' to form the blades. The rings in the handles, known as bows, were made by punching a hole in the steel and enlarging it with the pointed end of an anvil.
In 1649, in Swedish-ruled Finland, an ironworks was founded in the village of Fiskars between Helsinki and Turku. In 1830, a new owner started the first cutlery works in Finland, making, among other items, scissors with the Fiskars trademark.
Modern manufacturing regions
China
The vast majority of global scissor manufacturing takes place in China. As of 2019, China was responsible for 64.3% of worldwide scissors exports. When combined with Chinese Taipei exports, this rises to 68.3%. The primary scissors-producing region in China is in Guangdong Province.
The Hangzhou Zhang Xiaoquan Company, founded in 1663, is one of the oldest continuously operating scissor manufacturers in the world. The company was nationalized in 1958 and now employs 1500 people who annually mass-produce an estimated seven million pairs of inexpensive scissors that retail for an average of US$4 each.
France
In the late 14th century, the English word "scissors" came into usage. It was derived from an Old French word referring to shears.
There are several historically important scissor-producing regions in France: Haute-Marne in Nogent-en Bassigny, Châtellereault, Thiers and Rouen. These towns, like many other scissor-producing communities, began with sabre, sword and bayonet production, which transitioned to scissors and other blades in the late 18th and early 19th centuries.
Thiers, in the Puy-de-Dôme department of Auvergne, remains an important centre of scissor and cutlery production. It is home to both the Musée de la Coutellerie, which showcases the town's 800-year history of blade-making, as well as Coutellia, an industry tradeshow that advertises itself as one of the largest annual gatherings of artisanal blade-makers in the world.
Germany
Germany was responsible for manufacturing just under 7% of global scissors exports in 2019. Often called "The City of Blades", Solingen, in North Rhine-Westphalia, has been a center for the manufacturing of scissors since medieval times. At the end of the 18th century, it is estimated that there were over 300 scissorsmiths in Solingen.
In 1995 the City of Solingen passed The Solingen Ordinance, an update to a 1930s law that decreed "Made in Solingen" stamps could only be applied to products almost entirely manufactured in the old industrial area of Solingen. In 2019 this applied to approximately 150 companies making high-quality blades of all kinds, including scissors.
Friedrich Herder, founded in Solingen in 1727, is one of the oldest scissors manufacturers still operating in Germany.
Italy
Premana, in Lecco Province, has its origins in ironworks and knife manufacturing beginning in the 16th century. In 1900 there were ten scissor manufacturing workshops, 20 in 1952 and 48 by 1960. Today, Consorzio Premax, an industrial partnership, organizes over 60 local companies involved in the manufacture of scissors for global markets. In 2019 Italy exported 3.5% of scissors manufactured globally.
One of the oldest Premanese scissor manufacturing firms still in operation is Sanelli Ambrogio, which was founded in 1869.
Japan
Scissormaking in Japan evolved from sword making in the 14th century. Seki, in Gifu Prefecture, was a renowned center of swordmaking beginning in the 1200s. After citizens were no longer permitted to carry swords, the city's blacksmiths turned to making scissors and knives. There are many specialized types of Japanese scissors, but sewing scissors were introduced to Japan by the American Commodore Matthew Perry in 1854.
The Sasuke workshop in Sakai City south of Osaka is run by Yasuhiro Hirakawa, a 5th generation scissorsmith. The company has been in operation since 1867. Yasuhiro Hirakawa is the last traditional scissormaker in Japan, making scissors in the traditional style where the blades are believed to be thinner, lighter and sharper than European scissors. In 2018 he was profiled in a documentary that featured a pair of his bonsai snips which retailed for US$35,000.
Spain
In Solsona, Spain, scissor manufacturing began in the 16th century. At the industry's peak in the 18th century there were 24 workshops, organized as the Guild of Saint Eligius, the patron saint of knife makers. By the mid-1980s there were only two, and by 2021, Pallarès Solsona, founded in 1917 by Lluìs and Carles Pallarès Canal, and still family-operated, was the town's sole remaining artisanal scissor manufacturer.
United Kingdom
Sheffield was home to the first mass production of scissors beginning in 1761. By the 19th century there were an estimated 60 steel scissor companies in Sheffield. However, since the 1980s, industry globalization and a shift towards cheaper, mass-produced scissors created price deflation that many artisanal manufacturers could not compete with. The Sheffield scissor industry consisted of just two local companies in 2021.
The two remaining Sheffield scissor manufacturers are William Whiteley, founded in 1760, and Ernest Wright, which was established in 1902. Both now focus on high-end/niche crafting of "products for life" rather than mass production. Between these two firms it is estimated that there are no more than ten "putter-togetherers" or "putters" who are the master-trained craftspeople responsible for high quality Sheffield scissor assembly. In 2020, Ernest Wright was recognized with the Award for Endangered Crafts by the British Heritage Crafts Association.
Description and operation
A pair of scissors consists of two pivoted blades. In lower-quality scissors, the cutting edges are not particularly sharp; it is primarily the shearing action between the two blades that cuts the material. In high-quality scissors, the blades can be both extremely sharp, and tension sprung – to increase the cutting and shearing tension only at the exact point where the blades meet. The hand movement (pushing with the thumb, pulling with the fingers) can add to this tension. An ideal example is in high-quality tailor's scissors or shears, which need to be able to perfectly cut (and not simply tear apart) delicate cloths such as chiffon and silk.
Children's scissors are usually not particularly sharp, and the tips of the blades are often blunted or 'rounded' for safety.
Mechanically, scissors are a first-class double-lever with the pivot acting as the fulcrum. For cutting thick or heavy material, the mechanical advantage of a lever can be exploited by placing the material to be cut as close to the fulcrum as possible. For example, if the applied force (at the handles) is twice as far away from the fulcrum as the cutting location (i.e., the point of contact between the blades), the force at the cutting location is twice that of the applied force at the handles. Scissors cut material by applying at the cutting location a local shear stress which exceeds the material's shear strength.
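The leverage argument can be put in numbers with a short sketch; the handle force and the two distances from the pivot are arbitrary illustrative values.

```python
# First-class lever: force at the cut = applied force * (handle distance / cut distance),
# with both distances measured from the pivot. All numbers below are illustrative.
def cutting_force(applied_force_n, handle_dist_cm, cut_dist_cm):
    return applied_force_n * handle_dist_cm / cut_dist_cm

# 20 N squeezed at 8 cm from the pivot, material held 2 cm from the pivot:
print(f"{cutting_force(20.0, 8.0, 2.0):.0f} N at the blades")   # 80 N, a 4x advantage
```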
Some scissors have an appendage, called a finger brace or finger tang, below the index finger hole for the middle finger to rest on to provide for better control and more power in precision cutting. A finger tang can be found on many quality scissors (including inexpensive ones) and especially on scissors for cutting hair (see hair scissors pictured below). In hair cutting, some claim the ring finger is inserted where some place their index finger, and the little finger rests on the finger tang.
For people who do not have the use of their hands, there are specially designed foot-operated scissors. Some quadriplegics can use a motorized mouth-operated style of scissor.
Right-handed and left-handed scissors
Most scissors are best suited for use with the right hand, but left-handed scissors are designed for use with the left hand. Because scissors have overlapping blades, they are not symmetric. This asymmetry is true regardless of the orientation and shape of the handles and blades: the blade that is on top always forms the same diagonal regardless of orientation. Human hands are asymmetric, and when closing the scissors the thumb and fingers do not close vertically, but have a lateral component to the motion. Specifically, the thumb pushes out from the palm and the fingers pull inwards. For right-handed scissors held in the right hand, the thumb blade is closer to the user's body, so that the natural tendency of the right hand is to push the cutting blades together. Conversely, if right-handed scissors are held in the left hand, the natural tendency of the left hand would be to push the cutting blades apart. Furthermore, with right-handed scissors held by the right hand, the shearing edge is visible, but when they are used with the left hand, the cutting edge of the scissors is behind the top blade, and the cutter cannot see what is being cut.
There are two varieties of left-handed scissors. Many common left-handed scissors (often called "semi"-left-handed scissors) simply have reversed finger grips. The blades open and close as with right-handed scissors, so that users tend to pull the blades apart as they are cutting. This can be challenging for craftspeople as the blades still obscure the cut. "True" left-handed scissors have both reversed finger grips and reversed blade layout, like mirror images of right-handed scissors. A left-handed person accustomed to using semi-left handed scissors may find using true left-handed scissors difficult at first, as they may have learned to rely heavily on the strength of their thumb to pull the blades apart vs. pushing the blades together in order to cut.
Some scissors are marketed as ambidextrous. These have symmetric handles so there is no distinction between the thumb and finger handles, and have very strong pivots so that the blades rotate without any lateral give. However, most "ambidextrous" scissors are in fact still right-handed in that the upper blade is on the right, and hence is on the outside when held in the right hand. Even if they cut successfully, the blade orientation will block the view of the cutting line for a left-handed person. True ambidextrous scissors are possible if the blades are double-edged and one handle is swung all the way around (to almost 360 degrees) so that what were the backs of the blades become the new cutting edges. A patent has been awarded for true ambidextrous scissors.
Specialized scissors and shears
Specialized scissors and shears include:
Gardening, agriculture and animal husbandry
Food and drug
Grooming
Metalwork
Medical
Ceremonial
Sewing and clothes-making
In popular culture
Due to their ubiquity across cultures and classes, scissors have numerous representations across world culture.
Art
Numerous art forms worldwide enlist scissors as a tool/material with which to accomplish the art. For cases where scissors appear in or are represented by the final art product, see Commons:Category:Scissors in art.
Film
Dead Again is a 1991 film starring Kenneth Branagh and Emma Thompson in a thriller revolving around repressed memories of scissors.
Edward Scissorhands is a 1990 film starring Johnny Depp as a young man who has hands made of multiple pairs of scissors.
Running with Scissors is a 2006 film based on the memoir of the same title.
Us is a 2019 psychological horror film directed by Jordan Peele about a family confronted by their scissor-wielding doppelgängers.
Games
The game Rock paper scissors involves two or more players making shapes with their hands to determine the outcome of the game. One of the three shapes, 'scissors', is made by extending the index and middle fingers to mimic the shape of most scissors.
In the horror video game franchise, Clock Tower, there is a character called Scissorman. Although the identity is usually taken by multiple individuals throughout the series, Scissorman is usually portrayed as a demonic serial killer with a giant pair of scissors, and kills anyone without showing any signs of mercy or remorse.
An anthropomorphic pair of scissors appears as a boss in Paper Mario: The Origami King. Various additions of scissor related activity appear as well, such as a variation of Rock paper scissors.
Literature
Heinrich Hoffmann's 1845 children's book Struwwelpeter includes Die Geschichte vom Daumenlutscher ("The Story of the Thumb-Sucker") in which a child continues to suck his thumbs despite his mother's warnings about The Great Tall Scissorman.
Augusten Burroughs' 2002 memoir Running with Scissors spent eight weeks on the New York Times best seller list. The book was later adapted into a film.
Music
Running with Scissors is the title of a 1999 album by "Weird Al" Yankovic.
The song "The Tailor Shop on Enbizaka (円尾坂の仕立屋 Enbizaka no Shitateya)" from Vocaloid producer Akuno-P tells a story about a tailor that kills a man and his family, whom she mistakes for her unfaithful lover and his three mistresses, using her sewing scissors.
The XTC song "Scissor Man", later covered by Primus.
"Save Your Scissors" – song by City and Colour.
The song "Scissors" by American Rock Band "Slipknot"
Proverbs
Proverbs about scissors are found in many language communities.
"Dull scissors don't cut straight." English
"An old bachelor is only half a pair of scissors." English
"A man without a woman like half a scissors, that would not cut but scratch." Romanian
"Scissors do not cut out the scissors' nail." Hungarian
"A face shaped like petals of the lotus, a voice as cool as sandal, a heart like a pair of scissors, and excessive humility, these are the signs of a rogue." Sanskrit
"Those who have scissors are many but those who sew are none." Pagu
"Spoon, fork, scissors, and lamp are not for little children." Volga German
Sport
The term 'scissor kick' may be found in several sports, including:
Scissor kick (strike), a generic martial arts term for any of a number of moves that may resemble the appearance or action of a pair of scissors.
Bicycle kicks in football are sometimes known as 'scissor kicks'.
Swimming strokes including the sidestroke incorporate a leg movement often known as a 'scissor kick'.
Superstition
Scissors have a widespread place in cultural superstitions. In many cases, the details of the superstition may be specific to a given country, region, tribe, religion or even situation.
Africa
In parts of North Africa, it was held that scissors could be used to curse a bridegroom. When the bridegroom was on horseback, the person enacting the curse would stand behind him with the scissors open and call his name. If the bridegroom answered to his name being called, the scissors would then be snapped shut and the bridegroom would be unable to consummate his marriage with his bride.
Asia
In Pakistan, some believe that scissors should never be idly opened and closed without purpose; this is believed to cause bad luck.
Western Europe
As iron was believed to ward off fairies, British parents traditionally hung a pair of iron scissors over cradles to keep fairies away. Sometimes the scissors were kept open to make the shape of a cross for extra protection.
North America
United States
In New Orleans, some believed that putting an open pair of scissors underneath one's pillow at night ensured sound sleep, even for a person who had been cursed.
Eastern Europe
In some Eastern European countries, it is believed that leaving scissors open causes fights and disagreements within a household.
China
In China, it is believed that to give scissors to a friend or loved one is to be cutting ties with them.
Science
Scissors have lent their name to various subjects in the sciences, including the names and descriptions of animals and natural features.
Nature
Animals named after scissors include:
Birds
The scissor-tailed flycatcher of North and Central America.
The scissor-tailed hummingbird
The scissor-tailed kite, a bird that is widespread throughout Africa.
The scissor-tailed nightjar of South America.
Fish
The scissor-tail rasbora, several species of fish that are commonly used for freshwater aquariums.
Gallery
| Technology | Hand tools | null |
254945 | https://en.wikipedia.org/wiki/Hinge | Hinge | A hinge is a mechanical bearing that connects two solid objects, typically allowing only a limited angle of rotation between them. Two objects connected by an ideal hinge rotate relative to each other about a fixed axis of rotation, with all other translations or rotations prevented; thus a hinge has one degree of freedom. Hinges may be made of flexible material or moving components. In biology, many joints function as hinges, such as the elbow joint.
History
Ancient remains of stone, marble, wood, and bronze hinges have been found. Some date back to at least Ancient Egypt, although it is nearly impossible to pinpoint exactly where and when the first hinges were used.
In Ancient Rome, hinges were called cardō and gave name to the goddess Cardea and the main street Cardo. This name cardō lives on figuratively today as "the chief thing (on which something turns or depends)" in words such as cardinal.
According to the Oxford English Dictionary, the English word hinge is related to hang.
Door hinges
Barrel hinge A barrel hinge consists of a sectional barrel (the knuckle) secured by a pivot. A barrel is simply a hollow cylinder. The vast majority of hinges operate on the barrel principle.
Butt hinge / Mortise hinge Any hinge designed to be set into a door frame and/or door is considered a butt hinge or mortise hinge. A hinge can also be made as a half-mortise, where only one half is mortised and the other is not. Most mortise hinges are also barrel hinges because of how they pivot (i.e., a pair of leaves secured to each other by knuckles through which runs a pin).
Butterfly / Parliament (UK) hinge A decorative variety of barrel hinge with leaves somewhat resembling the wings of a butterfly.
Case hinge Similar to butt hinges, but usually more decorative; most commonly used in suitcases, briefcases, and the like.
Concealed hinge Used for furniture doors (with or without self-closing features and/or damping systems), they consist of two parts: (1.) the cup and arm, and (2.) the mounting plate. They are also called "cup hinges" or "Euro hinges", as they were developed in Europe and use metric installation standards. Most concealed hinges offer the advantage of full in situ adjustability for standoff distance from the cabinet face, as well as pitch and roll, by means of two screws on each hinge.
Continuous / Piano hinge This variety of barrel hinge runs the entire length of a door, panel, box, etc. They are manufactured with or without holes.
Flag hinge A simple two-part hinge, where a single leaf, attached to a pin, is inserted into a leaf with a hole. This allows the hinged objects to be easily removed (such as removable doors). They are made in right- and left-hand configurations.
H hinge These H-shaped barrel hinges are used on flush-mounted doors. Small H hinges () tend to be used for cabinets, while larger ones () are for passage doors and closet doors.
HL hinge Commonly used for passage doors, room doors, and closet doors in the 17th, 18th, and the 19th centuries. On taller doors, H hinges were occasionally used between them.
Pivot hinge This hinge pivots in openings in the floor and the top of the door frame. Also referred to as double-acting floor hinges, they are found in ancient dry stone buildings and, rarely, in old wooden buildings. They are a low-cost alternative for use with lightweight doors. Doors with these hinges may be called haar-hung doors.
Self-closing hinge A spring-loaded hinge with a speed-control function. Like a spring hinge, it uses a spring to provide the force that closes the door, but adds a mechanical or hydraulic damper to control the closing speed, preventing the door from slamming as it closes automatically.
Spring hinge A spring-loaded hinge that provides assistance in closing or opening the hinge leaves. An inner spring applies force to keep the hinge closed or opened.
Swing clear hinge Also called offset door hinges, these are used on residential and commercial doors to let the door swing completely clear of its opening. They can help comply with Fair Housing Act (FHA) code by providing a minimum ADA 32” clearance when using a 34” door slab.
Living hinge A hinge of flexible plastic that creates a join between two objects without any knuckles or pins. Molded as a single piece, they never rust or squeak, and have several other advantages over other hinges, but are more susceptible to breakage.
Other types of hinges include:
Coach
Counter Flap
Cranked or storm-proof
Double action non-spring
Double action spring
Flush
Friction
Lift-off
Pinge (with a quick-release pin)
Rising butt
Security
Tee
Building access
Since at least medieval times, hinges have been used on drawbridges for defensive purposes in fortified buildings. Hinges are used in contemporary architecture where building settlement can be expected over the life of the building. For example, the Dakin Building in Brisbane, California, was designed with its entrance ramp on a large hinge to allow settlement of the building, which was built on piles over bay mud. This device was effective until October 2006, when it was replaced because of damage and excessive ramp slope.
Large structures
Hinges appear in large structures such as elevated freeway and railroad viaducts, to reduce or eliminate the transfer of bending stresses between structural components, typically in an effort to reduce sensitivity to earthquakes. The primary reason for using a hinge, rather than a simpler device such as a slide, is to prevent the separation of adjacent components. When no bending stresses are transmitted across the hinge, it is called a zero moment hinge.
Spacecraft
A variety of self-actuating, self-locking hinges have been developed for spacecraft deployable structures such as solar array panels, synthetic aperture radar antennas, booms, radiators, etc.
Terminology
Components
Pin The rod that holds the leaves together, inside the knuckle. Also known as a pintle.
Knuckle The hollow—typically circular—portion creating the joint of the hinge through which the pin is set. The knuckles of either leaf typically alternate and interlock with the pin passing through all of them. (aka. loop, joint, node or curl)
Leaf The portions (typically two) that extend laterally from the knuckle and typically revolve around the pin.
Characteristics
End play Axial movement between the leaves along the axis of the pin. This motion allows the leaves to rotate without binding and is determined by the typical distance between knuckles (knuckle gap) when both edges of the leaves are aligned.
Gauge Thickness of the leaves.
Hinge width Length from the outer edge of one leaf to the outer edge of the other leaf, perpendicularly across the pin (aka open width).
Hinge length The length of the leaves parallel to the pin.
Knuckle length The typical length of an individual knuckle parallel to the pin.
Leaf width Length from the center of the pin to the outer edge of the leaf.
Pitch Distance from the end of a knuckle to the same edge of its adjacent knuckle on the same leaf
Door Stop A colloquialism referring to loose angular movement of the leaves relative to the pin.
Other types
Butler tray hinge Folds to 90 degrees and also snaps flat. They are for tables that have a tray top for serving.
Card table hinge Mortised into edge of antique or reproduction card tables and allow the top to fold onto itself.
Carpentier joint A hinge consisting of several thin metal strips of curved cross section.
Drop-leaf table hinge Mounted under the surface of a table with leaves that drop down. They are most commonly used with rule joints.
Hinged expansion joint an expansion joint with hinges that allow the unit to bend in a single plane
Hinged handcuffs a restraint device designed to secure an individual's wrists in proximity to each other, consisting of two cuffs linked by a double or triple hinge. Hinged handcuffs tend to restrict movement more than chain-linked handcuffs, and they can be used to generate more leverage to force a suspect's hands behind the back, or to apply pain against the wrist, forcing the subject to comply and stop resisting.
Hinge region portion of antibody structure between the fragment antigen-binding region and the fragment crystallizable region
Living hinge a hinge consisting of material that flexes
Piano hinge (or coffin hinge) a long hinge, originally used for piano lids, but now used in many other applications where a long hinge is needed.
Gallery
| Technology | Mechanisms | null |
255084 | https://en.wikipedia.org/wiki/Sawshark | Sawshark | A sawshark or saw shark is a member of a shark order (Pristiophoriformes) bearing a unique long, saw-like rostrum (snout or bill) edged with sharp teeth, which they use to slash and disable their prey. There are eight species within the Pristiophoriformes, including the longnose or common sawshark (Pristiophorus cirratus), shortnose sawshark (Pristiophorus nudipinnis), Japanese sawshark (Pristiophorus japonicus), Bahamas sawshark (Pristiophorus schroederi), sixgill sawshark (Pliotrema warreni), African dwarf sawshark (Pristiophorus nancyae), Lana's sawshark (Pristiophorus lanae) and the tropical sawshark (Pristiophorus delicatus).
Sawsharks are found in many areas around the world, most commonly in waters from the Indian Ocean to the southern Pacific Ocean. They are normally found at depths around 40–100 m, but can be found much lower in tropical regions. The Bahamas sawshark was discovered in deeper waters (640 m to 915 m) of the northwestern Caribbean.
Description and life cycle
Sawsharks have a pair of long barbels about halfway along the snout. They have two dorsal fins, but lack anal fins. Genus Pliotrema has six gill slits, and Pristiophorus the more usual five. The teeth of the saw typically alternate between large and small. Saw sharks reach a length of up to 5 feet and a weight of 18.7 pounds, with females tending to be slightly larger than males.
The body of a longnose saw shark is covered in tiny placoid scales: modified teeth covered in hard enamel. The body is a yellow-brown color which is sometimes covered in dark spots or blotches. This coloration allows the saw shark to easily blend with the sandy ocean floor.
These sharks typically feed on small fish, squid, and crustaceans, depending on species. The function of the sawshark's barbels is not well understood, and neither is how it uses its rostrum. It is possible they use it in a similar fashion to sawfishes, hitting prey with side-to-side swipes of the saw to cripple it. The saw could also be used in defense against other predators. The saw is covered with specialized sensory organs (ampullae of Lorenzini) that detect the electric fields given off by buried prey.
The life history of saw sharks is still poorly understood. Mating occurs seasonally in coastal areas. Saw sharks are ovoviviparous, meaning the eggs hatch inside the mother. They have litters of 3–22 pups every two years. After 12 months of pregnancy, the pups are born at about 30 cm long. While inside the mother, the pups' rostral teeth are angled backwards to avoid harming her. The life expectancy of sawsharks is also poorly understood, but they are thought to live 10 years or more.
Human interaction
All species of sawshark are listed on the 2017 IUCN Red List as either data deficient or of least concern. Saw sharks see little human interaction because of their deep habitats.
Species
There are currently ten known species of sawsharks across two genera in this family:
Pliotrema Regan, 1906
Pliotrema annae Weigmann, Gon, Leeney & Temple, 2020 (Anna's sixgill sawshark)
Pliotrema kajae Weigmann, Gon, Leeney & Temple, 2020 (Kaja's sixgill sawshark)
Pliotrema warreni Regan, 1906 (Warren's sixgill sawshark)
Pristiophorus J. P. Müller & Henle, 1837
Pristiophorus cirratus (Latham, 1794) (longnose sawshark or common sawshark)
Pristiophorus delicatus Yearsley, Last & White, 2008 (tropical sawshark)
Pristiophorus japonicus Günther, 1870 (Japanese sawshark)
Pristiophorus lanae Ebert & Wilms, 2013 (Lana's sawshark or Philippine Sawshark)
Pristiophorus nancyae Ebert & Cailliet, 2011 (African dwarf sawshark)
Pristiophorus nudipinnis Günther, 1870 (shortnose or southern sawshark)
Pristiophorus schroederi S. Springer & Bullis, 1960 (Bahamas sawshark)
Sixgill sawshark
The sixgill sawshark (Pliotrema warreni) is known for its six pairs of gills located on its sides close to the head. They are pale brown in color, with a white underbelly. Along with their color, their size sets them apart from other sawsharks: females are around 136 cm, whereas males are around 112 cm. Sixgill sawsharks feed on shrimp, squid, and bony fish. They are located around the southern portion of South Africa and Madagascar; where found, they are considered a prize catch. They dwell in the range of 37–500 m, preferring to stay in warmer water. They have between 5 and 7 pups from 7–17 eggs, giving birth at depths of 37–50 m to keep the pups warm.
Longnose or common sawshark
The longnose sawshark, also known as the common sawshark (Pristiophorus cirratus), is a member of the family Pristiophoridae. Its distinctive physical characteristics include a long, thin, flattened snout. Midway along the snout, nasal barbels protrude on both sides, and near the barbels the longnose sawshark possesses a pair of ampullae of Lorenzini. It is unique among sawsharks in having a longer snout than any related species. The longnose sawshark is not very large, with lengths ranging from around 14 inches at birth to 38 inches in males and 44 inches in females, and a weight of up to 18.7 pounds. They are known to swim in the waters off the southern coast of Australia's continental shelf and can also be found in the eastern portion of the Indian Ocean. The longnose sawshark swims in both the open sea and coastal regions, from the surface to a depth of 600 m. It preys mainly on small crustaceans, using its barbels to detect prey on the ocean floor, which it then strikes with its snout to immobilize. Like all other sawsharks, the common sawshark has a long snout with rows of small teeth and barbels on either side. It has five gill slits on either side of its head and between 19 and 25 teeth on each side. Sawsharks appear to be one of the types of elasmobranch that are difficult or impossible to age using the most commonly used approaches, which rely on vertebral banding.
Shortnose or southern sawshark
The shortnose sawshark, also known as the southern sawshark (Pristiophorus nudipinnis), is found in south-eastern Australian waters. Much of its distribution overlaps with that of the common sawshark; however, it seems to occur less frequently. This species is similar in size to the common sawshark, but has a broader rostrum (saw) and a more even brown coloration, and it grows to be heavier. Since the color pattern of the common sawshark may be more or less defined, the easiest way to separate the two species is the location of the barbels, which in the southern sawshark are closer to the mouth than in the co-occurring common sawshark. Unlike the common sawshark, the southern sawshark likely feeds mainly on fishes.
Tropical sawshark
The tropical sawshark (Pristiophorus delicatus) is a pale brown with a yellow hue, and has an underbelly that is pale yellow to white. This deep-water fish is found off the northeastern shore of Australia, at depths of 176–405 m. It averages about 95 cm in length. Other than its location and appearance, little is known of the species; it is hard to catch because of its ability to retreat into the depths of the ocean.
Japanese sawshark
The Japanese sawshark (Pristiophorus japonicus) is a species of sawshark that lives off the coasts of Japan, Korea, and northern China. It swims at a depth of 500 m. It has around 15–26 large rostral teeth in front of the barbels, which lie roughly equidistant between the gills and the tip of the snout, and about 9–17 teeth behind the barbels. Like all sawsharks, the Japanese sawshark is ovoviviparous, and it feeds on crustaceans and bottom-dwelling organisms.
Lana's sawshark
Lana's sawshark (Pristiophorus lanae) is a species of sawshark that inhabits the Philippine coast. It was discovered in 1966 by Dave Ebert, who distinguished it as a new species of sawshark based on its number of rostral teeth. Lana's sawshark was named after Lana Ebert on the occasion of her graduation from the University of San Francisco. It has a dark uniform brown color on the dorsal side and a pale white on the ventral side. It is slender bodied, has five gills on each side, and can grow to be around 70 cm.
African dwarf sawshark
The African dwarf sawshark (Pristiophorus nancyae) is a small five-gilled sawshark that lives off the coast of Mozambique. It was first discovered in 2011, when a specimen was caught off the Mozambican coast at a depth of 1,600 ft. It has since been spotted off the coasts of Kenya and Yemen. It can be distinguished from other sawsharks by its location, and by having its barbels closer to its mouth than to the end of its rostrum. It is brownish grey in color, becoming white along the ventral side. Little else is known about the African dwarf sawshark, as it is a newly discovered species.
Shortnose sawshark
The shortnose sawshark (Pristiophorus nudipinnis) is similar to the longnose sawshark; however, it has a slightly compressed body and a shorter, narrower rostrum. It has 13 teeth in front of its barbels and 6 behind. The shortnose sawshark tends to be uniformly slate grey with no markings on its dorsal side and pale white or cream on its ventral side. Females reach around 124 cm (49 in) long, and males reach around 110 cm (43 in) long. These sharks can live to be up to 9 years old. Like other sawsharks, the shortnose sawshark lives a benthic lifestyle and feeds on benthic invertebrates, using its barbels to detect life on the ocean floor, which it then paralyzes with its rostrum. The species is ovoviviparous and tends to give birth to a litter of 7–14 pups biannually. It inhabits ocean floors off the coast of Australia.
Bahamas sawshark
Very little information is available on the Bahamas sawshark (Pristiophorus schroederi), and studies are ongoing to learn more about this deep-sea dweller. They are located near Cuba, Florida, and the Bahamas (hence the name), where they dwell at depths of 400–1000 m. They can be identified by their saw-like toothed snouts and by their size, averaging about 80 cm in length.
Comparison with sawfish
Saw sharks and sawfish are cartilaginous fish possessing large saws, and they are the only two groups of fish with a long, blade-like snout. Although they are similar in appearance, saw sharks are distinct from sawfish. Sawfish are not sharks, but a type of ray: the gill slits of sawfishes are positioned on the underside like a ray's, while the gill slits of the saw shark are positioned on the side like a shark's. Sawfish can be much larger, lack barbels, and have evenly sized teeth rather than the alternating teeth of the saw shark. A clear difference is that a sawfish has no barbels, whereas a saw shark has a prominent pair halfway along the saw. The saw shark uses these like other bottom-dwelling fish, as a kind of antennae, feeling its way along the ocean bottom until it finds prey of interest. Both the saw shark and the sawfish use the electroreceptors on the saw, the ampullae of Lorenzini, to detect the electric field given off by buried prey.
| Biology and health sciences | Sharks | Animals |
255158 | https://en.wikipedia.org/wiki/Grain%20elevator | Grain elevator | A grain elevator or grain terminal is a facility designed to stockpile or store grain. In the grain trade, the term "grain elevator" also describes a tower containing a bucket elevator or a pneumatic conveyor, which scoops up grain from a lower level and deposits it in a silo or other storage facility.
In most cases, the term "grain elevator" also describes the entire elevator complex, including receiving and testing offices, weighbridges, and storage facilities. It may also mean organizations that operate or control several individual elevators, in different locations. In Australia, the term describes only the lifting mechanism.
Before the advent of the grain elevator, grain was usually handled in bags rather than in bulk (large quantities of loose grain). The Dart elevator was a major innovation—it was invented by Joseph Dart, a merchant, and Robert Dunbar, an engineer, in 1842, in Buffalo, New York. Using the steam-powered flour mills of Oliver Evans as their model, they invented the marine leg, which scooped loose grain out of the hulls of ships and elevated it to the top of a marine tower.
Early grain elevators and bins were often built of framed or cribbed wood, and were prone to fire. In 1899, Frank H. Peavey, "the Elevator King", along with Charles F. Haglin, invented the modern grain elevator. The first Peavey–Haglin Experimental Concrete Grain Elevator still stands today in St. Louis Park, Minnesota. Peavey's elevator was the first cylindrical concrete grain elevator in the world, a design now widely used across Canada and the US.
Grain elevator bins, tanks, and silos are now usually made of steel or reinforced concrete. Bucket elevators are used to lift grain to a distributor or consignor, from which it falls through spouts and/or conveyors and into one or more bins, silos, or tanks in a facility. When desired, silos, bins, and tanks are emptied by gravity flow, sweep augers, and conveyors. As grain is emptied from bins, tanks, and silos, it is conveyed, blended, and weighed into trucks, railroad cars, or barges for shipment.
Usage and definitions
In Australian English, the term "grain elevator" is reserved for elevator towers, while a receival and storage building or complex is distinguished by the formal term "receival point" or as a "wheat bin" or "silo". Large-scale grain receival, storage, and logistics operations are known in Australia as bulk handling.
In Canada, the term "grain elevator" is used to refer to a place where farmers sell grain into the global grain distribution system, and/or a place where the grain is moved into rail cars or ocean-going ships for transport. Specifically, several types of grain elevators are defined under Canadian law, in the Canadian Grain Act, section 2.
Primary elevators (called "country elevators" before 1971) receive grain directly from producers for storage, forwarding, or both.
Process elevators (called "mill elevators" before 1971) receive and store grain for direct manufacture or processing into other products.
Terminal elevators receive grain on or after official inspection and weighing and clean, store, and treat grain before moving it forward.
Transfer elevators (including "Eastern elevators" from the pre-1971 classification) transfer grain that has been officially inspected and weighed at another elevator. In the Eastern Division, transfer elevators also receive, clean, and store eastern or foreign grain.
History
Both necessity and the prospect of making money gave birth to the steam-powered grain elevator in Buffalo, New York, in 1843. Due to the completion of the Erie Canal in 1825, Buffalo enjoyed a unique position in American geography. It stood at the intersection of two great all-water routes; one extended from New York Harbor, up the Hudson River to Albany, and beyond it, the Port of Buffalo; the other comprised the Great Lakes, which could theoretically take boaters in any direction they wished to go (north to Canada, west to Michigan or Wisconsin, south to Toledo and Cleveland, or east to the Atlantic Ocean). All through the 1830s, Buffalo benefited tremendously from its position. In particular, it was the recipient of most of the increasing quantities of grain (mostly wheat) that was being grown on farms in Ohio and Indiana, and shipped on Lake Erie for trans-shipment to the Erie Canal. If Buffalo had not been there, or when things got backed up there, that grain would have been loaded onto boats at Cincinnati and shipped down the Mississippi River to New Orleans.
By 1842, Buffalo's port facilities clearly had become antiquated. They still relied upon techniques that had been in use since the European Middle Ages: work teams of stevedores used block and tackle and their own backs to unload or load each sack of grain that had been stored ashore or in the boat's hull. Several days, sometimes even a week, were needed to serve a single grain-laden boat. Grain shipments were going down the Mississippi River, not over the Great Lakes/Erie Canal system.
A merchant named Joseph Dart Jr., is generally credited as being the one who adapted Oliver Evans' grain elevator (originally a manufacturing device) for use in a commercial framework (the trans-shipment of grain in bulk from lakers to canal boats), but the actual design and construction of the world's first steam-powered "grain storage and transfer warehouse" was executed by an engineer named Robert Dunbar. Thanks to the historic Dart's Elevator (operational on 1 June 1843), which worked almost seven times faster than its nonmechanized predecessors, Buffalo was able to keep pace with—and thus further stimulate—the rapid growth of American agricultural production in the 1840s and 1850s, but especially after the Civil War, with the coming of the railroads.
The world's second and third grain elevators were built in Toledo, Ohio, and Brooklyn, New York, in 1847. These fledgling American cities were connected through an emerging international grain trade of unprecedented proportions. Grain shipments from farms in Ohio were loaded onto ships by elevators at Toledo; these ships were unloaded by elevators at Buffalo that shipped their grain to canal boats (and, later, rail cars), which were unloaded by elevators in Brooklyn, where the grain was either distributed to East Coast flour mills or loaded for further shipment to England, the Netherlands, or Germany. This eastern flow of grain, though, was matched by an equally important flow of people and capital in the opposite direction, that is, from east to west. Because of the money to be made in grain production, and of course, because of the existence of an all-water route to get there, increasing numbers of immigrants in Brooklyn came to Ohio, Indiana, and Illinois to become farmers. More farmers meant more prairies turned into farmlands, which in turn meant increased grain production, which of course meant that more grain elevators would have to be built in places such as Toledo, Buffalo, and Brooklyn (and Cleveland, Chicago, and Duluth). Through this loop of productivity set in motion by the invention of the grain elevator, the United States became a major international producer of wheat, corn, and oats.
In the early 20th century, concern arose about monopolistic practices in the grain elevator industry, leading to testimony before the Interstate Commerce Commission in 1906. This led to several grain elevators being burned down in Nebraska, allegedly in protest.
Today, grain elevators are a common sight in the grain-growing areas of the world, such as the North American prairies. Larger terminal elevators are found at distribution centers, such as Chicago and Thunder Bay, Ontario, where grain is sent for processing, or loaded aboard trains or ships to go further afield.
Buffalo, New York, the world's largest grain port from the 1850s until the first half of the 20th century, once had the United States' largest capacity for the storage of grain in over 30 concrete grain elevators located along the inner and outer harbors. While several are still in productive use, many of those that remain are presently idle. In a nascent trend, some of the city's inactive capacity has recently come back online, with an ethanol plant started in 2007 using one of the previously mothballed elevators to store corn. In the early 20th century, Buffalo's grain elevators inspired modernist architects such as Le Corbusier, who exclaimed, "The first fruits of the new age!" when he first saw them. Buffalo's grain elevators have been documented for the Historic American Engineering Record and added to the National Register of Historic Places. Currently, Enid, Oklahoma, holds the title of most grain storage capacity in the United States.
In farming communities, each town had one or more small grain elevators that served the local growers. The classic grain elevator was constructed with wooden cribbing and had nine or more larger square or rectangular bins arranged in 3 × 3 or 3 × 4 or 4 × 4 or more patterns. Wooden-cribbed elevators usually had a driveway with truck scale and office on one side, a rail line on the other side, and additional grain-storage annex bins on either side.
In more recent times with improved transportation, centralized and much larger elevators serve many farms. Some of them are quite large. Two elevators in Kansas (one in Hutchinson and one in Wichita) are half a mile long. The loss of the grain elevators from small towns is often considered a great change in their identity, and efforts to preserve them as heritage structures are made. At the same time, many larger grain farms have their own grain-handling facilities for storage and loading onto trucks.
Elevator operators buy grain from farmers, either for cash or at a contracted price, and then sell futures contracts for the same quantity of grain, usually each day. They profit from the narrowing of the "basis", that is, the difference between the local cash price and the futures price, which occurs at certain times of the year.
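To make the basis mechanism concrete, the following sketch runs through one hedged purchase with entirely hypothetical prices; the figures and variable names are assumptions for illustration, not data from any actual elevator or exchange.

```python
# Hypothetical illustration of elevator "basis" trading (all prices invented).
# The elevator buys grain for cash and simultaneously sells futures, so its
# margin comes from the basis (cash minus futures) narrowing, not from the
# outright price level.

def basis(cash_price: float, futures_price: float) -> float:
    """Basis is the local cash price minus the futures price."""
    return cash_price - futures_price

# At harvest the elevator buys grain and hedges it by selling futures.
cash_at_purchase = 4.20       # $/bushel paid to the farmer (assumed)
futures_at_purchase = 4.60    # $/bushel futures price when the hedge is placed (assumed)

# Months later it sells the grain and lifts the hedge by buying futures back.
cash_at_sale = 4.55           # $/bushel received for the grain (assumed)
futures_at_sale = 4.70        # $/bushel futures price when the hedge is lifted (assumed)

gain_on_grain = cash_at_sale - cash_at_purchase          # +0.35
gain_on_futures = futures_at_purchase - futures_at_sale  # -0.10 (the hedge loses a little)
net_margin = gain_on_grain + gain_on_futures             # +0.25 per bushel

# The net margin equals the change in basis between purchase and sale.
basis_change = basis(cash_at_sale, futures_at_sale) - basis(cash_at_purchase, futures_at_purchase)
assert abs(net_margin - basis_change) < 1e-9
print(f"Net margin per bushel: ${net_margin:.2f}")
```

In this invented scenario the price level barely matters: the operator earns the 0.25 per bushel by which the basis narrowed, regardless of whether futures rose or fell.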
Before economical truck transportation was available, grain elevator operators sometimes used their purchasing power to control prices. This was especially easy, since farmers often had only one elevator within a reasonable distance of their farms. This led some governments to take over the administration of grain elevators. An example of this is the Saskatchewan Wheat Pool. For the same reason, many elevators were purchased by cooperatives.
A recent problem with grain elevators is the need to provide separate storage for ordinary and genetically modified grain to reduce the risk of accidental mixing of the two.
In the past, grain elevators sometimes experienced silo explosions. Fine powder from the millions of grains passing through the facility would accumulate and mix with the oxygen in the air, and a spark could spread from one floating particle to another, creating a chain reaction that would destroy the entire structure. (This dispersed-fuel explosion is the mechanism behind fuel-air bombs.) To prevent this, elevators have very rigorous rules against smoking or any other open flame. Many elevators also have various devices installed to maximize ventilation, safeguards against overheating in belt conveyors, legs, and bearings, and explosion-proof electrical devices such as electric motors, switches, and lighting.
Grain elevators in small Canadian communities often had the name of the community painted on two sides of the elevator in large block letters, with the name of the elevator operator emblazoned on the other two sides. This made identification of the community easier for rail operators (and incidentally, for lost drivers and pilots). The old community name often remained on an elevator long after the town had either disappeared or been amalgamated into another community; the grain elevator at Ellerslie, Alberta, remained marked with its old community name until it was demolished, which took place more than 20 years after the village had been annexed by Edmonton.
One of the major historical trends in the grain trade has been the closure of many smaller elevators and the consolidation of the grain trade to fewer places and among fewer companies. For example, in 1961, 1,642 "country elevators" (the smallest type) were in Alberta, holding of grain. By 2010, only 79 "primary elevators" (as they are now known) remained, holding .
Despite this consolidation, overall storage capacity has increased in many places. In 2017, the United States had of storage capacity, a growth of 25% over the previous decade.
Elevator Alley
The city of Buffalo is not only the birthplace of the modern grain elevator, but also has the world's largest number of extant examples. A number of the city's historic elevators are clustered along "Elevator Alley", a narrow stretch of the Buffalo River immediately adjacent to the harbor. The alley runs under Ohio Street and along Childs Street in the city's First Ward neighborhood.
Elevator row
In Canada, the term "elevator row" refers to a row of four or more wood-crib prairie grain elevators.
In the early pioneer days of Western Canada's prairie towns, when a good farming spot was settled, many people wanted to make money by building their own grain elevators. This brought in droves of private grain companies. Towns boasted dozens of elevator companies, which all stood in a row along the railway tracks. If a town were lucky enough to have two railways, it was to be known as the next Montreal. Many elevator rows had two or more elevators of the same company. Small towns bragged of their large elevator rows in promotional pamphlets to attract settlers. With so much competition in the 1920s, consolidation began almost immediately, and many small companies were merged or absorbed into larger companies.
In the mid-1990s, with the cost of grain so low, many private elevator companies once again had to merge, this time causing thousands of "prairie sentinels" to be torn down. Because so many grain elevators have been torn down, Canada has only two surviving elevator rows: one located in Inglis, Manitoba, and the other in Warner, Alberta. The Inglis Grain Elevators National Historic Site has been protected as a National Historic Site of Canada. The Warner elevator row is, as of 2019, not designated a historic site, and is still in use as commercial grain elevators.
Elevator companies
Australia
ABB Grain was founded as a mutual company, the Australian Barley Board, in 1939, by barley growers in South Australia and Victoria; after demutualization, it was acquired by Viterra (see below) in 2009; Australian Bulk Alliance, a joint venture between ABB and Sumitomo, operates facilities in some areas.
CBH Group, a co-operative company, was established by grain growers in Western Australia, in 1933.
GrainCorp was established by the government of New South Wales in 1917, as Government Grain Elevator, and was privatized in 1992.
Canada
All companies operating elevators in Canada are licensed by the Canadian Grain Commission.
Agricore United was taken over by Saskatchewan Wheat Pool in 2007.
Alberta Farmers' Co-operative Elevator Company merged into United Grain Growers in 1917.
Alberta Pacific Grain Company was taken over by Federal Grain Co. in 1967.
Alberta Wheat Pool merged with Manitoba Pool Elevators in 1997.
Cargill was established in 1865 by W.W. Cargill.
Federal Grain was sold to the three provincial wheat pools in 1972.
Grain Growers' Grain Company merged into United Grain Growers in 1917.
Lake of the Woods Milling Company
Manitoba Pool Elevators merged with Alberta Wheat Pool in 1997.
Parrish & Heimbecker was established in 1909 by the two families of William Parrish and Norman G. Heimbecker.
Paterson Grain was established in 1908 as the N. M. Paterson Co.
Richardson International was established in 1857 by James Richardson; it is also known as Richardson Pioneer.
Saskatchewan Co-operative Elevator Company was taken over by the Saskatchewan Wheat Pool in 1926.
Saskatchewan Wheat Pool took over Agricore United in 2007 to form Viterra.
United Grain Growers was taken over by Agricore United in 2001.
Viterra was established after the take-over of Agricore United by the Saskatchewan Wheat Pool.
Sweden
In Sweden, the vast majority of grain elevators belong to the Lantmännen co-operative movement, owned by grain-growing farmers.
United States
ADM Milling
Cargill
General Mills
Monarch Engineering Co. (builder)
Montana Elevator Co.
Perdue Agribusiness
Scoular
Smithfield Grain
Southern States Cooperative
Tyson
United Grain Growers
Denmark
FM Bulk Handling - Fjordvejs
Notable grain elevators
This is a list of grain elevators that are either in the process of becoming heritage sites or museums, or have been preserved for future generations.
Canada
Alberta
Acadia Valley – Prairie Elevator Museum, former Alberta Wheat Pool converted into a tea house and museum
Alberta Central Railroad Museum – former Alberta Wheat Pool, second-oldest standing grain elevator in Alberta, moved from Hobbema
Castor – former Alberta Pacific, restored into a museum
Big Valley – Alberta Wheat Pool used as a museum complete with a train station and roundhouse
Edmonton – Ritchie Mill, former flour mill converted into restaurants, law offices, and condominiums
Heritage Acres Farm Museum – restored United Grain Growers elevator moved from Brocket
Heritage Park Historical Village, former Security Elevator Co. Ltd. moved from Shonts
Leduc – former Alberta Wheat Pool saved from demolition now a museum
Mayerthorpe – 1966 Federal Grain Co., now an interpretive center
Meeting Creek – a refurbished Alberta Wheat Pool, Pacific Grain elevator and CN train station
Nanton – Canadian Grain Elevator Discovery Centre, three elevators saved from demolition and preserved to educate visitors about the town's, and Alberta's, agricultural history
Radway – Krause Milling Co. restored into a museum
Scandia – Scandia Eastern Irrigation District Museum, 1920s Alberta Wheat Pool and stockyard now a museum
South Peace Centennial Museum, United Grain Growers moved from Albright
Spruce Grove – Spruce Grove Grain Elevator Museum, former Alberta Wheat Pool, now a museum
St. Albert – St. Albert Grain Elevator Park, a 1906 Alberta Grain Co. and 1929 Alberta Wheat Pool Elevators now restored as a historic park
Stettler – a 1920 Parrish and Heimbecker grain elevator, feed mill, and coal shed, last to stand in Alberta, now protected and restored as a museum
Ukrainian Cultural Heritage Village – former Home Grain Co. moved from Bellis
British Columbia
Creston – former Alberta Wheat Pool (1936) and United Grain Growers (1937) elevators on the edge of the downtown core in the Creston Valley. The two buildings were purchased by the Columbia Basin Trust in 2018. The wheat pool elevator was extensively refurbished and now includes an art gallery. The UGG elevator, however, is beyond feasible conservation, and CBT began deconstructing it in 2024, with care taken to re-purpose as much of the building materials as possible, including valuable first-growth timbers and historic equipment.
Manitoba
Inglis – Inglis Grain Elevators National Historic Site, last surviving elevator row in Manitoba with a total of four elevators. Now designated and protected as a National Historic Site of Canada
Niverville – Western Canada's first grain elevator, erected by William Hespeler in 1879
Quebec
Silo No. 5, Montreal – This grain elevator was completed in four stages from 1906 to 1959 and was abandoned in 1994. With the demolition of Silo No. 1 and Silo No. 2, Silo No. 5 is now, along with the Old Port's conveyor pier tower, the last vestige of Old Montreal's 20th-century harbour panorama.
Saskatchewan
Sukanen Ship Pioneer Village and Museum – former Victoria – McCabe moved from Mawer
North Battleford Western Development Museum, former Saskatchewan Wheat Pool moved from Keatley
South Africa
Port of Cape Town – once the tallest building in Cape Town, now restored to become the Zeitz Museum of Contemporary Art Africa
Switzerland
Swissmill Tower in upper Limmat Valley in the Canton of Zürich – high, rebuilt by April 2016.
United Kingdom
The Manchester Ship Canal grain elevator was completed in 1898. It had a capacity of 40,000 tons and its automatic conveying and spouting system could distribute grain into 226 bins.
United States
Maryland
Baltimore and Ohio Locust Point Grain Terminal Elevator, one of the largest grain terminal elevators to be constructed in the early 20th century, with a capacity of in Baltimore, Maryland
New York
American Grain Complex, built between 1905 and 1931
Cargill Pool Elevator, previously named the Saskatchewan Cooperative Elevator, was built in 1925 and offered a total holding capacity of in 135 bins
Concrete-Central Elevator, Buffalo, New York – The largest transfer elevator in the world at the time of its completion in 1917
Great Northern Elevator, built in 1897 by the Great Northern Railroad; demolished September 2022-May 2023.
Wollenberg Grain and Seed Elevator – wooden "country style" elevator formerly located in Buffalo, New York; destroyed by fire in October 2006
Illinois
Armour's Warehouse – constructed in 1861–62 on the north bank of the Illinois-Michigan Canal in Seneca, Illinois
Iowa
Historic Ely Elevator – also known as the Woitishek/King/Krob elevator and feed mill, constructed in 1900 in Ely, Iowa, and in continuous use for 121 years.
Minnesota
Ceresota Building was a receiving and public grain elevator built by the Northwestern Consolidated Milling Company in 1908 in Minneapolis, Minnesota
Peavey–Haglin Experimental Concrete Grain Elevator, St. Louis Park, Minnesota, built in 1899–1900
Saint Paul Municipal Grain Terminal, in St. Paul, Minnesota, on the NRHP
North Dakota
North Dakota Mill and Elevator, largest flour mill in the United States, located in Grand Forks, North Dakota
Oklahoma
Ingersoll Tile Elevator, elevator constructed of hollow red clay tiles, located in Ingersoll, Oklahoma
Pennsylvania
Reading Company Grain Elevator, export elevator in Philadelphia converted into offices
South Dakota
Zip Feed Tower, tallest occupiable structure in South Dakota from its construction in 1956–1957 until its demolition in December 2005
Virginia
Sewell's Point grain elevator, an export elevator built by the city of Norfolk in 1922 to help the port of Norfolk better compete with other East Coast ports by providing a publicly owned facility to store and load grain at reasonable rates. It was sold to the Norfolk and Western railroad in 1929, and leased from N&W by Continental Grain in 1952. The elevator originally held but was later expanded to . The elevator was taken over by Cargill in the late 1980s and abandoned around the turn of the 21st century. The elevator was demolished by Norfolk Southern in 2008.
Southern States silos, a grain elevator in Richmond, Virginia, originally built in the 1940s by Cargill and currently leased by Perdue Farms, is the tallest structure south of the James River in the city of Richmond. The elevator was the site of the 3rd RVA Street Art Festival.
Wisconsin
Chase Grain Elevator, a tile grain elevator built in 1922 in Sun Prairie, Wisconsin; placed on the National Register of Historic Places in 2010. It is the last remaining tile elevator in Wisconsin.
Wyoming
Sheridan Flouring Mills, Inc., an industrial complex in Sheridan, Wyoming
Elevator explosions
Given a large enough suspension of combustible flour or grain dust in the air, a significant explosion can occur. The 1878 explosion of the Washburn "A" Mill in Minneapolis, Minnesota, killed 18, leveled two nearby mills, damaged many others, and caused a destructive fire that gutted much of the nearby milling district. (The Washburn "A" mill was later rebuilt and continued to be used until 1965.) Another example occurred in 1998, when the DeBruce grain elevator in Wichita, Kansas, exploded and killed seven people. An explosion on October 29, 2011, at the Bartlett Grain Company in Atchison, Kansas, killed six people. Two more men received severe burns, but the remaining four were not hurt.
Almost any finely divided organic substance becomes an explosive material when dispersed as an air suspension; hence, a very fine flour is dangerously explosive in air suspension. This poses a significant risk when milling grain to produce flour, so mills go to great lengths to remove sources of sparks. These measures include carefully sifting the grain before it is milled or ground to remove stones, which could strike sparks from the millstones, and the use of magnets to remove metallic debris able to strike sparks.
The earliest recorded flour explosion took place in an Italian mill in 1785, but many have occurred since. Counts of recorded flour and dust explosions in the United States have been published for 1994 and 1997; in the ten-year period up to and including 1997, there were 129 explosions.
Media
Canadian Prairie grain elevators were the subjects of the National Film Board of Canada documentaries Grain Elevator and Death of a Skyline.
During the sixth season of the History Channel series Ax Men, one of the featured crews takes on the job of dismantling the Globe Elevator in Wisconsin. This structure was the largest grain-storage facility in the world when it was built in the 1880s.
| Technology | Buildings and infrastructure | null |
255244 | https://en.wikipedia.org/wiki/Seawater | Seawater | Seawater, or sea water, is water from a sea or ocean. On average, seawater in the world's oceans has a salinity of about 3.5% (35 g/L, 35 ppt, 600 mM). This means that every kilogram (roughly one liter by volume) of seawater has approximately of dissolved salts (predominantly sodium () and chloride () ions). The average density at the surface is 1.025 kg/L. Seawater is denser than both fresh water and pure water (density 1.0 kg/L at ) because the dissolved salts increase the mass by a larger proportion than the volume. The freezing point of seawater decreases as salt concentration increases. At typical salinity, it freezes at about . The coldest seawater still in the liquid state ever recorded was found in 2010, in a stream under an Antarctic glacier: the measured temperature was .
Seawater pH is typically limited to a range between 7.5 and 8.4. However, there is no universally accepted reference pH-scale for seawater and the difference between measurements based on different reference scales may be up to 0.14 units.
Properties
Salinity
Although the vast majority of seawater has a salinity of between 31 and 38 g/kg, that is 3.1–3.8%, seawater is not uniformly saline throughout the world. Where mixing occurs with freshwater runoff from river mouths, near melting glaciers, or where there are vast amounts of precipitation (e.g. the monsoon), seawater can be substantially less saline. The most saline open sea is the Red Sea, where high rates of evaporation, low precipitation, low river run-off, and confined circulation result in unusually salty water. The salinity in isolated bodies of water can be considerably greater still; it is about ten times higher in the case of the Dead Sea. Historically, several salinity scales were used to approximate the absolute salinity of seawater. A popular scale was the "Practical Salinity Scale", where salinity was measured in "practical salinity units" (PSU). The current standard for salinity is the "Reference Salinity" scale, with salinity expressed in units of g/kg.
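As a small illustration of the change of scale mentioned above, the sketch below converts Practical Salinity to Reference Salinity in g/kg; the fixed scale factor 35.16504/35 is the conversion adopted in the TEOS-10 standard, and the sample input is an assumed typical value.

```python
# Converting Practical Salinity (dimensionless, "PSU") to Reference Salinity
# in g/kg.  The scale factor 35.16504/35 g/kg is the fixed conversion adopted
# in the TEOS-10 standard; the example input below is an assumed typical value.

UPS_G_PER_KG = 35.16504 / 35.0   # g/kg per unit of practical salinity

def reference_salinity(practical_salinity: float) -> float:
    """Return Reference Salinity S_R (g/kg) given Practical Salinity S_P."""
    return UPS_G_PER_KG * practical_salinity

print(f"{reference_salinity(35.0):.3f} g/kg")  # about 35.165 g/kg for S_P = 35
```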
Density
The density of surface seawater ranges from about 1020 to 1029 kg/m3, depending on the temperature and salinity. At a temperature of 25 °C, the salinity of 35 g/kg and 1 atm pressure, the density of seawater is 1023.6 kg/m3. Deep in the ocean, under high pressure, seawater can reach a density of 1050 kg/m3 or higher. The density of seawater also changes with salinity. Brines generated by seawater desalination plants can have salinities up to 120 g/kg. The density of typical seawater brine of 120 g/kg salinity at 25 °C and atmospheric pressure is 1088 kg/m3.
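The density values above can be roughly reproduced with a linear equation of state; the sketch below anchors it at the 1023.6 kg/m3 reference point quoted in the text, while the expansion and contraction coefficients are assumed, teaching-style approximations rather than the full TEOS-10 formulation.

```python
# Rough linear equation of state for surface seawater density (no pressure term).
# The reference point (1023.6 kg/m3 at 25 degC and 35 g/kg) comes from the text;
# ALPHA and BETA are assumed approximate coefficients, so this is only a sketch.

RHO_REF, T_REF, S_REF = 1023.6, 25.0, 35.0   # kg/m3, degC, g/kg
ALPHA = 2.0e-4    # thermal expansion coefficient, 1/degC (assumed)
BETA = 7.6e-4     # haline contraction coefficient, kg/g (assumed)

def surface_density(temp_c: float, salinity_g_per_kg: float) -> float:
    """Approximate surface seawater density in kg/m3."""
    return RHO_REF * (1.0 - ALPHA * (temp_c - T_REF) + BETA * (salinity_g_per_kg - S_REF))

print(round(surface_density(25.0, 35.0), 1))    # 1023.6 by construction
print(round(surface_density(25.0, 120.0), 1))   # ~1090, near the 1088 kg/m3 brine value above
```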
pH value
The pH value at the surface of oceans in pre-industrial time (before 1850) was around 8.2. Since then, it has been decreasing due to a human-caused process called ocean acidification that is related to carbon dioxide emissions: Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05.
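Because pH is a logarithmic scale, the seemingly small drop quoted above corresponds to a sizeable change in hydrogen-ion concentration; the short calculation below uses only the two pH values from the text.

```python
# pH is -log10 of hydrogen-ion activity, so a drop from about 8.15 to 8.05
# implies roughly a 26% increase in hydrogen-ion concentration.

ph_1950, ph_2020 = 8.15, 8.05            # approximate surface-ocean values from the text
relative_increase = 10 ** (ph_1950 - ph_2020) - 1
print(f"Relative increase in [H+]: {relative_increase:.0%}")   # about 26%
```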
The pH value of seawater is naturally as low as 7.8 in deep ocean waters as a result of degradation of organic matter in these waters. It can be as high as 8.4 in surface waters in areas of high biological productivity.
Measurement of pH is complicated by the chemical properties of seawater, and several distinct pH scales exist in chemical oceanography. There is no universally accepted reference pH-scale for seawater and the difference between measurements based on different reference scales may be up to 0.14 units.
Chemical composition
Seawater contains more dissolved ions than all types of freshwater. However, the ratios of solutes differ dramatically. For instance, although seawater contains about 2.8 times more bicarbonate than river water, the percentage of bicarbonate in seawater as a ratio of all dissolved ions is far lower than in river water. Bicarbonate ions constitute 48% of river water solutes but only 0.14% for seawater. Differences like these are due to the varying residence times of seawater solutes; sodium and chloride have very long residence times, while calcium (vital for carbonate formation) tends to precipitate much more quickly. The most abundant dissolved ions in seawater are sodium, chloride, magnesium, sulfate and calcium. Its osmolarity is about 1000 mOsm/L.
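To illustrate why sodium and chloride dominate, the sketch below splits the roughly 35 g of salt in a kilogram of typical seawater across the major ions; the percentage shares used are assumed, rounded reference proportions rather than measured values for any particular sample.

```python
# Rough mass budget of the major ions in 1 kg of seawater at 35 g/kg salinity.
# The fractional shares of total dissolved salt are assumed, rounded reference
# proportions, used only to show the dominance of chloride and sodium.

SALINITY_G_PER_KG = 35.0
APPROX_MASS_FRACTIONS = {   # assumed approximate shares of total salt mass
    "chloride": 0.55,
    "sodium": 0.31,
    "sulfate": 0.08,
    "magnesium": 0.04,
    "calcium": 0.01,
    "potassium": 0.01,
}

for ion, fraction in APPROX_MASS_FRACTIONS.items():
    grams = fraction * SALINITY_G_PER_KG
    print(f"{ion:10s} ~{grams:5.2f} g per kg of seawater")
```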
Small amounts of other substances are found, including amino acids at concentrations of up to 2 micrograms of nitrogen atoms per liter, which are thought to have played a key role in the origin of life.
Microbial components
Research in 1957 by the Scripps Institution of Oceanography sampled water in both pelagic and neritic locations in the Pacific Ocean. Direct microscopic counts and cultures were used, the direct counts in some cases showing up to 10 000 times that obtained from cultures. These differences were attributed to the occurrence of bacteria in aggregates, selective effects of the culture media, and the presence of inactive cells. A marked reduction in bacterial culture numbers was noted below the thermocline, but not by direct microscopic observation. Large numbers of spirilli-like forms were seen by microscope but not under cultivation. The disparity in numbers obtained by the two methods is well known in this and other fields. In the 1990s, improved techniques of detection and identification of microbes by probing just small snippets of DNA, enabled researchers taking part in the Census of Marine Life to identify thousands of previously unknown microbes usually present only in small numbers. This revealed a far greater diversity than previously suspected, so that a litre of seawater may hold more than 20,000 species. Mitchell Sogin from the Marine Biological Laboratory feels that "the number of different kinds of bacteria in the oceans could eclipse five to 10 million."
Bacteria are found at all depths in the water column, as well as in the sediments, some being aerobic, others anaerobic. Most are free-swimming, but some exist as symbionts within other organisms – examples of these being bioluminescent bacteria. Cyanobacteria played an important role in the evolution of ocean processes, enabling the development of stromatolites and oxygen in the atmosphere.
Some bacteria interact with diatoms, and form a critical link in the cycling of silicon in the ocean. One anaerobic species, Thiomargarita namibiensis, plays an important part in the breakdown of hydrogen sulfide eruptions from diatomaceous sediments off the Namibian coast, which are generated by high rates of phytoplankton growth in the Benguela Current upwelling zone, the dead phytoplankton eventually falling to the seafloor.
Bacteria-like Archaea surprised marine microbiologists by their survival and thriving in extreme environments, such as the hydrothermal vents on the ocean floor. Alkalotolerant marine bacteria such as Pseudomonas and Vibrio spp. survive in a pH range of 7.3 to 10.6, while some species will grow only at pH 10 to 10.6. Archaea also exist in pelagic waters and may constitute as much as half the ocean's biomass, clearly playing an important part in oceanic processes. In 2000 sediments from the ocean floor revealed a species of Archaea that breaks down methane, an important greenhouse gas and a major contributor to atmospheric warming. Some bacteria break down the rocks of the sea floor, influencing seawater chemistry. Oil spills, and runoff containing human sewage and chemical pollutants have a marked effect on microbial life in the vicinity, as well as harbouring pathogens and toxins affecting all forms of marine life. The protist dinoflagellates may at certain times undergo population explosions called blooms or red tides, often after human-caused pollution. The process may produce metabolites known as biotoxins, which move along the ocean food chain, tainting higher-order animal consumers.
Pandoravirus salinus, a species of very large virus, with a genome much larger than that of any other virus species, was discovered in 2013. Like the other very large viruses Mimivirus and Megavirus, Pandoravirus infects amoebas, but its genome, containing 1.9 to 2.5 megabases of DNA, is twice as large as that of Megavirus, and it differs greatly from the other large viruses in appearance and in genome structure.
In 2013 researchers from Aberdeen University announced that they were starting a hunt for undiscovered chemicals in organisms that have evolved in deep sea trenches, hoping to find "the next generation" of antibiotics, anticipating an "antibiotic apocalypse" with a dearth of new infection-fighting drugs. The EU-funded research will start in the Atacama Trench and then move on to search trenches off New Zealand and Antarctica.
The ocean has a long history of human waste disposal on the assumption that its vast size makes it capable of absorbing and diluting all noxious material.
While this may be true on a small scale, the large amounts of sewage routinely dumped has damaged many coastal ecosystems, and rendered them life-threatening. Pathogenic viruses and bacteria occur in such waters, such as Escherichia coli, Vibrio cholerae the cause of cholera, hepatitis A, hepatitis E and polio, along with protozoans causing giardiasis and cryptosporidiosis. These pathogens are routinely present in the ballast water of large vessels, and are widely spread when the ballast is discharged.
Other parameters
The speed of sound in seawater is about 1,500 m/s (whereas the speed of sound is usually around 330 m/s in air at roughly 101.3 kPa pressure, 1 atmosphere), and varies with water temperature, salinity, and pressure. The thermal conductivity of seawater is 0.6 W/mK at 25 °C and a salinity of 35 g/kg.
The thermal conductivity decreases with increasing salinity and increases with increasing temperature.
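As an illustration of how these dependences are used in practice, empirical fits such as the nine-term Mackenzie (1981) equation give the sound speed as a function of temperature, salinity and depth. The sketch below is only indicative: the function name is arbitrary and the coefficients are those commonly quoted for that fit.

```python
def mackenzie_sound_speed(T, S, D):
    """Approximate speed of sound in seawater (m/s) from the nine-term
    Mackenzie (1981) empirical fit.

    T: temperature in degrees Celsius, S: salinity in g/kg (parts per
    thousand), D: depth in metres. The fit is intended for roughly
    2-30 degC, 25-40 g/kg and 0-8000 m.
    """
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * D**3)

# Surface water at 25 degC and 35 g/kg evaluates to roughly 1534 m/s,
# consistent with the "about 1,500 m/s" figure quoted above.
print(round(mackenzie_sound_speed(25.0, 35.0, 0.0), 1))
```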
Origin and history
The water in the sea was thought to come from the Earth's volcanoes, starting 4 billion years ago, released by degassing from molten rock. More recent work suggests much of the Earth's water may come from comets.
Scientific theories behind the origins of sea salt started with Sir Edmond Halley in 1715, who proposed that salt and other minerals were carried into the sea by rivers after rainfall washed them out of the ground. Upon reaching the ocean, these salts became concentrated as more salt arrived over time (see Hydrologic cycle). Halley noted that most lakes that do not have ocean outlets (such as the Dead Sea and the Caspian Sea; see endorheic basin) have high salt content. Halley termed this process "continental weathering".
Halley's theory was partly correct. In addition, sodium leached out of the ocean floor when the ocean formed. The presence of salt's other dominant ion, chloride, results from outgassing of chloride (as hydrochloric acid) with other gases from Earth's interior via volcanos and hydrothermal vents. The sodium and chloride ions subsequently became the most abundant constituents of sea salt.
Ocean salinity has been stable for billions of years, most likely as a consequence of a chemical/tectonic system which removes as much salt as is deposited; for instance, sodium and chloride sinks include evaporite deposits, pore-water burial, and reactions with seafloor basalts.
Human impacts
Climate change, rising levels of carbon dioxide in Earth's atmosphere, excess nutrients, and pollution in many forms are altering global oceanic geochemistry. Rates of change for some aspects greatly exceed those in the historical and recent geological record. Major trends include an increasing acidity, reduced subsurface oxygen in both near-shore and pelagic waters, rising coastal nitrogen levels, and widespread increases in mercury and persistent organic pollutants. Most of these perturbations are tied either directly or indirectly to human fossil fuel combustion, fertilizer, and industrial activity. Concentrations are projected to grow in coming decades, with negative impacts on ocean biota and other marine resources.
One of the most striking features of this is ocean acidification, resulting from increased CO2 uptake of the oceans related to higher atmospheric concentration of CO2 and higher temperatures, because it severely affects coral reefs, mollusks, echinoderms and crustaceans (see coral bleaching).
The sea is a means of transportation throughout the world. Every day, large numbers of ships cross the oceans to deliver goods to locations around the world. Shipping allows countries to participate efficiently in international commercial trade, but each ship exhausts emissions that can harm marine life and the air quality of coastal areas. Maritime transport is one of the fastest-growing sources of human-generated greenhouse gas emissions. The emissions released from ships pose significant risks to human health in nearby areas, as the oil and gas released from the operation of merchant ships decrease air quality and cause more pollution both in the seawater and in the surrounding areas.
Another human use of seawater that has been considered is for agriculture. In areas with extensive sand dunes, such as parts of Israel, irrigation with seawater would eliminate substantial costs associated with fresh water where it is not easily accessible. Although salt water is not typically used to grow plants, because the salt accumulates and ruins the surrounding soil, it has proven successful in sand and gravel soils. Large-scale desalination of seawater is another factor that could contribute to the success of farming in dry, desert environments. Among the most successful plants in salt-water agriculture are halophytes, salt-tolerant plants whose cells resist the typically detrimental effects of salt in soil. The endodermis forces a higher level of salt filtration throughout the plant as it allows more water to circulate through the cells. Halophytes irrigated with salt water have been cultivated to grow animal feed for livestock; however, the animals fed these plants consumed more water than those that were not. Although saltwater agriculture is still not recognized or used on a large scale, initial research has shown that it could provide more crops in regions where conventional farming is not usually feasible.
Human consumption
Accidentally consuming small quantities of clean seawater is not harmful, especially if the seawater is taken along with a larger quantity of fresh water. However, drinking seawater to maintain hydration is counterproductive; more water must be excreted to eliminate the salt (via urine) than the amount of water obtained from the seawater itself. In normal circumstances, it would be considered ill-advised to consume large amounts of unfiltered seawater.
The renal system actively regulates the levels of sodium and chloride in the blood within a very narrow range around 9 g/L (0.9% by mass).
In most open waters, concentrations vary somewhat around typical values of about 3.5%, far higher than the body can tolerate and beyond what the kidney can process. A point frequently overlooked in arguments to the contrary, which claim that the kidney can excrete NaCl at Baltic concentrations of 2%, is that the gut cannot absorb water at such concentrations, so there is no benefit in drinking such water. The salinity of Baltic surface water, moreover, is never 2%; it is 0.9% or less, and thus never higher than that of bodily fluids. Drinking seawater temporarily increases the blood's NaCl concentration. This signals the kidney to excrete sodium, but seawater's sodium concentration is above the kidney's maximum concentrating ability. Eventually the blood's sodium concentration rises to toxic levels, removing water from cells and interfering with nerve conduction, ultimately producing fatal seizures and cardiac arrhythmia.
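A rough calculation, using only the figures above (seawater at about 3.5% salt, roughly 35 g per litre, and the claimed maximum urinary NaCl concentration of about 2%, roughly 20 g per litre), illustrates the net loss:

\[
\text{urine required to excrete the salt in 1 L of seawater} \;\ge\; \frac{35\ \mathrm{g}}{20\ \mathrm{g/L}} = 1.75\ \mathrm{L},
\]

so even under that optimistic assumption about the kidney, drinking a litre of seawater would cost the body about 0.75 litres of water.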
Survival manuals consistently advise against drinking seawater. A summary of 163 life raft voyages estimated the risk of death at 39% for those who drank seawater, compared to 3% for those who did not. The effect of seawater intake on rats confirmed the negative effects of drinking seawater when dehydrated.
The temptation to drink seawater was greatest for sailors who had expended their supply of fresh water and were unable to capture enough rainwater for drinking. This frustration was described famously by a line from Samuel Taylor Coleridge's The Rime of the Ancient Mariner:
Although humans cannot survive on seawater in place of normal drinking water, some people claim that up to two cups a day, mixed with fresh water in a 2:3 ratio, produces no ill effect. The French physician Alain Bombard survived an ocean crossing in a small Zodiac rubber boat using mainly raw fish meat, which contains about 40% water (like most living tissues), as well as small amounts of seawater and other provisions harvested from the ocean. His findings were challenged, but an alternative explanation could not be given. In his 1948 book The Kon-Tiki Expedition, Thor Heyerdahl reported drinking seawater mixed with fresh water in a 2:3 ratio during the 1947 expedition. A few years later, another adventurer, William Willis, claimed to have drunk two cups of seawater and one cup of fresh water per day for 70 days without ill effect when he lost part of his water supply.
During the 18th century, Richard Russell advocated the medical use of this practice in the UK, and in the 20th century René Quinton expanded its advocacy to other countries, notably France. Currently, it is widely practiced in Nicaragua and other countries, supposedly taking advantage of the latest medical discoveries.
Purification
Like any other type of raw or contaminated water, seawater can be evaporated or filtered to eliminate salt, germs, and other contaminants that would otherwise prevent it from being considered potable. Most oceangoing vessels desalinate potable water from seawater using processes such as vacuum distillation or multi-stage flash distillation in an evaporator, or, more recently, reverse osmosis. These energy-intensive processes were not usually available during the Age of Sail. Larger sailing warships with large crews, such as Nelson's , were fitted with distilling apparatus in their galleys.
The natural sea salt obtained by evaporating seawater can also be collected and sold as table salt, typically sold separately owing to its unique mineral make-up compared to rock salt or other sources.
A number of regional cuisines across the world traditionally incorporate seawater directly as an ingredient, cooking other ingredients in a diluted solution of filtered seawater as a substitute for conventional dry seasonings. Proponents include world-renowned chefs Ferran Adrià and Quique Dacosta, whose home country of Spain has six different companies sourcing filtered seawater for culinary use. The water is marketed as , "the perfect salt", containing less sodium with what is considered a superior taste. A restaurant run by Joaquín Baeza sources as much as 60,000 litres a month from supplier Mediterranea
Animals such as fish, whales, sea turtles, and seabirds, such as penguins and albatrosses, have adapted to living in a high-saline habitat. For example, sea turtles and saltwater crocodiles remove excess salt from their bodies through their tear ducts.
Mineral extraction
Minerals have been extracted from seawater since ancient times. Currently the four most concentrated metals – Na, Mg, Ca and K – are commercially extracted from seawater. In 2015, 63% of US magnesium production came from seawater and brines. Bromine is also produced from seawater in China and Japan. Lithium extraction from seawater was tried in the 1970s, but the tests were soon abandoned. The idea of extracting uranium from seawater has been considered since at least the 1960s, but only a few grams of uranium were extracted in Japan in the late 1990s. The main issue is not technological feasibility but cost: uranium from other sources currently sells for roughly a third to a fifth of the lowest price achieved by seawater extraction. Similar economics hamper the use of reprocessed uranium and are often cited as making nuclear reprocessing and the manufacture of MOX fuel economically unviable.
The future of mineral and element extractions
In order for seawater mineral and element extractions to take place while taking close consideration of sustainable practices, it is necessary for monitored management systems to be put in place. This requires management of ocean areas and their conditions, environmental planning, structured guidelines to ensure that extractions are controlled, regular assessments of the condition of the sea post-extraction, and constant monitoring. The use of technology, such as underwater drones, can facilitate sustainable extractions. The use of low-carbon infrastructure would also allow for more sustainable extraction processes while reducing the carbon footprint from mineral extractions.
Another practice being considered closely is desalination as a way to achieve a more sustainable water supply from seawater. Although desalination comes with its own environmental concerns, such as costs and resource use, researchers are working to determine more sustainable practices, such as building more productive desalination plants that can handle larger water supplies in areas where such plants were not previously available. Although seawater extraction can benefit society greatly, it is crucial to consider the environmental impact and to ensure that all extractions are conducted in a way that acknowledges the associated risks to the sustainability of seawater ecosystems.
Standard
ASTM International has an international standard for artificial seawater: ASTM D1141-98 (Original Standard ASTM D1141-52). It is used in many research testing labs as a reproducible solution for seawater such as tests on corrosion, oil contamination, and detergency evaluation.
Ecosystems
The minerals found in seawater can also play an important role in the ocean and in its ecosystem's food cycle. For example, the Southern Ocean contributes greatly to the global carbon cycle. Because this body of water does not contain high levels of iron, the deficiency affects the marine life living in its waters. As a result, this ocean cannot produce as much phytoplankton, which limits the first link of the marine food chain. One of the main types of phytoplankton is the diatom, the primary food source of Antarctic krill. As the cycle continues, various larger sea animals feed on Antarctic krill, but since there is a shortage of iron in the initial phytoplankton and diatoms, these larger species also lack iron. The larger sea animals include baleen whales such as the blue whale and fin whale. These whales not only rely on iron for a balanced diet; they also affect the amount of iron that is regenerated back into the ocean, as their excretions contain the absorbed iron, allowing it to be reinserted into the ocean's ecosystem. Overall, a single mineral deficiency, such as that of iron in the Southern Ocean, can spark a significant chain of disturbances within marine ecosystems, which demonstrates the important role seawater plays in the food chain.
To analyse further the dynamic relationship between diatoms, krill, and baleen whales, fecal samples of baleen whales in Antarctic waters were examined. Iron concentrations in the feces were 10 million times higher than those found in Antarctic seawater, and krill remains were found consistently throughout the feces, an indicator that krill forms part of whale diets. Antarctic krill had an average iron level of 174.3 mg/kg dry weight, although the iron content of individual krill varied from 12 to 174 mg/kg dry weight. The average iron concentration of the muscular tissue of blue whales and fin whales was 173 mg/kg dry weight, which demonstrates that these large marine mammals are important to marine ecosystems such as the Southern Ocean. In fact, having more whales in the ocean could increase the amount of iron in seawater through their excretions, which would promote a more productive ecosystem.
Krill and baleen whales act as large iron reservoirs in the seawater of the Southern Ocean. Krill can retain up to 24% of the iron found in surface waters within their range. The process of krill feeding on diatoms releases iron into seawater, highlighting them as an important part of the ocean's iron cycle. The advantageous relationship between krill and baleen whales increases the amount of iron that can be recycled and stored in seawater. A positive feedback loop is created, increasing the overall productivity of marine life in the Southern Ocean.
Organisms of all sizes play a significant role in the balance of marine ecosystems, with both the largest and smallest inhabitants contributing equally to the recycling of nutrients in seawater. Prioritizing the recovery of whale populations, because they boost overall productivity in marine ecosystems and increase iron levels in seawater, would allow for a more balanced and productive ocean system. However, more in-depth study is required to understand the benefits of whale feces as a fertilizer and to provide further insight into iron recycling in the Southern Ocean. Projects on ecosystem management and conservation are vital for advancing knowledge of marine ecology.
Environmental impact and sustainability
As with any mineral extraction practice, there are environmental advantages and disadvantages. Cobalt and lithium are two key metals that can aid more environmentally friendly technologies above ground, such as the batteries that power electric vehicles or wind-power installations. An approach to mining that allows for more sustainability would be to extract these metals from the seafloor. Mining lithium from the seafloor in large quantities could provide a substantial amount of these metals, promoting more environmentally friendly practices and reducing humanity's carbon footprint. Such mining could be successful, but its success would depend on more productive recycling practices above ground.
There are also risks that come with extracting from the seafloor. Many species on the seafloor are long-lived, which means that their populations take more time to reproduce. As with fish harvesting from the seafloor, extracting minerals in large amounts, too quickly and without proper protocols, can disrupt underwater ecosystems. This would have the opposite of the intended effect, preventing mineral extraction from being a long-term sustainable practice and resulting in a shortage of the required metals. Any seawater mineral extraction also risks disrupting the habitat of underwater life that depends on an uninterrupted ecosystem, as disturbances can have significant effects on animal communities.
| Physical sciences | Oceanography | Earth science |
255245 | https://en.wikipedia.org/wiki/Fubini%27s%20theorem | Fubini's theorem | In mathematical analysis, Fubini's theorem characterizes the conditions under which it is possible to compute a double integral by using an iterated integral. It was introduced by Guido Fubini in 1907. The theorem states that if a function is Lebesgue integrable on a rectangle , then one can evaluate the double integral as an iterated integral:
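Writing the rectangle as X × Y, the identity reads, in standard notation,

\[
\int_{X\times Y} f(x,y)\,\mathrm{d}(x,y) \;=\; \int_X\!\left(\int_Y f(x,y)\,\mathrm{d}y\right)\mathrm{d}x \;=\; \int_Y\!\left(\int_X f(x,y)\,\mathrm{d}x\right)\mathrm{d}y .
\]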
This formula is generally not true for the Riemann integral, but it is true if the function is continuous on the rectangle. In multivariable calculus, this weaker result is sometimes also called Fubini's theorem, although it was already known by Leonhard Euler.
Tonelli's theorem, introduced by Leonida Tonelli in 1909, is similar but is applied to a non-negative measurable function rather than to an integrable function over its domain. The Fubini and Tonelli theorems are usually combined and form the Fubini-Tonelli theorem, which gives the conditions under which it is possible to switch the order of integration in an iterated integral.
A related theorem is often called Fubini's theorem for infinite series, although it is due to Alfred Pringsheim. It states that if is a double-indexed sequence of real numbers, and if is absolutely convergent, then
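With the sequence written as a_{m,n}, the conclusion is

\[
\sum_{m=1}^{\infty}\left(\sum_{n=1}^{\infty} a_{m,n}\right) \;=\; \sum_{n=1}^{\infty}\left(\sum_{m=1}^{\infty} a_{m,n}\right) \;=\; \sum_{(m,n)} a_{m,n},
\]

that is, both iterated sums and the sum over all index pairs converge to the same value.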
Although Fubini's theorem for infinite series is a special case of the more general Fubini's theorem, it is not necessarily appropriate to characterize the former as being proven by the latter because the properties of measures needed to prove Fubini's theorem proper, in particular subadditivity of measure, may be proven using Fubini's theorem for infinite series.
History
A special case of Fubini's theorem for continuous functions on the product of closed, bounded subsets of real vector spaces was known to Leonhard Euler in the 18th century. In 1904, Henri Lebesgue extended this result to bounded measurable functions on a product of intervals. Levi conjectured that the theorem could be extended to functions that are integrable rather than bounded and this was proven by Fubini in 1907. In 1909, Leonida Tonelli gave a variation of the Fubini theorem that applies to non-negative functions rather than integrable functions.
Product measures
If and are measure spaces, there are several natural ways to define a product measure on the product .
In the sense of category theory, measurable sets in the product of measure spaces are the elements of the σ-algebra generated by the products , where is measurable in and is measurable in .
A measure μ on X × Y is called a product measure if μ(A × B) = μ1(A)μ2(B) for measurable subsets A ⊂ X and B ⊂ Y and measures μ1 on X and μ2 on Y. In general, there may be many different product measures on X × Y. Fubini's theorem and Tonelli's theorem both require technical conditions to avoid this complication; the most common approach is to assume that all measure spaces are σ-finite, in which case there is a unique product measure on X×Y. There is always a unique maximal product measure on X × Y, where the measure of a measurable set is the inf of the measures of sets containing it that are countable unions of products of measurable sets. The maximal product measure can be constructed by applying Carathéodory's extension theorem to the additive function μ such that μ(A × B) = μ1(A)μ2(B) on the ring of sets generated by products of measurable sets. (Carathéodory's extension theorem gives a measure on a measure space that in general contains more measurable sets than the measure space X × Y, so strictly speaking, the measure should be restricted to the σ-algebra generated by the products A × B of measurable subsets of X and Y.)
The product of two complete measure spaces is not usually complete. For example, the product of the Lebesgue measure on the unit interval I with itself is not the Lebesgue measure on the square I × I. There is a variation of Fubini's theorem for complete measures, which uses the completion of the product of measures rather than the uncompleted product.
For integrable functions
Suppose X and Y are σ-finite measure spaces and suppose that X × Y is given the product measure (which is unique as X and Y are σ-finite). Fubini's theorem states that if f is X × Y integrable, meaning that f is a measurable function and
then
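In standard notation, with μ and ν the measures on X and Y, the integrability hypothesis is

\[
\int_{X\times Y} |f(x,y)|\,\mathrm{d}(\mu\times\nu)(x,y) < \infty ,
\]

and the conclusion is

\[
\int_X\!\left(\int_Y f(x,y)\,\mathrm{d}\nu(y)\right)\mathrm{d}\mu(x) \;=\; \int_Y\!\left(\int_X f(x,y)\,\mathrm{d}\mu(x)\right)\mathrm{d}\nu(y) \;=\; \int_{X\times Y} f(x,y)\,\mathrm{d}(\mu\times\nu)(x,y) .
\]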
The first two integrals are iterated integrals with respect to two measures, respectively, and the third is an integral with respect to the product measure. The partial integrals and need not be defined everywhere, but this does not matter as the points where they are not defined form a set of measure 0.
If the above integral of the absolute value is not finite, then the two iterated integrals may have different values. See below for an illustration of this possibility.
The condition that X and Y are σ-finite is usually harmless because almost all measure spaces for which one wishes to use Fubini's theorem are σ-finite.
Fubini's theorem has some rather technical extensions to the case when X and Y are not assumed to be σ-finite . The main extra complication in this case is that there may be more than one product measure on X×Y. Fubini's theorem continues to hold for the maximal product measure but can fail for other product measures. For example, there is a product measure and a non-negative measurable function f for which the double integral of |f| is zero but the two iterated integrals have different values; see the section on counterexamples below for an example of this. Tonelli's theorem and the Fubini–Tonelli theorem (stated below) can fail on non σ-finite spaces, even for the maximal product measure.
Tonelli's theorem for non-negative measurable functions
Tonelli's theorem, named after Leonida Tonelli, is a successor of Fubini's theorem. The conclusion of Tonelli's theorem is identical to that of Fubini's theorem, but the assumption that the integrand has a finite integral is replaced by the assumption that it is a non-negative measurable function.
Tonelli's theorem states that if X and Y are σ-finite measure spaces and f is a non-negative measurable function on their product, then
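with μ and ν again denoting the measures on X and Y,

\[
\int_X\!\left(\int_Y f(x,y)\,\mathrm{d}\nu(y)\right)\mathrm{d}\mu(x) \;=\; \int_Y\!\left(\int_X f(x,y)\,\mathrm{d}\mu(x)\right)\mathrm{d}\nu(y) \;=\; \int_{X\times Y} f(x,y)\,\mathrm{d}(\mu\times\nu)(x,y) ,
\]

where all three integrals are allowed to be infinite.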
A special case of Tonelli's theorem is the interchange of two summations, where the summands are non-negative for all x and y. The crux of the theorem is that the interchange of the order of summation holds even if the series diverges. In effect, the only way a change in the order of summation can change the sum is when there exist some subsequences that diverge to positive infinity and others that diverge to negative infinity. With all elements non-negative, this does not happen in the stated example.
Without the condition that the measure spaces are σ-finite, all three of these integrals can have different values. Some authors give generalizations of Tonelli's theorem to some measure spaces that are not σ-finite, but these generalizations often add conditions that immediately reduce the problem to the σ-finite case. For example, one could take the σ-algebra on A × B to be that generated by the product of subsets of finite measure, rather than that generated by all products of measurable subsets, though this has the undesirable consequence that the projections from the product to its factors A and B are not measurable. Another way is to add the condition that the support of f is contained in a countable union of products of sets of finite measures. gives some rather technical extensions of Tonelli's theorem to some non σ-finite spaces. None of these generalizations have found any significant applications outside of abstract measure theory, largely because almost all measure spaces of practical interest are σ-finite.
Fubini–Tonelli theorem
Combining Fubini's theorem with Tonelli's theorem gives the Fubini–Tonelli theorem. Often just called Fubini's theorem, it states that if and are σ-finite measure spaces, and if is a measurable function, then
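the three (possibly infinite) integrals of the absolute value agree; in the same standard notation,

\[
\int_X\!\left(\int_Y |f(x,y)|\,\mathrm{d}\nu(y)\right)\mathrm{d}\mu(x) \;=\; \int_Y\!\left(\int_X |f(x,y)|\,\mathrm{d}\mu(x)\right)\mathrm{d}\nu(y) \;=\; \int_{X\times Y} |f(x,y)|\,\mathrm{d}(\mu\times\nu)(x,y) .
\]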
Furthermore, if any one of these integrals is finite, then
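the iterated and double integrals of f itself all agree:

\[
\int_X\!\left(\int_Y f(x,y)\,\mathrm{d}\nu(y)\right)\mathrm{d}\mu(x) \;=\; \int_Y\!\left(\int_X f(x,y)\,\mathrm{d}\mu(x)\right)\mathrm{d}\nu(y) \;=\; \int_{X\times Y} f(x,y)\,\mathrm{d}(\mu\times\nu)(x,y) ,
\]

with all three values now finite.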
The absolute value of in the conditions above can be replaced by either the positive or the negative part of ; these forms include Tonelli's theorem as a special case as the negative part of a non-negative function is zero and so has finite integral. Informally, all these conditions say that the double integral of is well defined, though possibly infinite.
The advantage of the Fubini–Tonelli over Fubini's theorem is that the repeated integrals of may be easier to study than the double integral. As in Fubini's theorem, the single integrals may fail to be defined on a measure 0 set.
For complete measures
The versions of Fubini's and Tonelli's theorems above do not apply to integration on the product of the real line with itself with Lebesgue measure. The problem is that Lebesgue measure on the plane is not the product of Lebesgue measure on the real line with itself, but rather the completion of this product: a product of two complete measure spaces is not in general complete. For this reason, one sometimes uses versions of Fubini's theorem for complete measures: roughly speaking, one replaces all measures with their completions. The various versions of Fubini's theorem are similar to the versions above, with the following minor differences:
Instead of taking a product of two measure spaces, one takes the completion of the product.
If is measurable on the completion of then its restrictions to vertical or horizontal lines may be non-measurable for a measure zero subset of lines, so one has to allow for the possibility that the vertical or horizontal integrals are undefined on a set of measure 0 because they involve integrating non-measurable functions. This makes little difference, because they can already be undefined due to the functions not being integrable.
One generally also assumes that the measures on and are complete, otherwise the two partial integrals along vertical or horizontal lines may be well-defined but not measurable. For example, if is the characteristic function of a product of a measurable set and a non-measurable set contained in a measure 0 set then its single integral is well defined everywhere but non-measurable.
Proofs
Proofs of the Fubini and Tonelli theorems are necessarily somewhat technical, as they have to use a hypothesis related to σ-finiteness. Most proofs involve building up to the full theorems by proving them for increasingly complicated functions, with the steps as follows.
Use the fact that the measure on the product is multiplicative for rectangles to prove the theorems for the characteristic functions of rectangles.
Use the condition that the spaces are σ-finite (or some related condition) to prove the theorem for the characteristic functions of measurable sets. This also covers the case of simple measurable functions (measurable functions taking only a finite number of values).
Use the condition that the functions are measurable to prove the theorems for positive measurable functions by approximating them by simple measurable functions. This proves Tonelli's theorem.
Use the condition that the functions are integrable to write them as the difference of two positive integrable functions and apply Tonelli's theorem to each of these. This proves Fubini's theorem.
Riemann integrals
For Riemann integrals, Fubini's theorem is proven by refining the partitions along the x-axis and y-axis as to create a joint partition of the form , which is a partition over . This is used to show that the double integrals of either order are equal to the integral over .
Counterexamples
The following examples show how Fubini's theorem and Tonelli's theorem can fail if any of their hypotheses are omitted.
Failure of Tonelli's theorem for non σ-finite spaces
Suppose that X is the unit interval with the Lebesgue measurable sets and Lebesgue measure, and Y is the unit interval with all the subsets measurable and the counting measure, so that Y is not σ-finite. If f is the characteristic function of the diagonal of X×Y, then integrating f along X gives the 0 function on Y, but integrating f along Y gives the function 1 on X. So, the two iterated integrals are different. This shows that Tonelli's theorem can fail for spaces that are not σ-finite no matter which product measure is chosen. The measures are both decomposable, showing that Tonelli's theorem fails for decomposable measures (which are slightly more general than σ-finite measures).
Failure of Fubini's theorem for non-maximal product measures
Fubini's theorem holds for spaces even if they are not assumed to be σ-finite provided one uses the maximal product measure. In the example above, for the maximal product measure, the diagonal has infinite measure so the double integral of |f| is infinite, and Fubini's theorem holds vacuously.
However, if we give X×Y the product measure such that the measure of a set is the sum of the Lebesgue measures of its horizontal sections, then the double integral of |f| is zero, but the two iterated integrals still have different values. This gives an example of a product measure where Fubini's theorem fails.
This gives an example of two different product measures on the same product of two measure spaces. For products of two σ-finite measure spaces, there is only one product measure.
Failure of Tonelli's theorem for non-measurable functions
Suppose that X is the first uncountable ordinal, with the finite measure where the measurable sets are either countable (with measure 0) or have countable complement (with measure 1). The (non-measurable) subset E of X×X given by pairs (x, y) with x < y is countable on every horizontal line and has countable complement on every vertical line. If f is the characteristic function of E, then the two iterated integrals of f are defined and have different values 1 and 0. The function f is not measurable. This shows that Tonelli's theorem can fail for non-measurable functions.
Failure of Fubini's theorem for non-measurable functions
A variation of the example above shows that Fubini's theorem can fail for non-measurable functions even if |f| is integrable and both repeated integrals are well defined: if we take f to be 1 on E and –1 on the complement of E, then |f| is integrable on the product with integral 1, and both repeated integrals are well defined, but have different values 1 and –1.
Assuming the continuum hypothesis, one can identify X with the unit interval I, so there is a bounded non-negative function on I×I whose two iterated integrals (using Lebesgue measure) are both defined but unequal. This example was found by .
The stronger versions of Fubini's theorem on a product of two unit intervals with Lebesgue measure, where the function is no longer assumed to be measurable but merely that the two iterated integrals are well defined and exist, are independent of the standard Zermelo–Fraenkel axioms of set theory. The continuum hypothesis and Martin's axiom both imply that there exists a function on the unit square whose iterated integrals are not equal, while showed that it is consistent with ZFC that a strong Fubini-type theorem for [0,1] does hold, and whenever the two iterated integrals exist they are equal. See List of statements undecidable in ZFC.
Failure of Fubini's theorem for non-integrable functions
Fubini's theorem tells us that (for measurable functions on a product of σ-finite measure spaces) if the integral of the absolute value is finite, then the order of integration does not matter; if we integrate first with respect to x and then with respect to y, we get the same result as if we integrate first with respect to y and then with respect to x. The assumption that the integral of the absolute value is finite is "Lebesgue integrability", and without it the two repeated integrals can have different values.
A simple example to show that the repeated integrals can be different in general is to take the two measure spaces to be the positive integers, and to take the function f(x,y) to be 1 if x = y, −1 if x = y + 1, and 0 otherwise. Then the two repeated integrals have different values 0 and 1.
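A short illustrative check of this example in Python (not part of the original argument): because each row and each column of f has at most two nonzero entries, the inner sums can be evaluated exactly, and only the order of the outer summation matters.

```python
def f(x, y):
    # 1 on the diagonal, -1 just below it, 0 elsewhere (x, y positive integers)
    return 1 if x == y else (-1 if x == y + 1 else 0)

# Each row x has nonzero entries only at y = x and y = x - 1, and each
# column y only at x = y and x = y + 1, so the inner sums are finite.
def row_sum(x):          # sum over y first
    return f(x, x) + (f(x, x - 1) if x > 1 else 0)

def col_sum(y):          # sum over x first
    return f(y, y) + f(y + 1, y)

print(sum(row_sum(x) for x in range(1, 1000)))  # 1  (only x = 1 contributes)
print(sum(col_sum(y) for y in range(1, 1000)))  # 0  (every column cancels)
```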
Another example is as follows for the function
The iterated integrals
and
have different values. The corresponding double integral does not converge absolutely (in other words the integral of the absolute value is not finite):
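A standard function used for this purpose, consistent with the surrounding description (whether or not it is the one originally intended here), is

\[
f(x,y) \;=\; \frac{x^2 - y^2}{(x^2 + y^2)^2} \qquad \text{for } 0 < x, y \le 1 ,
\]

for which

\[
\int_0^1\!\left(\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,\mathrm{d}y\right)\mathrm{d}x = \frac{\pi}{4},
\qquad
\int_0^1\!\left(\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,\mathrm{d}x\right)\mathrm{d}y = -\frac{\pi}{4},
\]

while the double integral of the absolute value is infinite.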
Fubini's theorem for products of integrals
Product of two integrals
For the product of two integrals with lower limit zero and a common upper limit we have the following formula:
{| class = "wikitable"
|
|}
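One identity of this kind, consistent with the proof sketched below (which takes antiderivatives through the origin and introduces a second parameter running over the unit interval), is

\[
\left(\int_0^a f(x)\,\mathrm{d}x\right)\left(\int_0^a g(x)\,\mathrm{d}x\right)
\;=\; \int_0^1\!\!\int_0^a x\,\bigl[f(x)\,g(xy) + f(xy)\,g(x)\bigr]\,\mathrm{d}x\,\mathrm{d}y .
\]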
Proof
As primitive functions of the two integrands, take the antiderivatives that pass through the origin:
Therefore, we have
By the product rule, the derivative of the right-hand side is
and by integrating we have:
Thus, from the equation at the beginning we get:
Now, we introduce a second integration parameter for the description of the antiderivatives and :
By insertion, a double integral appears:
Factors that do not depend on the inner integration variable can be moved inside the inner integral:
In the next step, the sum rule is applied to the integrals:
Finally, Fubini's theorem is used to interchange the order of integration:
Calculation examples
Arcsine Integral
The arcsine integral, also called the inverse sine integral, is a function that cannot be represented by elementary functions. However, it does take some elementary values. These values can be determined by integrating its derivative, which is the arcsine divided by the identity function (the so-called cardinalized arcsine). The arcsine integral is exactly the antiderivative of the cardinalized arcsine that vanishes at the origin. To integrate this function, Fubini's theorem serves as a key that unlocks the integral by exchanging the order of the integration parameters. Applied correctly, it leads directly to an antiderivative that can be integrated in an elementary way, shown in cyan in the following chain of equations:
Dirichlet Eta Function
The Dirichlet series defines the Dirichlet Eta Function as follows:
The value η(2) is equal to π²/12 and this can be proven with Fubini's theorem in this way:
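One way to see this, in sketch form: each term 1/n² can be written as a double integral of (xy)^(n−1) over the unit square, and summation and integration can be interchanged, which Fubini's theorem justifies because the double integral of 1/(1 − xy) over the unit square is finite. This gives

\[
\eta(2) \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^2}
\;=\; \int_0^1\!\!\int_0^1 \frac{\mathrm{d}x\,\mathrm{d}y}{1+xy}
\;=\; \int_0^1 \frac{\ln(1+y)}{y}\,\mathrm{d}y ,
\]

and the remaining single integral is the polylogarithmic integral discussed next, with value π²/12.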
The integral of the product of the reciprocal function and the natural logarithm of the successor function is a polylogarithmic integral and cannot be represented by elementary expressions. Fubini's theorem again unlocks this integral in a combinatorial way: a double integration is carried out, with Fubini's theorem applied to an additive combination of rational functions with linear and quadratic denominators:
This way of working out the integral of the cardinalized natural logarithm of the successor function was discovered by James Harper and is described in his paper Another simple proof of 1 + 1/2² + 1/3² + ... = π²/6.
The original antiderivative, shown here in cyan, leads directly to the value of η(2):
Integrals of Complete Elliptic Integrals
The improper integral of the complete elliptic integral of the first kind K over the unit interval is exactly twice the Catalan constant. The antiderivative of that K-integral belongs to the so-called elliptic polylogarithms. Here the Catalan constant is obtained via the arctangent integral, which results from the application of Fubini's theorem:
This time, the expression shown in cyan is not elementary, but it leads directly to the equally non-elementary value of the Catalan constant by way of the arctangent integral, also called the inverse tangent integral.
The same procedure also works for the complete elliptic integral of the second kind E in the following way:
Double execution for the Exponential Integral Function
The Euler–Mascheroni constant emerges as the improper integral from zero to infinity of the product of the negative natural logarithm and the exponential reciprocal. But it is also the improper integral, within the same limits, of the cardinalized difference of the reciprocal of the successor function and the exponential reciprocal:
The agreement of these two integrals can be shown by applying Fubini's theorem twice, with each application leading, via the identity, to an integral of the complementary exponential integral function:
This is how the complementary integral exponential function is defined:
This is the derivative of that function:
First application of Fubini's theorem:
This integral, built from the exponential integral function, leads to the integral of the negative natural logarithm times the exponential reciprocal:
Second application of Fubini's theorem:
The integral of the cardinalized difference described above leads to the same integral of the exponential integral function:
In principle, products of exponential functions and rational functions can be integrated in this way:
Thus, by applying Fubini's theorem twice, the two integrals are shown to be identical.
Gauss curve integral
First, a formula for the square of an integral is set up:
{| class = "wikitable"
|
|}
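Taking both factors equal in the product identity given earlier yields a formula of this type:

\[
\left(\int_0^a f(x)\,\mathrm{d}x\right)^{2} \;=\; 2\int_0^1\!\!\int_0^a x\,f(x)\,f(xy)\,\mathrm{d}x\,\mathrm{d}y .
\]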
The corresponding chain of equations can then be set up:
For the integral of the Gauss curve this value can be generated:
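With f(x) = e^(−x²) and the upper limit taken to infinity, the double integral becomes elementary; a sketch of the computation:

\[
\left(\int_0^{\infty} e^{-x^2}\,\mathrm{d}x\right)^{2}
= 2\int_0^1\!\!\int_0^{\infty} x\,e^{-x^2(1+y^2)}\,\mathrm{d}x\,\mathrm{d}y
= \int_0^1 \frac{\mathrm{d}y}{1+y^2}
= \frac{\pi}{4} ,
\]

so the Gauss curve integral is \( \int_0^{\infty} e^{-x^2}\,\mathrm{d}x = \tfrac{\sqrt{\pi}}{2} \).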
Dilogarithm of one
Another formula for the square of an integral is now set up:
{| class = "wikitable"
|
|}
The following chain of equations then applies as a new example:
For the dilogarithm of one, this value appears:
In this way the Basel problem can be solved.
Legendre's relation
In this next example, the more general form of the identity is used again as a template:
{| class = "wikitable"
|
|}
The following integrals can be computed using the incomplete elliptic integrals of the first and second kind as antiderivatives, and they have values that can be expressed with complete elliptic integrals:
Inserting these two integrals into the above form gives:
For the Lemniscatic special case of Legendre's relation, this result emerges:
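For reference, Legendre's relation for complementary moduli k and k′ = √(1 − k²) states that

\[
E(k)\,K(k') + E(k')\,K(k) - K(k)\,K(k') \;=\; \frac{\pi}{2} ,
\]

and in the lemniscatic case k = k′ = 1/√2 this reduces to

\[
2\,E\!\left(\tfrac{1}{\sqrt{2}}\right)K\!\left(\tfrac{1}{\sqrt{2}}\right) - K\!\left(\tfrac{1}{\sqrt{2}}\right)^{2} \;=\; \frac{\pi}{2} .
\]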
| Mathematics | Multivariable and vector calculus | null |
255297 | https://en.wikipedia.org/wiki/Coronal%20mass%20ejection | Coronal mass ejection | A coronal mass ejection (CME) is a significant ejection of plasma mass from the Sun's corona into the heliosphere. CMEs are often associated with solar flares and other forms of solar activity, but a broadly accepted theoretical understanding of these relationships has not been established.
If a CME enters interplanetary space, it is referred to as an interplanetary coronal mass ejection (ICME). ICMEs are capable of reaching and colliding with Earth's magnetosphere, where they can cause geomagnetic storms, aurorae, and in rare cases damage to electrical power grids. The largest recorded geomagnetic perturbation, resulting presumably from a CME, was the solar storm of 1859. Also known as the Carrington Event, it disabled parts of the newly created United States telegraph network, starting fires and electrically shocking some telegraph operators.
Near solar maxima, the Sun produces about three CMEs every day, whereas near solar minima, there is about one CME every five days.
Physical description
CMEs release large quantities of matter from the Sun's atmosphere into the solar wind and interplanetary space. The ejected matter is a plasma consisting primarily of electrons and protons embedded within its magnetic field. This magnetic field is commonly in the form of a flux rope, a helical magnetic field with changing pitch angles.
The average mass ejected is . However, the estimated mass values for CMEs are only lower limits, because coronagraph measurements provide only two-dimensional data.
CMEs erupt from strongly twisted or sheared, large-scale magnetic field structures in the corona that are kept in equilibrium by overlying magnetic fields.
Origin
CMEs erupt from the lower corona, where processes associated with the local magnetic field dominate over other processes. As a result, the coronal magnetic field plays an important role in the formation and eruption of CMEs. Pre-eruption structures originate from magnetic fields that are initially generated in the Sun's interior by the solar dynamo. These magnetic fields rise to the Sun's surface—the photosphere—where they may form localized areas of highly concentrated magnetic flux and expand into the lower solar atmosphere forming active regions. At the photosphere, active region magnetic flux is often distributed in a dipole configuration, that is, with two adjacent areas of opposite magnetic polarity across which the magnetic field arches. Over time, the concentrated magnetic flux cancels and disperses across the Sun's surface, merging with the remnants of past active regions to become a part of the quiet Sun. Pre-eruption CME structures can be present at different stages of the growth and decay of these regions, but they always lie above polarity inversion lines (PIL), or boundaries across which the sign of the vertical component of the magnetic field reverses. PILs may exist in, around, and between active regions or form in the quiet Sun between active region remnants. More complex magnetic flux configurations, such as quadrupolar fields, can also host pre-eruption structures.
In order for pre-eruption CME structures to develop, large amounts of energy must be stored and be readily available to be released. As a result of the dominance of magnetic field processes in the lower corona, the majority of the energy must be stored as magnetic energy. The magnetic energy that is freely available to be released from a pre-eruption structure, referred to as the magnetic free energy or nonpotential energy of the structure, is the excess magnetic energy stored by the structure's magnetic configuration relative to that stored by the lowest-energy magnetic configuration the underlying photospheric magnetic flux distribution could theoretically take, a potential field state. Emerging magnetic flux and photospheric motions continuously shifting the footpoints of a structure can result in magnetic free energy building up in the coronal magnetic field as twist or shear. Some pre-eruption structures, referred to as sigmoids, take on an S or reverse-S shape as shear accumulates. This has been observed in active region coronal loops and filaments, with forward-S sigmoids more common in the southern hemisphere and reverse-S sigmoids more common in the northern hemisphere.
Magnetic flux ropes—twisted and sheared magnetic flux tubes that can carry electric current and magnetic free energy—are an integral part of the post-eruption CME structure; however, whether flux ropes are always present in the pre-eruption structure or whether they are created during the eruption from a strongly sheared core field (see ) is subject to ongoing debate.
Some pre-eruption structures have been observed to support prominences, also known as filaments, composed of much cooler material than the surrounding coronal plasma. Prominences are embedded in magnetic field structures referred to as prominence cavities, or filament channels, which may constitute part of a pre-eruption structure (see ).
Early evolution
The early evolution of a CME involves its initiation from a pre-eruption structure in the corona and the acceleration that follows. The processes involved in the early evolution of CMEs are poorly understood due to a lack of observational evidence.
Initiation
CME initiation occurs when a pre-eruption structure in an equilibrium state enters a nonequilibrium or metastable state where energy can be released to drive an eruption. The specific processes involved in CME initiation are debated, and various models have been proposed to explain this phenomenon based on physical speculation. Furthermore, different CMEs may be initiated by different processes.
It is unknown whether a magnetic flux rope exists prior to initiation, in which case either ideal or non-ideal magnetohydrodynamic (MHD) processes drive the expulsion of this flux rope, or whether a flux rope is created during the eruption by a non-ideal process. Under ideal MHD, initiation may involve ideal instabilities or a catastrophic loss of equilibrium along an existing flux rope:
The kink instability occurs when a magnetic flux rope is twisted to a critical point, whereupon the flux rope is unstable to further twisting.
The torus instability occurs when the magnetic field strength of an arcade overlying a flux rope decreases rapidly with height. When this decrease is sufficiently rapid, the flux rope is unstable to further expansion.
The catastrophe model involves a catastrophic loss of equilibrium.
Under non-ideal MHD, initiation mechanisms may involve resistive instabilities or magnetic reconnection:
Tether-cutting, or flux cancellation, occurs in strongly sheared arcades when nearly antiparallel field lines on opposite sides of the arcade form a current sheet and reconnect with each other. This can form a helical flux rope or cause a flux rope already present to grow and its axis to rise.
The magnetic breakout model consists of an initial quadrupolar magnetic topology with a null point above a central flux system. As shearing motions cause this central flux system to rise, the null point forms a current sheet and the core flux system reconnects with the overlying magnetic field.
Initial acceleration
Following initiation, CMEs are subject to different forces that either assist or inhibit their rise through the lower corona. Downward magnetic tension force exerted by the strapping magnetic field as it is stretched and, to a lesser extent, the gravitational pull of the Sun oppose movement of the core CME structure. In order for sufficient acceleration to be provided, past models have involved magnetic reconnection below the core field or an ideal MHD process, such as instability or acceleration from the solar wind.
In the majority of CME events, acceleration is provided by magnetic reconnection cutting the strapping field's connections to the photosphere from below the core and outflow from this reconnection pushing the core upward. When the initial rise occurs, the opposite sides of the strapping field below the rising core are oriented nearly antiparallel to one another and are brought together to form a current sheet above the PIL. Fast magnetic reconnection can be excited along the current sheet by microscopic instabilities, resulting in the rapid release of stored magnetic energy as kinetic, thermal, and nonthermal energy. The restructuring of the magnetic field cuts the strapping field's connections to the photosphere thereby decreasing the downward magnetic tension force while the upward reconnection outflow pushes the CME structure upwards. A positive feedback loop results as the core is pushed upwards and the sides of the strapping field are brought in closer and closer contact to produce additional magnetic reconnection and rise. While upward reconnection outflow accelerates the core, simultaneous downward outflow is sometimes responsible for other phenomena associated with CMEs (see ).
In cases where significant magnetic reconnection does not occur, ideal MHD instabilities or the dragging force from the solar wind can theoretically accelerate a CME. However, if sufficient acceleration is not provided, the CME structure may fall back in what is referred to as a failed or confined eruption.
Coronal signatures
The early evolution of CMEs is frequently associated with other solar phenomena observed in the low corona, such as eruptive prominences and solar flares. CMEs that have no observed signatures are sometimes referred to as stealth CMEs.
Prominences embedded in some CME pre-eruption structures may erupt with the CME as eruptive prominences. Eruptive prominences are associated with at least 70% of all CMEs and are often embedded within the bases of CME flux ropes. When observed in white-light coronagraphs, the eruptive prominence material, if present, corresponds to the observed bright core of dense material.
When magnetic reconnection is excited along a current sheet of a rising CME core structure, the downward reconnection outflows can collide with loops below to form a cusp-shaped, two-ribbon solar flare.
CME eruptions can also produce EUV waves, also known as EIT waves after the Extreme ultraviolet Imaging Telescope or as Moreton waves when observed in the chromosphere, which are fast-mode MHD wave fronts that emanate from the site of the CME.
A coronal dimming is a localized decrease in extreme ultraviolet and soft X-ray emissions in the lower corona. When associated with a CME, coronal dimmings are thought to occur predominantly due to a decrease in plasma density caused by mass outflows during the expansion of the associated CME. They often occur either in pairs located within regions of opposite magnetic polarity, a core dimming, or in a more widespread area, a secondary dimming. Core dimmings are interpreted as the footpoint locations of the erupting flux rope; secondary dimmings are interpreted as the result of the expansion of the overall CME structure and are generally more diffuse and shallow. Coronal dimmings were first reported in 1974, and, because their appearance resembles that of coronal holes, they were sometimes referred to as transient coronal holes.
Propagation
Observations of CMEs are typically through white-light coronagraphs which measure the Thomson scattering of sunlight off of free electrons within the CME plasma. An observed CME may have any or all of three distinctive features: a bright core, a dark surrounding cavity, and a bright leading edge. The bright core is usually interpreted as a prominence embedded in the CME (see ) with the leading edge as an area of compressed plasma ahead of the CME flux rope. However, some CMEs exhibit more complex geometry.
From white-light coronagraph observations, CMEs have been measured to reach speeds in the plane-of-sky ranging from with an average speed of . Observations of CME speeds indicate that CMEs tend to accelerate or decelerate until they reach the speed of the solar wind ().
When observed in interplanetary space at distances greater than about away from the Sun, CMEs are sometimes referred to as interplanetary CMEs, or ICMEs.
Interactions in the heliosphere
As CMEs propagate through the heliosphere, they may interact with the surrounding solar wind, the interplanetary magnetic field, and other CMEs and celestial bodies.
CMEs can experience aerodynamic drag forces that act to bring them to kinematic equilibrium with the solar wind. As a consequence, CMEs faster than the solar wind tend to slow down whereas CMEs slower than the solar wind tend to speed up until their speed matches that of the solar wind.
How CMEs evolve as they propagate through the heliosphere is poorly understood. Models of their evolution have been proposed that are accurate to some CMEs but not others. Aerodynamic drag and snowplow models assume that ICME evolution is governed by its interactions with the solar wind. Aerodynamic drag alone may be able to account for the evolution of some ICMEs, but not all of them.
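As a rough illustration of the aerodynamic-drag picture, drag-based models (such as the widely used DBM of Vršnak and collaborators) evolve the ICME speed toward the ambient solar-wind speed with a quadratic drag law. The sketch below integrates such a law numerically; the parameter values are only indicative and are not fitted to any particular event.

```python
# Minimal drag-based-model sketch: dv/dt = -gamma * (v - w) * |v - w|, dr/dt = v.
AU = 1.496e8          # km
R_SUN = 6.957e5       # km

def propagate(v0_kms, r0_km=20 * R_SUN, w=400.0, gamma=0.2e-7, dt=60.0):
    """Integrate an ICME from r0 to 1 AU; returns (transit time in days,
    arrival speed in km/s). gamma is the drag parameter in 1/km, w the
    ambient solar-wind speed in km/s, dt the time step in seconds."""
    r, v, t = r0_km, v0_kms, 0.0
    while r < AU:
        dv = -gamma * (v - w) * abs(v - w)   # change in speed per second
        v += dv * dt
        r += v * dt
        t += dt
    return t / 86400.0, v

print(propagate(1000.0))  # fast CME: decelerates toward ~400 km/s
print(propagate(300.0))   # slow CME: accelerates toward ~400 km/s
```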
CMEs typically reach Earth one to five days after leaving the Sun. The strongest deceleration or acceleration occurs close to the Sun, but it can continue even beyond Earth orbit (1 AU), which was observed using measurements at Mars and by the Ulysses spacecraft. ICMEs faster than about eventually drive a shock wave. This happens when the speed of the ICME in the frame of reference moving with the solar wind is faster than the local fast magnetosonic speed. Such shocks have been observed directly by coronagraphs in the corona, and are related to type II radio bursts. They are thought to form sometimes as low as (solar radii). They are also closely linked with the acceleration of solar energetic particles.
As ICMEs propagate through the interplanetary medium, they may collide with other ICMEs in what is referred to as CME–CME interaction or CME cannibalism.
During such CME–CME interactions, the first CME may clear the way for the second, and a collision between two CMEs can lead to more severe impacts on Earth. Historical records show that the most extreme space weather events have involved multiple successive CMEs. For example, the famous Carrington event in 1859 comprised several eruptions and caused auroras to be visible at low latitudes for four nights. Similarly, the solar storm of September 1770 lasted for nearly nine days and caused repeated low-latitude auroras. The interaction of two moderate CMEs between the Sun and Earth can create extreme conditions at Earth. Recent studies have shown that the magnetic structure of a CME, in particular its chirality (handedness), can greatly affect how it interacts with Earth's magnetic field. This interaction can result in the conservation or loss of magnetic flux, particularly its southward magnetic field component, through magnetic reconnection with the interplanetary magnetic field.
Morphology
In the solar wind, CMEs manifest as magnetic clouds. They have been defined as regions of enhanced magnetic field strength, smooth rotation of the magnetic field vector, and low proton temperature. The association between CMEs and magnetic clouds was made by Burlaga et al. in 1982 when a magnetic cloud was observed by Helios-1 two days after being observed by SMM. However, because observations near Earth are usually done by a single spacecraft, many CMEs are not seen as being associated with magnetic clouds. The typical structure observed for a fast CME by a satellite such as ACE is a fast-mode shock wave followed by a dense (and hot) sheath of plasma (the downstream region of the shock) and a magnetic cloud.
Other signatures of magnetic clouds are now used in addition to the one described above, among them bidirectional superthermal electrons and unusual charge states or abundances of iron, helium, carbon, and/or oxygen.
The typical time for a magnetic cloud to move past a satellite at the L1 point is 1 day corresponding to a radius of 0.15 AU with a typical speed of and magnetic field strength of 20 nT.
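As a rough consistency check, taking the typical transit speed to be of the order of the ambient solar-wind speed (a few hundred km/s, an assumption made here only for illustration), a structure of radius 0.15 AU, i.e. a diameter of about 4.5 × 10⁷ km, sweeps past a fixed spacecraft in roughly

\[
\frac{4.5\times 10^{7}\ \mathrm{km}}{500\ \mathrm{km/s}} \;\approx\; 9\times 10^{4}\ \mathrm{s} \;\approx\; 1\ \text{day} .
\]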
Solar cycle
The frequency of ejections depends on the phase of the solar cycle: from about 0.2 per day near the solar minimum to 3.5 per day near the solar maximum. However, the peak CME occurrence rate is often 6–12 months after sunspot number reaches its maximum.
Impact on Earth
Only a very small fraction of CMEs are directed toward, and reach, the Earth. A CME arriving at Earth results in a shock wave causing a geomagnetic storm that may disrupt Earth's magnetosphere, compressing it on the day side and extending the night-side magnetic tail. When the magnetosphere reconnects on the nightside, it releases power on the order of terawatts directed back toward Earth's upper atmosphere. This can result in events such as the March 1989 geomagnetic storm.
CMEs, along with solar flares, can disrupt radio transmissions and cause damage to satellites and electrical transmission line facilities, resulting in potentially massive and long-lasting power outages.
Shocks in the upper corona driven by CMEs can also accelerate solar energetic particles toward the Earth resulting in gradual solar particle events. Interactions between these energetic particles and the Earth can cause an increase in the number of free electrons in the ionosphere, especially in the high-latitude polar regions, enhancing radio wave absorption, especially within the D-region of the ionosphere, leading to polar cap absorption events.
The interaction of CMEs with the Earth's magnetosphere leads to dramatic changes in the outer radiation belt, with either a decrease or an increase of relativistic particle fluxes by orders of magnitude. The changes in radiation belt particle fluxes are caused by acceleration, scattering and radial diffusion of relativistic electrons, due to the interactions with various plasma waves.
Halo coronal mass ejections
A halo coronal mass ejection is a CME which appears in white-light coronagraph observations as an expanding ring completely surrounding the occulting disk of the coronagraph. Halo CMEs are interpreted as CMEs directed toward or away from the observing coronagraph. When the expanding ring does not completely surround the occulting disk, but has an angular width of more than 120 degrees around the disk, the CME is referred to as a partial halo coronal mass ejection. Partial and full halo CMEs have been found to make up about 10% of all CMEs with about 4% of all CMEs being full halo CMEs. Frontside, or Earth-direct, halo CMEs are often associated with Earth-impacting CMEs; however, not all frontside halo CMEs impact Earth.
Future risk
In 2019, researchers used an alternative method (Weibull distribution) and estimated the chance of Earth being hit by a Carrington-class storm in the next decade to be between 0.46% and 1.88%.
History
First traces
CMEs have been observed indirectly for thousands of years via aurora. Other indirect observations that predated the discovery of CMEs were through measurements of geomagnetic perturbations, radioheliograph measurements of solar radio bursts, and in-situ measurements of interplanetary shocks.
The largest recorded geomagnetic perturbation, resulting presumably from a CME, coincided with the first-observed solar flare on 1 September 1859. The resulting solar storm of 1859 is referred to as the Carrington Event. The flare and the associated sunspots were visible to the naked eye, and the flare was independently observed by English astronomers R. C. Carrington and R. Hodgson. At around the same time as the flare, a magnetometer at Kew Gardens recorded what would become known as a magnetic crochet, a magnetic field detected by ground-based magnetometers induced by a perturbation of Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays in 1895 and the recognition of the ionosphere in 1902.
About 18 hours after the flare, further geomagnetic perturbations were recorded by multiple magnetometers as a part of a geomagnetic storm. The storm disabled parts of the recently created US telegraph network, starting fires and shocking some telegraph operators.
First optical observations
The first optical observation of a CME was made on 14 December 1971 using the coronagraph of Orbiting Solar Observatory 7 (OSO-7). It was first described by R. Tousey of the Naval Research Laboratory in a research paper published in 1973. The discovery image (256 × 256 pixels) was collected on a Secondary Electron Conduction (SEC) vidicon tube, transferred to the instrument computer after being digitized to 7 bits. Then it was compressed using a simple run-length encoding scheme and sent down to the ground at 200 bit/s. A full, uncompressed image would take 44 minutes to send down to the ground. The telemetry was sent to ground support equipment (GSE) which built up the image onto Polaroid print. David Roberts, an electronics technician working for NRL who had been responsible for the testing of the SEC-vidicon camera, was in charge of day-to-day operations. He thought that his camera had failed because certain areas of the image were much brighter than normal. But on the next image the bright area had moved away from the Sun and he immediately recognized this as being unusual and took it to his supervisor, Dr. Guenter Brueckner, and then to the solar physics branch head, Dr. Tousey. Earlier observations of coronal transients or even phenomena observed visually during solar eclipses are now understood as essentially the same thing.
Instruments
On 1 November 1994, NASA launched the Wind spacecraft as a solar wind monitor to orbit Earth's Lagrange point as the interplanetary component of the Global Geospace Science (GGS) Program within the International Solar Terrestrial Physics (ISTP) program. The spacecraft is a spin axis-stabilized satellite that carries eight instruments measuring solar wind particles from thermal to greater than MeV energies, electromagnetic radiation from DC to 13 MHz radio waves, and gamma-rays.
On 25 October 2006, NASA launched STEREO, two near-identical spacecraft which, from widely separated points in their orbits, are able to produce the first stereoscopic images of CMEs and other solar activity measurements. The spacecraft orbit the Sun at distances similar to that of Earth, with one slightly ahead of Earth and the other trailing. Their separation gradually increased so that after four years they were almost diametrically opposite each other in orbit.
Notable coronal mass ejections
On 9 March 1989, a CME occurred, which struck Earth four days later on 13 March. It caused power failures in Quebec, Canada and short-wave radio interference.
On 23 July 2012, a massive, and potentially damaging, solar superstorm (solar flare, CME, solar EMP) occurred but missed Earth, an event that many scientists consider to be a Carrington-class event.
On 14 October 2014, an ICME was photographed by the Sun-watching spacecraft PROBA2 (ESA), Solar and Heliospheric Observatory (ESA/NASA), and Solar Dynamics Observatory (NASA) as it left the Sun, and STEREO-A observed its effects directly at . ESA's Venus Express gathered data. The CME reached Mars on 17 October and was observed by the Mars Express, MAVEN, Mars Odyssey, and Mars Science Laboratory missions. On 22 October, at , it reached comet 67P/Churyumov–Gerasimenko, perfectly aligned with the Sun and Mars, and was observed by Rosetta. On 12 November, at , it was observed by Cassini at Saturn. The New Horizons spacecraft was at approaching Pluto when the CME passed three months after the initial eruption, and it may be detectable in the data. Voyager 2 has data that can be interpreted as the passing of the CME, 17 months after. The Curiosity rover's RAD instrument, Mars Odyssey, Rosetta and Cassini showed a sudden decrease in galactic cosmic rays (Forbush decrease) as the CME's protective bubble passed by.
Stellar coronal mass ejections
There have been a small number of CMEs observed on other stars, all of which have been found on red dwarfs. These have been detected mainly by spectroscopy, most often by studying Balmer lines: the material ejected toward the observer causes asymmetry in the blue wing of the line profiles due to Doppler shift. This enhancement can be seen in absorption when it occurs on the stellar disc (the material is cooler than its surroundings), and in emission when it is outside the disc. The observed projected velocities of CMEs range from ≈. There are few stellar CME candidates in shorter wavelengths in UV or X-ray data. Compared to activity on the Sun, CME activity on other stars seems to be far less common. The low number of stellar CME detections can be caused by lower intrinsic CME rates compared to the models (e.g. due to magnetic suppression), projection effects, or overestimated Balmer signatures because of the unknown plasma parameters of the stellar CMEs.
| Physical sciences | Solar System | Astronomy |
255313 | https://en.wikipedia.org/wiki/Saffir%E2%80%93Simpson%20scale | Saffir–Simpson scale | The Saffir–Simpson hurricane wind scale (SSHWS) classifies hurricanes—which in the Western Hemisphere are tropical cyclones that exceed the intensities of tropical depressions and tropical storms—into five categories distinguished by the intensities of their sustained winds. This measuring system was formerly known as the Saffir–Simpson hurricane scale, or SSHS.
To be classified as a hurricane, a tropical cyclone must have one-minute-average maximum sustained winds at above the surface of at least 74 mph (64 kn, 119 km/h; Category 1). The highest classification in the scale, Category 5, consists of storms with sustained winds of at least 157 mph (137 kn, 252 km/h). The classifications can provide some indication of the potential damage and flooding a hurricane will cause upon landfall.
The Saffir–Simpson hurricane wind scale is based on the highest wind speed averaged over a one-minute interval 10 m above the surface. Although the scale shows wind speeds in continuous speed ranges, the US National Hurricane Center and the Central Pacific Hurricane Center assign tropical cyclone intensities in 5-knot (kn) increments (e.g., 100, 105, 110, 115 kn, etc.) because of the inherent uncertainty in estimating the strength of tropical cyclones. Wind speeds in knots are then converted to other units and rounded to the nearest 5 mph or 5 km/h.
The Saffir–Simpson hurricane wind scale is used officially only to describe hurricanes that form in the Atlantic Ocean and northern Pacific Ocean east of the International Date Line. Other areas use different scales to label these storms, which are called cyclones or typhoons, depending on the area. These areas (except the JTWC) use three-minute or ten-minute averaged winds to determine the maximum sustained wind speed, an important difference that frustrates direct comparison between storms: one-minute maximum winds measured on the Saffir–Simpson hurricane wind scale are usually about 14% higher than the corresponding ten-minute values, and ten-minute values are usually about 12% lower.
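To illustrate the size of that averaging difference, the snippet below applies the rule-of-thumb factor implied by the percentages quoted above (ten-minute winds roughly 88% of one-minute winds); this is only an approximate conversion for illustration, not an official one.

# Approximate conversion between 1-minute and 10-minute averaged sustained winds,
# using the ~0.88 rule-of-thumb factor implied by the 14%/12% figures above.
def one_minute_from_ten_minute(v10):
    return v10 / 0.88     # about 14% higher

def ten_minute_from_one_minute(v1):
    return v1 * 0.88      # about 12% lower

print(round(one_minute_from_ten_minute(100)))   # 10-min 100 kn -> ~114 kn (1-min)
print(round(ten_minute_from_one_minute(100)))   # 1-min 100 kn -> 88 kn (10-min)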
There is some criticism of the SSHWS for not accounting for rain, storm surge, and other important factors, but SSHWS defenders say that part of the goal of SSHWS is to be straightforward and simple to understand. There have been proposals to add higher categories to the scale, which would set an upper bound on Category 5, but none have been adopted.
History
In 1971, the scale was developed by civil engineer Herbert Saffir and meteorologist Robert Simpson, who at the time was director of the U.S. National Hurricane Center (NHC). In 1973, the scale was introduced to the general public, and saw widespread use after Neil Frank replaced Simpson at the helm of the NHC in 1974.
The scale was created by Herbert Saffir, a structural engineer, who in 1969 was commissioned by the United Nations to study low-cost housing in hurricane-prone areas. In 1971, while conducting the study, Saffir realized there was no simple scale for describing the likely effects of a hurricane. By using subjective damage-based scales for earthquake intensity like the Modified Mercalli intensity scale or MSK-64 intensity scale and the objective numerical gradation method of the Richter scale as models, he proposed a simplified 1–5 grading scale as a guide for areas that do not have hurricane building codes. The grades were based on two main factors: objective wind gust speeds sustaining for 2–3 seconds at an elevation of 9.2 meters, and subjective levels of structural damage.
Saffir gave the proposed scale to the NHC for their use, where Simpson changed the terminology from "grade" to "category", organized them by sustained wind speeds of 1 minute duration, and added storm surge height ranges, adding barometric pressure ranges later on. In 1975, the Saffir-Simpson Scale was first published publicly.
In 2009, the NHC eliminated pressure and storm surge ranges from the categories, transforming it into a pure wind scale, called the Saffir–Simpson Hurricane Wind Scale (Experimental) [SSHWS]. The updated scale became operational on May 15, 2010. The scale excludes flood ranges, storm surge estimations, rainfall, and location, which means a Category 2 hurricane that hits a major city will likely do far more cumulative damage than a Category 5 hurricane that hits a rural area. The agency cited examples of hurricanes as reasons for removing "scientifically inaccurate" information, including Hurricane Katrina (2005) and Hurricane Ike (2008), which both had stronger than estimated storm surges, and Hurricane Charley (2004), which had weaker than estimated storm surge. Since being removed from the Saffir–Simpson hurricane wind scale, storm surge prediction and modeling is handled by computer numerical models such as ADCIRC and SLOSH.
In 2012, the NHC extended the wind speed range for Category 4 by 1 mph in both directions, to 130–156 mph, with corresponding changes in the other units (113–136 kn, 209–251 km/h), instead of 131–155 mph (114–135 kn, 210–249 km/h). The NHC and the Central Pacific Hurricane Center assign tropical cyclone intensities in 5 knot increments, and then convert to mph and km/h with a similar rounding for other reports. So an intensity of 115 kn is rated Category 4, but the conversion to miles per hour (132.3 mph) would round down to 130 mph, making it appear to be a Category 3 storm. Likewise, an intensity of 135 kn (~155 mph, and thus Category 4) is 250.02 km/h, which, according to the definition used before the change would be Category 5.
To resolve these issues, the NHC had been obliged to incorrectly report storms with wind speeds of 115 kn as 135 mph, and 135 kn as 245 km/h. The change in definition allows storms of 115 kn to be correctly rounded down to 130 mph, and storms of 135 kn to be correctly reported as 250 km/h, and still qualify as Category 4. Since the NHC had previously rounded incorrectly to keep storms in Category 4 in each unit of measure, the change does not affect the classification of storms from previous years. The new scale became operational on May 15, 2012.
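A minimal sketch of how the post-2012 boundaries and the 5-knot/5-mph/5-km/h rounding described above interact is given below. The category boundaries in knots follow the ranges quoted in this section; the code itself is only an illustration of the practice, not NHC software.

# Illustrative SSHWS classification from a 1-minute sustained wind given in knots,
# using the post-2012 category boundaries quoted above.
def category(kn):
    if kn < 64:   return 0   # below hurricane strength
    if kn <= 82:  return 1   # 64-82 kn
    if kn <= 95:  return 2   # 83-95 kn
    if kn <= 112: return 3   # 96-112 kn
    if kn <= 136: return 4   # 113-136 kn
    return 5                 # 137 kn and above

def round_to(x, step):
    return int(step * round(x / step))

for kn in (115, 135, 137):
    mph = round_to(kn * 1.15078, 5)   # knots -> mph, rounded to the nearest 5
    kmh = round_to(kn * 1.852, 5)     # knots -> km/h, rounded to the nearest 5
    print(f"{kn} kn -> {mph} mph, {kmh} km/h, Category {category(kn)}")
# 115 kn -> 130 mph, 215 km/h, Category 4
# 135 kn -> 155 mph, 250 km/h, Category 4
# 137 kn -> 160 mph, 255 km/h, Category 5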
Categories
The scale separates hurricanes into five different categories based on wind. The U.S. National Hurricane Center classifies hurricanes of Category 3 and above as major hurricanes. The Joint Typhoon Warning Center classifies typhoons of 150 mph (240 km/h) or greater (strong Category 4 and Category 5) as super typhoons. Most weather agencies use the definition for sustained winds recommended by the World Meteorological Organization (WMO), which specifies measuring winds at a height of for 10 minutes, and then taking the average. By contrast, the U.S. National Weather Service, Central Pacific Hurricane Center and the Joint Typhoon Warning Center define sustained winds as average winds over a period of one minute, measured at the same height, and that is the definition used for this scale.
The five categories are described in the following subsections, in order of increasing intensity. Example hurricanes for each category are limited to those which made landfall at their maximum achieved category on the scale.
Category 1
Very dangerous winds will produce some damage
Category 1 storms usually cause no significant structural damage to most well-constructed permanent structures. They can topple unanchored mobile homes, as well as uproot or snap weak trees. Poorly attached roof shingles or tiles can blow off. Coastal flooding and pier damage are often associated with Category 1 storms. Power outages are typically widespread to extensive, sometimes lasting several days. Even though Category 1 is the least intense type of hurricane, these storms can still produce widespread damage and can be life-threatening.
Hurricanes that peaked at Category 1 intensity and made landfall at that intensity include: Juan (1985), Ismael (1995), Danny (1997), Stan (2005), Humberto (2007), Isaac (2012), Manuel (2013), Earl (2016), Newton (2016), Nate (2017), Barry (2019), Lorena (2019), Hanna (2020), Isaias (2020), Nicholas (2021), Julia (2022), Lisa (2022), Nicole (2022), Debby (2024), and Oscar (2024).
Category 2
Extremely dangerous winds will cause extensive damage
Storms of Category 2 intensity often damage roofing material, sometimes exposing the roof, and inflict damage upon poorly constructed doors and windows. Poorly constructed signs and piers can receive considerable damage and many trees are uprooted or snapped. Mobile homes, whether anchored or not, are typically damaged and sometimes destroyed, and many manufactured homes suffer structural damage. Small craft in unprotected anchorages may break their moorings. Extensive to near-total power outages and scattered loss of potable water are likely, possibly lasting many days.
Hurricanes that peaked at Category 2 intensity and made landfall at that intensity include: Alice (1954), Ella (1958), Ginny (1963), Fifi (1974), Diana (1990), Gert (1993), Rosa (1994), Erin (1995), Alma (1996), Marty (2003), Juan (2003), Alex (2010), Tomas (2010), Carlotta (2012), Arthur (2014), Sally (2020), Olaf (2021), Rick (2021), Agatha (2022), and Francine (2024).
Category 3
Devastating damage will occur
Tropical cyclones of Category 3 and higher are described as major hurricanes in the Atlantic, Eastern Pacific, and Central Pacific basins. These storms can cause some structural damage to small residences and utility buildings, particularly those of wood frame or manufactured materials with minor curtain wall failures. Buildings that lack a solid foundation, such as mobile homes, are usually destroyed, and gable-end roofs are peeled off.
Manufactured homes usually sustain severe and irreparable damage. Flooding near the coast destroys smaller structures, while larger structures are struck by floating debris. A large number of trees are uprooted or snapped, isolating many areas. Terrain may be flooded well inland. Near-total to total power loss is likely for up to several weeks. Household water supplies will likely be lost or contaminated.
Hurricanes that peaked at Category 3 intensity and made landfall at that intensity include: Easy (1950), Carol (1954), Hilda (1955), Audrey (1957), Olivia (1967), Ella (1970), Eloise (1975), Alicia (1983), Elena (1985), Roxanne (1995), Fran (1996), Isidore (2002), Jeanne (2004), Lane (2006), Karl (2010), Otto (2016), Zeta (2020), Grace (2021), John (2024), and Rafael (2024).
Category 4
Catastrophic damage will occur
Category 4 hurricanes tend to produce more extensive curtainwall failures, with some complete structural failure on small residences. Heavy, irreparable damage and near-complete destruction of gas station canopies and other wide span overhang type structures are common. Mobile and manufactured homes are often flattened. Most trees, except for the hardiest, are uprooted or snapped, isolating many areas. These storms cause extensive beach erosion. Terrain may be flooded far inland. Total and long-lived electrical and water losses are to be expected, possibly for many weeks.
The 1900 Galveston hurricane, the deadliest natural disaster to hit the United States, peaked at an intensity that corresponds to a modern-day Category 4 storm. Other examples of storms that peaked at Category 4 intensity and made landfall at that intensity include: Hazel (1954), Gracie (1959), Donna (1960), Carla (1961), Flora (1963), Betsy (1965), Celia (1970), Carmen (1974), Madeline (1976), Frederic (1979), Joan (1988), Iniki (1992), Charley (2004), Dennis (2005), Ike (2008), Harvey (2017), Laura (2020), Ida (2021), Lidia (2023), and Helene (2024).
Category 5
Catastrophic damage will occur
Category 5 is the highest category of the Saffir–Simpson scale. These storms cause complete roof failure on many residences and industrial buildings, and some complete building failures with small utility buildings blown over or away. The collapse of many wide-span roofs and walls, especially those with no interior supports, is common. Very heavy and irreparable damage to many wood-frame structures and total destruction to mobile/manufactured homes is prevalent.
Only a few types of structures are capable of surviving intact, and only if located at least inland. They include office, condominium and apartment buildings and hotels that are of solid concrete or steel frame construction, multi-story concrete parking garages, and residences that are made of either reinforced brick or concrete/cement block and have hipped roofs with slopes of no less than 35 degrees from horizontal and no overhangs of any kind, and if the windows are either made of hurricane-resistant safety glass or covered with shutters. Unless most of these requirements are met, the catastrophic destruction of a structure may occur.
The storm's flooding causes major damage to the lower floors of all structures near the shoreline. Many coastal structures can be completely flattened or washed away by the storm surge. Virtually all trees are uprooted or snapped and some may be debarked, isolating most affected communities. Massive evacuation of residential areas may be required if the hurricane threatens populated areas. Total and extremely long-lived power outages and water losses are to be expected, possibly for up to several months.
Historical examples of storms that made landfall at Category 5 status include: "Cuba" (1924), "Okeechobee" (1928), "Bahamas" (1932), "Cuba–Brownsville" (1933), "Labor Day" (1935), Janet (1955), Inez (1966), Camille (1969), Edith (1971), Anita (1977), David (1979), Gilbert (1988), Andrew (1992), Dean (2007), Felix (2007), Irma (2017), Maria (2017), Michael (2018), Dorian (2019), and Otis (2023) (the only Pacific hurricane to make landfall at Category 5 intensity).
Criticism
Some scientists, including Kerry Emanuel and Lakshmi Kantha, have criticized the scale as being too simplistic, namely that the scale takes into account neither the physical size of a storm nor the amount of precipitation it produces. They and others point out that the Saffir–Simpson scale, unlike the moment magnitude scale used to measure earthquakes, is not continuous, and is quantized into a small number of categories. Proposed replacement classifications include the Hurricane Intensity Index, which is based on the dynamic pressure caused by a storm's winds, and the Hurricane Hazard Index, which is based on surface wind speeds, the radius of maximum winds of the storm, and its translational velocity. Both of these scales are continuous, akin to the Richter scale. However, neither of these scales has been used by officials.
Proposed extensions
After the series of powerful storm systems of the 2005 Atlantic hurricane season, as well as after Hurricane Patricia, a few newspaper columnists and scientists brought up the suggestion of introducing Category 6. They have suggested pegging Category 6 to storms with winds greater than . Fresh calls were made for consideration of the issue after Hurricane Irma in 2017, which was the subject of a number of seemingly credible false news reports as a "Category 6" storm, partly in consequence of so many local politicians using the term. Only a few storms of this intensity have been recorded.
Of the 42 hurricanes currently considered to have attained Category 5 status in the Atlantic, 19 had wind speeds at or greater. Only 9 had wind speeds at or greater (the 1935 Labor Day hurricane, Allen, Gilbert, Mitch, Rita, Wilma, Irma, Dorian, and Milton). Of the 21 hurricanes currently considered to have attained Category 5 status in the eastern Pacific, only 5 had wind speeds at or greater (Patsy, John, Linda, Rick, and Patricia). Only 3 had wind speeds at or greater (Linda, Rick, and Patricia).
Most storms which would be eligible for this category were typhoons in the western Pacific, most notably typhoons Tip, Halong, Mawar, and Bolaven in 1979, 2019, 2023 and 2023 respectively, each with sustained winds of , and typhoons Haiyan, Meranti, Goni, and Surigae in 2013, 2016, 2020 and 2021 respectively, each with sustained winds of .
Occasionally, suggestions of using even higher wind speeds as the cutoff have been made. In a newspaper article published in November 2018, NOAA research scientist Jim Kossin said that the potential for more intense hurricanes was increasing as the climate warmed, and suggested that Category 6 would begin at , with a further hypothetical Category 7 beginning at . In 2024 another proposal to add "Category 6" was made, with a minimum wind speed of , with risk factors such as the effects of climate change and warming ocean temperatures part of that research. In the NHC area of responsibility, only Patricia had winds greater than .
According to Robert Simpson, co-creator of the scale, there are no reasons for a Category 6 on the Saffir–Simpson scale because it is designed to measure the potential damage of a hurricane to human-made structures. Simpson explained that "... when you get up into winds in excess of you have enough damage if that extreme wind sustains itself for as much as six seconds on a building it's going to cause rupturing damages that are serious no matter how well it's engineered." Nonetheless, the counties of Broward and Miami-Dade in Florida have building codes which require that critical infrastructure buildings be able to withstand Category 5 winds.
| Physical sciences | Storms | Earth science |
255415 | https://en.wikipedia.org/wiki/New%20World%20barbet | New World barbet | The New World barbets are a family, Capitonidae, of 15 birds in the order Piciformes, which inhabit humid forests in Central and South America. They are closely related to the toucans.
The New World barbets are plump birds, with short necks and large heads. They get their name from the bristles that fringe their heavy bills. Most species are brightly coloured and live in tropical forest.
These barbets are mostly arboreal birds, which nest in tree holes dug by breeding pairs, laying two to four eggs. They eat fruit and insects. These birds do not migrate.
Taxonomy
Fossil New World barbets have been found dating from the Miocene in Florida. The closest relatives of the barbets are the toucans, and these two families are also closely related to the honeyguides and woodpeckers (with which they form the order Piciformes).
Formerly, the barbets have been treated as one family. This has turned out to be paraphyletic, though, with regard to toucans; thus, only the New World true barbets are retained in the Capitonidae. The African barbets (Lybiidae) and the Asian barbets (Megalaimidae), as well as the two toucan-barbets from the Americas (Semnornithidae) are currently split from this family. Alternatively, the toucans, which evolved from a common ancestor shared with the American barbets, might be included in the traditional all-encompassing barbet family. As they have evolved characteristics that are unique to themselves, they are usually treated separately, thus the barbets are split up according to the four lineages.
The phylogenetic relationship between the New World barbets and the eight other families in the order Piciformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Fossils
Genus Capitonides (Early – Middle Miocene of Europe) (fossil)
†Capitonides europeus
Ecology
While most New World barbet species inhabit lowland forest, some range into montane and temperate forests, as well. Most are restricted to habitats containing trees with dead wood, which are used for nesting.
The diet of barbets is mixed, with fruit being the dominant part. Small prey items are also taken, especially when nesting. Barbets are capable of shifting their diet quickly in the face of changing food availability. Numerous species of fruiting trees and bushes are visited; an individual barbet may feed on as many as 60 different species in its range. They also visit plantations and take cultivated fruit and vegetables. Fruit is eaten whole, and indigestible material such as seed pits is regurgitated later (often before singing). Regurgitation does not usually happen in the nest (as happens with toucans). Like their relatives, New World barbets are thought to be important agents in seed dispersal in tropical forests.
As well as taking fruit, they also take arthropod prey, gleaned from the branches and trunks of trees. A wide range of insects is taken, including ants, beetles, and moths. Scorpions and centipedes are also taken, and a few species take small vertebrates such as frogs.
Relationship with humans
New World barbets have little direct impact on humans. The loss of forest can have a deleterious effect on barbet species dependent on old growth, to the benefit of species that favor more disturbed or open habitat.
Three species of New World barbets are listed as threatened by the IUCN: The white-mantled barbet of Colombia is listed as endangered and the five-coloured barbet as vulnerable, the two having a relatively small range threatened by deforestation for the timber industry and to create space for agriculture (including coca and marijuana) and livestock, and mining. The quite recently discovered scarlet-banded barbet of Peru is considered vulnerable due to its small population size (estimated at under 1000 birds), although its remote habitat is not immediately threatened.
| Biology and health sciences | Piciformes | Animals |
255432 | https://en.wikipedia.org/wiki/Honeyguide | Honeyguide | Honeyguides (family Indicatoridae) are a family of birds in the order Piciformes. They are also known as indicator birds, or honey birds, although the latter term is also used more narrowly to refer to species of the genus Prodotiscus. They have an Old World tropical distribution, with the greatest number of species in Africa and two in Asia. These birds are best known for their interaction with humans. Honeyguides are noted and named for one or two species that will deliberately lead humans (but, contrary to popular claims, most likely not honey badgers) directly to bee colonies, so that they can feast on the grubs and beeswax that are left behind.
Taxonomy
The Indicatoridae were noted for their barbet-like structure and brood-parasitic behavior and morphologically considered unique among the non-passerines in having nine primaries. The phylogenetic relationship between the honeyguides and the eight other families that make up the order Piciformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Description
Most honeyguides are dull-colored, though some have bright yellow coloring in the plumage. All have light outer tail feathers, which are white in all the African species. The smallest species by body mass appears to be the green-backed honeyguide, at an average of , and by length appears to be the Cassin's honeyguide, at an average of , while the largest species by weight is the lyre-tailed honeyguide, at , and by length, is the greater honeyguide, at .
They are among the few birds that feed regularly on wax—beeswax in most species, and presumably the waxy secretions of scale insects in the genus Prodotiscus and to a lesser extent in Melignomon and the smaller species of Indicator. They also feed on waxworms which are the larvae of the waxmoth Galleria mellonella, on bee colonies, and on flying and crawling insects, spiders, and occasional fruits. Many species join mixed-species feeding flocks.
Behavior
Guiding
Honeyguides are named for a remarkable habit seen in one or two species: guiding humans to bee colonies. Once the hive is open and the honey is taken, the bird feeds on larvae and wax. This behavior has been studied in the greater honeyguide; some authorities (following Friedmann, 1955) state that it also occurs in the scaly-throated honeyguide, while others disagree. Wild honeyguides understand various types of human calls that attract them to engage in the foraging mutualism. In northern Tanzania, honeyguides partner with Hadza hunter-gatherers, and the bird assistance has been shown to increase honey-hunters' rates of finding bee colonies by 560%, and led men to significantly higher yielding nests than those found without honeyguides. Contrary to most depictions of the human-honeyguide relationship, the Hadza did not actively repay honeyguides, but instead, hid, buried, and burned honeycomb, with the intent of keeping the bird hungry and thus more likely to guide again. Some experts believe that honeyguide co-evolution with humans goes back to the stone-tool making human ancestor Homo erectus, about 1.9million years ago. Despite popular belief, no evidence indicates that honeyguides guide the honey badger; though videos about this exist, there have been accusations that they were staged.
Although most members of the family are not known to recruit "followers" in their quest for wax, they are also referred to as "honeyguides" by linguistic extrapolation.
Breeding
The breeding behavior of eight species in Indicator and Prodotiscus is known. They are all brood parasites that lay one egg in a nest of another species, laying eggs in series of about five during a period of 5–7 days. Most favor hole-nesting species, often the related barbets and woodpeckers, but Prodotiscus parasitizes cup-nesters such as white-eyes and warblers. Honeyguide nestlings have been known to physically eject their hosts' chicks from the nests and they have needle-sharp hooks on their beaks with which they puncture the hosts' eggs or kill the nestlings.
African honeyguide birds are known to lay their eggs in underground nests of other bee-eating bird species. The honeyguide chicks kill the hatchlings of the host using their needle-sharp beaks just after hatching, much as cuckoo hatchlings do. The honeyguide mother ensures her chick hatches first by internally incubating the egg for an extra day before laying it, so that it has a head start in development compared to the hosts' offspring.
| Biology and health sciences | Piciformes | null |
255446 | https://en.wikipedia.org/wiki/Thermodynamic%20potential | Thermodynamic potential | A thermodynamic potential (or more accurately, a thermodynamic potential energy) is a scalar quantity used to represent the thermodynamic state of a system. Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. While thermodynamic potentials cannot be measured directly, they can be predicted using computational chemistry.
One main thermodynamic potential that has a physical interpretation is the internal energy . It is the energy of configuration of a given system of conservative forces (that is why it is called potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for . In other words, each thermodynamic potential is equivalent to other thermodynamic potentials; each potential is a different expression of the others.
In thermodynamics, external forces, such as gravity, are counted as contributing to total energy rather than to thermodynamic potentials. For example, the working fluid in a steam engine sitting on top of Mount Everest has higher total energy due to gravity than it has at the bottom of the Mariana Trench, but the same thermodynamic potentials. This is because the gravitational potential energy belongs to the total energy rather than to thermodynamic potentials such as internal energy.
Description and interpretation
Five common thermodynamic potentials are:
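The five potentials usually listed here are the internal energy, its three Legendre transforms with respect to entropy and volume, and the grand (Landau) potential; in conventional notation (the symbols are defined in the sentence that follows):

U, \qquad F = U - TS, \qquad H = U + pV, \qquad G = U + pV - TS, \qquad \Omega = U - TS - \sum_i \mu_i N_i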
where T = temperature, S = entropy, p = pressure, and V = volume. N_i is the number of particles of type i in the system and μ_i is the chemical potential for an i-type particle. The set of all N_i are also included as natural variables but may be ignored when no chemical reactions are occurring which cause them to change. The Helmholtz free energy is, in the ISO/IEC standard, called Helmholtz energy or Helmholtz function. It is often denoted by the symbol F, but the use of A is preferred by IUPAC, ISO and IEC.
These five common potentials are all potential energies, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials.
Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings like the below:
Internal energy (U) is the capacity to do work plus the capacity to release heat.
Gibbs energy (G) is the capacity to do non-mechanical work.
Enthalpy (H) is the capacity to do non-mechanical work plus the capacity to release heat.
Helmholtz energy (F) is the capacity to do mechanical work plus non-mechanical work.
From these meanings (which actually apply in specific conditions, e.g. constant pressure, temperature, etc.), for positive changes (e.g., ΔU > 0), we can say that ΔU is the energy added to the system, ΔF is the total work done on it, ΔG is the non-mechanical work done on it, and ΔH is the sum of non-mechanical work done on the system and the heat given to it.
Note that internal energy is conserved, but Gibbs energy and Helmholtz energy are not, despite being named "energy". They are better interpreted as potentials to perform "useful work", and that potential can be wasted.
Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. The chemical reactions usually take place under some constraints such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards a lower value of a potential and at equilibrium, under these constraints, the potential will take the unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint.
In particular: (see principle of minimum energy for a derivation)
When the entropy and "external parameters" (e.g. volume) of a closed system are held constant, the internal energy decreases and reaches a minimum value at equilibrium. This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle.
When the temperature and external parameters of a closed system are held constant, the Helmholtz free energy decreases and reaches a minimum value at equilibrium.
When the pressure and external parameters of a closed system are held constant, the enthalpy decreases and reaches a minimum value at equilibrium.
When the temperature , pressure and external parameters of a closed system are held constant, the Gibbs free energy decreases and reaches a minimum value at equilibrium.
Natural variables
For each thermodynamic potential, there are thermodynamic variables that need to be held constant to specify the potential value at a thermodynamical equilibrium state, such as independent variables for a mathematical function. These variables are termed the natural variables of that potential. The natural variables are important not only to specify the potential value at the equilibrium, but also because if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables and this is true for no other combination of variables. If a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system.
The set of natural variables for each of the above four thermodynamic potentials is formed from a combination of the T, S, p, and V variables, excluding any pairs of conjugate variables; there is no natural variable set for a potential including the T–S or p–V variables together as conjugate variables for energy. An exception to this rule is the μ–N conjugate pairs, as there is no reason to ignore these in the thermodynamic potentials, and in fact we may additionally define the four potentials for each species. Using IUPAC notation in which the brackets contain the natural variables (other than the main four), we have:
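A conventional reading of this notation, with the Legendre-transformed variables shown in square brackets next to each potential's natural variables, is:

U = U(S, V, \{N_i\})
F = U[T] = U - TS, \qquad \text{natural variables } (T, V, \{N_i\})
H = U[p] = U + pV, \qquad \text{natural variables } (S, p, \{N_i\})
G = U[T, p] = U + pV - TS, \qquad \text{natural variables } (T, p, \{N_i\})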
If there is only one species, then we are done. But if there are, say, two species, then there will be additional potentials such as U[μ_1] and U[μ_2], and so on. If there are D dimensions to the thermodynamic space, then there are 2^D unique thermodynamic potentials. For the simplest case, a single-phase ideal gas, there will be three dimensions, yielding 2^3 = 8 thermodynamic potentials.
Fundamental equations
The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow. (Actually they are all expressions of the same fundamental thermodynamic relation, but are expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy U of a system can be written as the sum of heat flowing into the system minus the work done by the system on the environment, along with any change due to the addition of new particles to the system:

dU = δQ − δW + Σ_i μ_i dN_i

where δQ is the infinitesimal heat flow into the system, δW is the infinitesimal work done by the system, μ_i is the chemical potential of particle type i, and N_i is the number of particles of type i. (Neither δQ nor δW are exact differentials, i.e., they are thermodynamic process path-dependent. Small changes in these variables are, therefore, represented with δ rather than d.)
By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In case of reversible changes we have:

δQ = T dS and δW = p dV

where
T is temperature,
S is entropy,
p is pressure,
and V is volume, and the equality holds for reversible processes.
This leads to the standard differential form of the internal energy in case of a quasistatic reversible change:

dU = T dS − p dV + Σ_i μ_i dN_i

Since U, S, V, and the N_i are thermodynamic functions of state (also called state functions), the above relation also holds for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:

dU = T dS − Σ_i X_i dx_i + Σ_j μ_j dN_j

Here the X_i are the generalized forces corresponding to the external variables x_i.
Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials (fundamental thermodynamic equations or fundamental thermodynamic relation):
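With the symbols defined earlier, these four relations take the standard form:

dU = T\,dS - p\,dV + \sum_i \mu_i\,dN_i
dF = -S\,dT - p\,dV + \sum_i \mu_i\,dN_i
dH = T\,dS + V\,dp + \sum_i \mu_i\,dN_i
dG = -S\,dT + V\,dp + \sum_i \mu_i\,dN_i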
The infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of 2^D fundamental equations.
The differences between the four thermodynamic potentials can be summarized as follows:
Equations of state
We can use the above equations to derive some differential definitions of some thermodynamic parameters. If we define Φ to stand for any of the thermodynamic potentials, then the above equations are of the form:

dΦ = Σ_i x_i dy_i

where x_i and y_i are conjugate pairs, and the y_i are the natural variables of the potential Φ. From the chain rule it follows that:

x_j = (∂Φ/∂y_j)_{y_i, i≠j}

where {y_i, i≠j} is the set of all natural variables of Φ except y_j, which are held constant. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state since they specify parameters of the thermodynamic state. If we restrict ourselves to the potentials U (internal energy), F (Helmholtz energy), H (enthalpy) and G (Gibbs energy), then we have the following equations of state (subscripts showing natural variables that are held constant):
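For U, F, H and G these equations of state read as follows (the conventional listing; the subscripted variables are held constant):

+T = \left(\partial U/\partial S\right)_{V,\{N_i\}} = \left(\partial H/\partial S\right)_{p,\{N_i\}}
-p = \left(\partial U/\partial V\right)_{S,\{N_i\}} = \left(\partial F/\partial V\right)_{T,\{N_i\}}
+V = \left(\partial H/\partial p\right)_{S,\{N_i\}} = \left(\partial G/\partial p\right)_{T,\{N_i\}}
-S = \left(\partial F/\partial T\right)_{V,\{N_i\}} = \left(\partial G/\partial T\right)_{p,\{N_i\}}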
where, in the last equation, φ is any of the thermodynamic potentials (U, F, H, or G), and the subscript denotes the set of natural variables of that potential, excluding the variable being differentiated with respect to. If we use all of the thermodynamic potentials, then we will have further equations of state of the same form, and so on. In all, if the thermodynamic space is D dimensions, then there will be D equations for each potential, resulting in a total of D·2^D equations of state because 2^D thermodynamic potentials exist. If the equations of state for a particular potential are known, then the fundamental equation for that potential (i.e., the exact differential of the thermodynamic potential) can be determined. This means that all thermodynamic information about the system will be known, because the fundamental equations for any other potential can be found via the Legendre transforms, and the corresponding equations of state for each potential, as partial derivatives of the potential, can also be found.
Measurement of thermodynamic potentials
The above equations of state suggest methods to experimentally measure changes in the thermodynamic potentials using physically measurable parameters. For example the free energy expressions
and
can be integrated at constant temperature and quantities to obtain:
(at constant T, {Nj} )
(at constant T, {Nj} )
which can be measured by monitoring the measurable variables of pressure, temperature and volume. Changes in the enthalpy and internal energy can be measured by calorimetry (which measures the amount of heat ΔQ released or absorbed by a system). The expressions
can be integrated:
(at constant P, {Nj} )
(at constant V, {Nj} )
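Collected together, the integrated forms referred to in this section are commonly written as follows; C_p and C_V, the heat capacities at constant pressure and constant volume, are introduced here only for compactness:

\Delta F = -\int p\,dV \quad (\text{constant } T, \{N_j\})
\Delta G = \int V\,dp \quad (\text{constant } T, \{N_j\})
\Delta H = \int C_p\,dT \quad (\text{constant } p, \{N_j\})
\Delta U = \int C_V\,dT \quad (\text{constant } V, \{N_j\})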
Note that these measurements are made at constant {Nj } and are therefore not applicable to situations in which chemical reactions take place.
Maxwell relations
Again, define x_i and y_i to be conjugate pairs, and the y_i to be the natural variables of some potential Φ. We may take the "cross differentials" of the state equations, which obey the following relationship:

∂²Φ/(∂y_j ∂y_k) = ∂²Φ/(∂y_k ∂y_j)

From these we get the Maxwell relations. There will be D(D − 1)/2 of them for each potential, giving a total of D(D − 1)/2 · 2^D equations in all. If we restrict ourselves to U, F, H, and G:
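Restricted to U, F, H and G, the four relations most often quoted are:

\left(\partial T/\partial V\right)_{S,\{N_i\}} = -\left(\partial p/\partial S\right)_{V,\{N_i\}}
\left(\partial T/\partial p\right)_{S,\{N_i\}} = +\left(\partial V/\partial S\right)_{p,\{N_i\}}
\left(\partial S/\partial V\right)_{T,\{N_i\}} = +\left(\partial p/\partial T\right)_{V,\{N_i\}}
-\left(\partial S/\partial p\right)_{T,\{N_i\}} = \left(\partial V/\partial T\right)_{p,\{N_i\}}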
Using the equations of state involving the chemical potential we get equations such as:
and using the other potentials we can get equations such as:
Euler relations
Again, define x_i and y_i to be conjugate pairs, and the y_i to be the natural variables of the internal energy.
Since all of the natural variables of the internal energy are extensive quantities
it follows from Euler's homogeneous function theorem that the internal energy can be written as:
From the equations of state, we then have:
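In symbols, the homogeneity of U in its extensive arguments, combined with the equations of state above, gives:

U = S\left(\partial U/\partial S\right)_{V,\{N_i\}} + V\left(\partial U/\partial V\right)_{S,\{N_i\}} + \sum_i N_i\left(\partial U/\partial N_i\right)_{S,V} = TS - pV + \sum_i \mu_i N_i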
This formula is known as an Euler relation, because Euler's theorem on homogeneous functions leads to it. (It was not discovered by Euler in an investigation of thermodynamics, which did not exist in his day.)
Substituting into the expressions for the other main potentials we have:
As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Thus, there is another Euler relation, based on the expression of entropy as a function of internal energy and other extensive variables. Yet other Euler relations hold for other fundamental equations for energy or entropy, as respective functions of other state variables including some intensive state variables.
Gibbs–Duhem relation
Deriving the Gibbs–Duhem equation from basic thermodynamic state equations is straightforward. Equating any thermodynamic potential definition with its Euler relation expression yields:
Differentiating, and using the second law:
yields:
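Carried out for the internal energy, these steps read:

U = TS - pV + \sum_i \mu_i N_i
dU = T\,dS + S\,dT - p\,dV - V\,dp + \sum_i \mu_i\,dN_i + \sum_i N_i\,d\mu_i
dU = T\,dS - p\,dV + \sum_i \mu_i\,dN_i
\Rightarrow\; 0 = S\,dT - V\,dp + \sum_i N_i\,d\mu_i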
which is the Gibbs–Duhem relation. The Gibbs–Duhem relation is a relationship among the intensive parameters of the system. It follows that for a simple system with I components, there will be I + 1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume. The law is named after Josiah Willard Gibbs and Pierre Duhem.
Stability conditions
As the internal energy is a convex function of entropy and volume, the stability condition requires that the second derivatives of the internal energy with respect to entropy or volume be positive. It is commonly expressed as (∂²U/∂S²)_V ≥ 0 and (∂²U/∂V²)_S ≥ 0. Since the maximum principle of entropy is equivalent to the minimum principle of internal energy, the combined criterion for stability or thermodynamic equilibrium is expressed as dU = 0 and d²U > 0 for the parameters entropy and volume. This is analogous to the dS = 0 and d²S < 0 condition for entropy at equilibrium. The same concept can be applied to the various thermodynamic potentials by identifying whether they are convex or concave functions of their respective variables:
(∂²F/∂T²)_V ≤ 0 and (∂²F/∂V²)_T ≥ 0, where the Helmholtz energy is a concave function of temperature and a convex function of volume;
(∂²H/∂p²)_S ≤ 0 and (∂²H/∂S²)_p ≥ 0, where the enthalpy is a concave function of pressure and a convex function of entropy;
(∂²G/∂T²)_p ≤ 0 and (∂²G/∂p²)_T ≤ 0, where the Gibbs potential is a concave function of both pressure and temperature.
In general, the thermodynamic potentials (the internal energy and its Legendre transforms) are convex functions of their extensive variables and concave functions of their intensive variables. The stability conditions impose that the isothermal compressibility is positive and that, for non-negative temperature, the heat capacities satisfy C_p ≥ C_V ≥ 0.
Chemical reactions
Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions, as shown in the following table. denotes the change in the potential and at equilibrium the change will be zero.
Most commonly one considers reactions at constant and , so the Gibbs free energy is the most useful potential in studies of chemical reactions.
| Physical sciences | Thermodynamics | Physics |
255447 | https://en.wikipedia.org/wiki/Helmholtz%20free%20energy | Helmholtz free energy | In thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a thermodynamic potential that measures the useful work obtainable from a closed thermodynamic system at a constant temperature (isothermal). The change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. At constant temperature, the Helmholtz free energy is minimized at equilibrium.
In contrast, the Gibbs free energy or free enthalpy is most commonly used as a measure of thermodynamic potential (especially in chemistry) when it is convenient for applications that occur at constant pressure. For example, in explosives research Helmholtz free energy is often used, since explosive reactions by their nature induce pressure changes. It is also frequently used to define fundamental equations of state of pure substances.
The concept of free energy was developed by Hermann von Helmholtz, a German physicist, and first presented in 1882 in a lecture called "On the thermodynamics of chemical processes". From the German word Arbeit (work), the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol A and the name Helmholtz energy. In physics, the symbol F is also used in reference to free energy or Helmholtz function.
Definition
The Helmholtz free energy is defined as

F = U − TS
where
F is the Helmholtz free energy (sometimes also called A, particularly in the field of chemistry) (SI: joules, CGS: ergs),
U is the internal energy of the system (SI: joules, CGS: ergs),
T is the absolute temperature (kelvins) of the surroundings, modelled as a heat bath,
S is the entropy of the system (SI: joules per kelvin, CGS: ergs per kelvin).
The Helmholtz energy is the Legendre transformation of the internal energy U, in which temperature replaces entropy as the independent variable.
Formal development
The first law of thermodynamics in a closed system provides

dU = δQ + δW

where U is the internal energy, δQ is the energy added as heat, and δW is the work done on the system. The second law of thermodynamics for a reversible process yields δQ = T dS. In case of a reversible change, the work done can be expressed as δW = −p dV (ignoring electrical and other non-PV work) and so:

dU = T dS − p dV

Applying the product rule for differentiation to d(TS) = T dS + S dT, it follows

dU = d(TS) − S dT − p dV

and

d(U − TS) = −S dT − p dV

The definition of F = U − TS allows us to rewrite this as

dF = −S dT − p dV
Because F is a thermodynamic function of state, this relation is also valid for a process (without electrical work or composition change) that is not reversible.
Minimum free energy and maximum work principles
The laws of thermodynamics are only directly applicable to systems in thermal equilibrium. If we wish to describe phenomena like chemical reactions, then the best we can do is to consider suitably chosen initial and final states in which the system is in (metastable) thermal equilibrium. If the system is kept at fixed volume and is in contact with a heat bath at some constant temperature, then we can reason as follows.
Since the thermodynamical variables of the system are well defined in the initial state and the final state, the internal energy increase ΔU, the entropy increase ΔS, and the total amount of work that can be extracted, performed by the system, W, are well-defined quantities. Conservation of energy implies

ΔU_bath + ΔU + W = 0
The volume of the system is kept constant. This means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by
The heat bath remains in thermal equilibrium at temperature T no matter what the system does. Therefore, the entropy change of the heat bath is
The total entropy change is thus given by
Since the system is in thermal equilibrium with the heat bath in the initial and the final states, T is also the temperature of the system in these states. The fact that the system's temperature does not change allows us to express the numerator as the free energy change of the system:
Since the total change in entropy must always be larger than or equal to zero, we obtain the inequality

W ≤ −ΔF

We see that the total amount of work that can be extracted in an isothermal process is limited by the free-energy decrease, and that increasing the free energy in a reversible process requires work to be done on the system. If no work is extracted from the system, then

ΔF ≤ 0
and thus for a system kept at constant temperature and volume and not capable of performing electrical or other non-PV work, the total free energy during a spontaneous change can only decrease.
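Written out, the chain of steps used above is, in conventional notation (with the heat bath at temperature T and ΔF = ΔU − TΔS at constant temperature):

\Delta S_{\text{bath}} = \frac{Q_{\text{bath}}}{T} = -\frac{\Delta U + W}{T}, \qquad \Delta S_{\text{total}} = \Delta S - \frac{\Delta U + W}{T} = -\frac{\Delta F + W}{T} \ge 0 \;\Longrightarrow\; W \le -\Delta F \quad (\text{and } \Delta F \le 0 \text{ when } W = 0)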
This result seems to contradict the equation dF = −S dT − P dV, as keeping T and V constant seems to imply dF = 0, and hence F = constant. In reality there is no contradiction: In a simple one-component system, to which the validity of the equation dF = −S dT − P dV is restricted, no process can occur at constant T and V, since there is a unique P(T, V) relation, and thus T, V, and P are all fixed. To allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamical state space of the system. In case of a chemical reaction, one must allow for changes in the numbers Nj of particles of each type j. The differential of the free energy then generalizes to

dF = −S dT − P dV + Σ_j μ_j dN_j

where the N_j are the numbers of particles of type j and the μ_j are the corresponding chemical potentials. This equation is then again valid for both reversible and non-reversible changes. In case of a spontaneous change at constant T and V, the last term will thus be negative.
In case there are other external parameters, the above relation further generalizes to

dF = −S dT − Σ_i X_i dx_i + Σ_j μ_j dN_j

Here the x_i are the external variables, and the X_i the corresponding generalized forces.
Relation to the canonical partition function
A system kept at constant volume, temperature, and particle number is described by the canonical ensemble. The probability of finding the system in some energy eigenstate r is given by

P_r = e^(−β E_r) / Z

where
β = 1/(k_B T), and E_r is the energy of accessible state r.
Z is called the partition function of the system. The fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values. In the thermodynamical limit of infinite system size, the relative fluctuations in these averages will go to zero.
The average internal energy of the system is the expectation value of the energy and can be expressed in terms of Z as follows:
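In the notation above, with Z = \sum_r e^{-\beta E_r}, this reads:

U \equiv \langle E \rangle = \sum_r E_r P_r = -\frac{\partial \ln Z}{\partial \beta}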
If the system is in state r, then the generalized force corresponding to an external variable x is given by
The thermal average of this can be written as
Suppose that the system has one external variable . Then changing the system's temperature parameter by and the external variable by will lead to a change in :
If we write as
we get
This means that the change in the internal energy is given by
In the thermodynamic limit, the fundamental thermodynamic relation should hold:
This then implies that the entropy of the system is given by
where c is some constant. The value of c can be determined by considering the limit T → 0. In this limit the entropy becomes S = k ln ω0, where ω0 is the ground-state degeneracy. The partition function in this limit is ω0 exp(−U0/(kT)), where U0 is the ground-state energy. Thus, we see that c = 0 and that the free energy obtained from the partition function is F = −kT ln Z.
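As a concrete numerical illustration (a minimal sketch, not drawn from the source; the two-level system, the reduced units and all numerical values are assumptions), the free energy of a toy system can be computed from its partition function and differentiated to recover the entropy:

```python
import numpy as np

# Hypothetical two-level system with energy levels 0 and eps, in reduced units.
k = 1.0                       # Boltzmann constant (reduced units, assumption)
eps = 1.0                     # energy gap of the toy system (assumption)
levels = np.array([0.0, eps])

def helmholtz(T):
    Z = np.sum(np.exp(-levels / (k * T)))   # canonical partition function
    return -k * T * np.log(Z)               # F = -kT ln Z

T, dT = 0.75, 1e-6
S_from_F = -(helmholtz(T + dT) - helmholtz(T - dT)) / (2 * dT)   # S = -dF/dT

# Independent check: S = k ln Z + U/T, with U the canonical average energy.
w = np.exp(-levels / (k * T))
Z = w.sum()
U = np.sum(levels * w) / Z
S_direct = k * np.log(Z) + U / T

print(S_from_F, S_direct)     # the two values agree to roughly 1e-8
```

The finite-difference derivative of F reproduces the statistical-mechanical entropy, which is the content of the relation derived above.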
Relating free energy to other variables
Combining the definition of Helmholtz free energy
along with the fundamental thermodynamic relation
one can find expressions for entropy, pressure and chemical potential:
These three equations, along with the free energy in terms of the partition function,
allow an efficient way of calculating thermodynamic variables of interest given the partition function and are often used in density of state calculations. One can also do Legendre transformations for different systems. For example, for a system with a magnetic field or potential, it is true that
Bogoliubov inequality
Computing the free energy is an intractable problem for all but the simplest models in statistical physics. A powerful approximation method is mean-field theory, which is a variational method based on the Bogoliubov inequality. This inequality can be formulated as follows.
Suppose we replace the real Hamiltonian of the model by a trial Hamiltonian , which has different interactions and may depend on extra parameters that are not present in the original model. If we choose this trial Hamiltonian such that
where both averages are taken with respect to the canonical distribution defined by the trial Hamiltonian , then the Bogoliubov inequality states
where is the free energy of the original Hamiltonian, and is the free energy of the trial Hamiltonian. We will prove this below.
By including a large number of parameters in the trial Hamiltonian and minimizing the free energy, we can expect to get a close approximation to the exact free energy.
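As a quick numerical check of the inequality (a hedged sketch, not part of the source; the eight-state toy system, the inverse temperature and the random energies are all invented), one can build a trial Hamiltonian that differs from an exactly solvable one only by a constant, chosen so that the two averages coincide, as in the construction described in the following paragraphs:

```python
import numpy as np

# Toy classical system with eight discrete states (all numbers are invented).
rng = np.random.default_rng(0)
beta = 1.3                          # inverse temperature (assumption)
E = rng.normal(size=8)              # state energies of the "real" Hamiltonian
E0 = rng.normal(size=8)             # state energies of an exactly solvable Hamiltonian

p0 = np.exp(-beta * E0)
p0 /= p0.sum()                      # canonical distribution of the solvable model
shift = np.sum(p0 * (E - E0))       # average of the perturbation over that ensemble
E_trial = E0 + shift                # trial Hamiltonian: same average energy as E

def free_energy(energies):
    return -np.log(np.sum(np.exp(-beta * energies))) / beta

F = free_energy(E)                  # exact free energy
F_trial = free_energy(E_trial)      # free energy of the trial Hamiltonian
assert F <= F_trial + 1e-12         # Bogoliubov inequality holds numerically
print(F, F_trial)
```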
The Bogoliubov inequality is often applied in the following way. If we write the Hamiltonian as
where is some exactly solvable Hamiltonian, then we can apply the above inequality by defining
Here we have defined the average of a quantity X to be its average over the canonical ensemble defined by the exactly solvable Hamiltonian. Since the trial Hamiltonian defined this way differs from the exactly solvable one only by a constant, we have in general
where the average is still taken over the canonical ensemble defined by the exactly solvable Hamiltonian, as specified above. Therefore,
and thus the inequality
holds. The free energy of the trial Hamiltonian is the free energy of the exactly solvable model plus the average of the perturbation. This means that
and thus
Proof of the Bogoliubov inequality
For a classical model we can prove the Bogoliubov inequality as follows. We denote the canonical probability distributions for the Hamiltonian and the trial Hamiltonian by and , respectively. From Gibbs' inequality we know that:
holds. To see this, consider the difference between the left hand side and the right hand side. We can write this as:
Since
it follows that:
where in the last step we have used that both probability distributions are normalized to 1.
We can write the inequality as:
where the averages are taken with respect to . If we now substitute in here the expressions for the probability distributions:
and
we get:
Since the averages of the two Hamiltonians are, by assumption, identical, we have:
Here we have used that the partition functions are constants with respect to taking averages and that the free energy is proportional to minus the logarithm of the partition function.
We can easily generalize this proof to the case of quantum mechanical models. We denote the eigenstates of by . We denote the diagonal components of the density matrices for the canonical distributions for and in this basis as:
and
where the are the eigenvalues of
We assume again that the averages of H and the trial Hamiltonian, taken in the canonical ensemble defined by the trial Hamiltonian, are the same:
where
The inequality
still holds, as both sets of diagonal components sum to 1. On the l.h.s. we can replace:
On the right-hand side we can use the inequality
where we have introduced the notation
for the expectation value of the operator Y in the state r. Taking the logarithm of this inequality gives:
This allows us to write:
The fact that the averages of H and are the same then leads to the same conclusion as in the classical case:
Generalized Helmholtz energy
In the more general case, the mechanical term must be replaced by the product of volume, stress, and an infinitesimal strain:
where is the stress tensor, and is the strain tensor. In the case of linear elastic materials that obey Hooke's law, the stress is related to the strain by
where we are now using Einstein notation for the tensors, in which repeated indices in a product are summed. We may integrate this expression to obtain the Helmholtz energy:
Application to fundamental equations of state
The Helmholtz free energy function for a pure substance (together with its partial derivatives) can be used to determine all other thermodynamic properties for the substance. See, for example, the equations of state for water, as given by the IAPWS in their IAPWS-95 release.
Application to training auto-encoders
Hinton and Zemel "derive an objective function for training auto-encoder based on the minimum description length (MDL) principle". "The description length of an input vector using a particular code is the sum of the code cost and reconstruction cost"; given an input vector, they define this sum to be the energy of the code. The true expected combined cost is
"which has exactly the form of Helmholtz free energy".
| Physical sciences | Thermodynamics | Physics |
255463 | https://en.wikipedia.org/wiki/B%28e%29%20star | B(e) star | A B[e] star, frequently called a B[e]-type star, is a B-type star with distinctive forbidden neutral or low ionisation emission lines in its spectrum. The designation results from combining the spectral class B, the lowercase e denoting emission in the spectral classification system, and the surrounding square brackets signifying forbidden lines. These stars frequently also show strong hydrogen emission lines, but this feature is present in a variety of other stars and is not sufficient to classify a B[e] object. Other observational characteristics include optical linear polarization and often infrared radiation that is much stronger than in ordinary B-class stars, called infrared excess. As the B[e] nature is transient, B[e]-type stars might exhibit a normal B-type spectrum at times, and hitherto normal B-type stars may become B[e]-type stars.
Discovery
Many Be stars were discovered to have spectral peculiarities. One of these peculiarities was the presence of forbidden spectral lines of ionised iron and occasionally other elements.
In 1973 a study of one of these stars, HD 45677 or FS CMa, showed an infrared excess as well as forbidden lines of [OI], [SII], [FeII], [NiII], and many more.
In 1976 a study of Be stars with infrared excesses identified a subset of stars which showed forbidden emission lines from ionised iron and some other elements. These stars were all considered to be distinct from the classical main sequence Be stars, although they appeared to consist of a wide range of different types of star. The term B[e] star was coined to group these stars.
One type of B[e] star was readily identified as being highly luminous supergiants. By 1985, eight dust-shrouded B[e] supergiants were known in the Magellanic Clouds. Others were found to be definitely not supergiants. Some were binaries, others proto-planetary nebulae, and the term "B[e] phenomenon" was used to make it clear that different types of star could produce the same type of spectrum.
Classification
Following the recognition that the B[e] phenomenon could occur in several distinct types of star, four sub-types were named:
B[e] supergiants (sgB[e])
pre-main sequence B[e] stars (HAeB[e]), a subset of the Herbig Ae/Be stars
compact planetary nebulae B[e] stars (cPNB[e])
symbiotic B[e] stars (SymB[e])
Around half of the known B[e] stars could not be placed in any of these groups and were called unclassified B[e] stars (unclB[e]). The unclB[e] stars have since been re-classified as FS CMa stars, a type of variable named for one of the earliest known B[e] stars.
Nature
The forbidden emission, infrared excess, and other features indicative of the B[e] phenomenon, themselves provide strong hints at the nature of the stars. The stars are surrounded by ionised gas which produces intense emission lines in the same way as Be stars. The gas must be sufficiently extended to allow the formation of forbidden lines in the outer low density region, and also for dust to form which produces the infrared excess. These features are common to all the types of B[e] star.
The sgB[e] stars have hot fast winds which produce extended circumstellar material, plus a denser equatorial disc. HAeB[e] are surrounded by the remains of the molecular clouds which are forming the stars. Binary B[e] stars can produce discs of material as it is transferred from one star to another through Roche lobe overflow. cPNB[e] are post-AGB stars that have shed their entire atmospheres after reaching the end of their lives as actively fusing stars. The FS CMa stars appear to be binaries with a rapidly rotating mass-losing component.
| Physical sciences | Stellar astronomy | Astronomy |
255468 | https://en.wikipedia.org/wiki/Excretion | Excretion | Excretion is elimination of metabolic waste, which is an essential process in all organisms. In vertebrates, this is primarily carried out by the lungs, kidneys, and skin. This is in contrast with secretion, where the substance may have specific tasks after leaving the cell. For example, placental mammals expel urine from the bladder through the urethra, which is part of the excretory system. Unicellular organisms discharge waste products directly through the surface of the cell.
During life activities such as cellular respiration, several chemical reactions take place in the body. These reactions are collectively known as metabolism. They produce waste products such as carbon dioxide, water, salts, urea and uric acid. Accumulation of these wastes beyond a certain level inside the body is harmful. The excretory organs remove these wastes. This process of removal of metabolic waste from the body is known as excretion.
Processes across various types of life
Plants
Green plants excrete carbon dioxide and water as respiratory products. In green plants, the carbon dioxide released during respiration gets used during photosynthesis. Oxygen is a byproduct generated during photosynthesis, and exits through stomata, root cell walls, and other routes. Plants can get rid of excess water by transpiration and guttation. It has been shown that the leaf acts as an 'excretophore' and, in addition to being a primary organ of photosynthesis, is also used as a method of excreting toxic wastes via diffusion. Other waste materials exuded by some plants, such as resin, saps, and latex, are forced from the interior of the plant by hydrostatic pressures inside the plant and by absorptive forces of plant cells. These latter processes do not need added energy; they act passively. However, during the pre-abscission phase, the metabolic levels of a leaf are high. Plants also excrete some waste substances into the soil around them.
Animals
In animals, the main excretory products are carbon dioxide, ammonia (in ammoniotelics), urea (in ureotelics), uric acid (in uricotelics), guanine (in Arachnida), and creatine. The liver and kidneys clear many substances from the blood (for example, in renal excretion), and the cleared substances are then excreted from the body in the urine and feces.
Aquatic animals usually excrete ammonia directly into the external environment, as this compound has high solubility and there is ample water available for dilution. In terrestrial animals, ammonia-like compounds are converted into other nitrogenous materials, i.e. urea, that are less harmful as there is less water in the environment and ammonia itself is toxic. This process is called detoxification.
Birds
Birds excrete their nitrogenous wastes as uric acid in the form of a paste. Although this process is metabolically more expensive, it allows more efficient water retention and it can be stored more easily in the egg. Many avian species, especially seabirds, can also excrete salt via specialized nasal salt glands, the saline solution leaving through nostrils in the beak.
Insects
In insects, a system involving Malpighian tubules is used to excrete metabolic waste. Metabolic waste diffuses or is actively transported into the tubule, which transports the wastes to the intestines. The metabolic waste is then released from the body along with fecal matter.
The excreted material may be called ejecta. In pathology the word ejecta is more commonly used.
| Biology and health sciences | Basics_3 | null |
2566958 | https://en.wikipedia.org/wiki/Gala%20%28apple%29 | Gala (apple) | Gala is an apple cultivar with a sweet, mild flavor, a crisp but not hard texture, and a striped or mottled orange or reddish appearance. Originating from New Zealand in the 1930s, similar to most named apples, it is clonally propagated. In 2018, it surpassed Red Delicious as the apple cultivar with the highest production in the United States, according to the US Apple Association. It was the first time in over 50 years that any cultivar was produced more than Red Delicious.
Appearance and flavor
Gala apples are non-uniform in color, usually vertically striped or mottled, with overall orange color. They are sweet, fine textured, and aromatic, and in addition to being eaten raw and cooked are especially suitable for creating sauces.
Density: 0.86 g/cc
Sugar: 13.5%
Acidity: 4.2 grams/litre
Vitamin C: 0–5 mg/100 g
History
The first Gala apple tree was one of many seedlings resulting from a cross between a Golden Delicious and a Kidd's Orange Red planted in Greytown, Wairarapa, New Zealand in the 1930s by orchardist J.H. Kidd. It was selected in 1939 and introduced in 1960. Donald W. McKenzie, an employee of Stark Bros Nursery, obtained a US plant patent for the cultivar on October 15, 1974. It is a relatively new introduction to the UK, first planted in commercial volumes during the 1980s. The variety now represents about 20% of the total volume of the commercial production of eating apples grown in the UK, often replacing Cox's Orange Pippin.
Sports (mutations)
Many sports of Gala have been selected, mostly for increased red color, including the popular Royal Gala. The original cultivar produced fruit with orange stripes and a partial orange blush over a yellow background. Since then, several un-patented sports have been recognized. Additionally, more than twenty sports have received US plant patents:
Unpatented varieties
Descendant cultivars
Season
Gala apples are grown from May through September in the northern hemisphere, but, like most apples, are available almost all year through the use of cold storage and controlled atmosphere storage. Australian Gala are available from late January. California fruit is available until October.
While the season usually lasts only 9 or 10 months, Gala apples can be available all year round. Because some orchards continue to grow the apples and because the fruit can be refrigerated for some months, Gala apples are available year-round in some Australian markets. These usually taste different (slightly less sweet) from those in season. The UK season begins in late summer (August). Storage makes the UK fruit available nearly year-round, as with fruit from other origins.
Royal Gala sport
Royal Gala is a Gala sport, patented by Stark in 1977, which produces redder fruits than the original cultivar. It is a pink-red dessert apple and is therefore usually eaten fresh. Royal Galas are usually harvested in early to late February in the southern hemisphere. In New Zealand, the pinker original Gala has almost disappeared as a commercial apple in favor of the darker-skinned Royal Gala.
| Biology and health sciences | Pomes | Plants |
2567707 | https://en.wikipedia.org/wiki/Formal%20specification | Formal specification | In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software. They are used to describe a system, to analyze its behavior, and to aid in its design by verifying key properties of interest through rigorous and effective reasoning tools. These specifications are formal in the sense that they have a syntax, their semantics fall within one domain, and they are able to be used to infer useful information.
Motivation
With each passing decade, computer systems have become increasingly powerful and, as a result, more impactful to society. Because of this, better techniques are needed to assist in the design and implementation of reliable software. Established engineering disciplines use mathematical analysis as the foundation of creating and validating product design. Formal specifications are one way to achieve this kind of rigour in software engineering. Other methods, such as testing, are more commonly used to enhance code quality.
Uses
Given such a specification, it is possible to use formal verification techniques to demonstrate that a system design is correct with respect to its specification. This allows incorrect system designs to be revised before any major investments have been made into an actual implementation. Another approach is to use provably correct refinement steps to transform a specification into a design, which is ultimately transformed into an implementation that is correct by construction.
It is important to note that a formal specification is not an implementation, but rather it may be used to develop an implementation. Formal specifications describe what a system should do, not how the system should do it.
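As an informal illustration of this what-versus-how distinction (a sketch in ordinary Python rather than a specification language such as Z or VDM; the function name and the example data are invented), a sorting routine can be specified purely by the properties its output must satisfy, without mentioning any algorithm:

```python
from collections import Counter

def satisfies_sort_spec(inp: list, out: list) -> bool:
    """Check the *what*: the output is ordered and is a permutation of the input."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = Counter(inp) == Counter(out)   # same multiset of elements
    return ordered and permutation

# Any implementation whatsoever (the *how*) can be checked against the specification:
data = [3, 1, 2]
assert satisfies_sort_spec(data, sorted(data))
```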
A good specification must have some of the following attributes: adequate, internally consistent, unambiguous, complete, satisfied, minimal.
A good specification will have:
Constructability, manageability and evolvability
Usability
Communicability
Powerful and efficient analysis
One of the main reasons there is interest in formal specifications is that they will provide an ability to perform proofs on software implementations. These proofs may be used to validate a specification, verify correctness of design, or to prove that a program satisfies a specification.
Limitations
A design (or implementation) cannot ever be declared “correct” on its own. It can only ever be “correct with respect to a given specification”. Whether the formal specification correctly describes the problem to be solved is a separate issue. It is also a difficult issue to address since it ultimately concerns the problem of constructing abstracted formal representations of an informal concrete problem domain, and such an abstraction step is not amenable to formal proof. However, it is possible to validate a specification by proving “challenge” theorems concerning properties that the specification is expected to exhibit. If correct, these theorems reinforce the specifier's understanding of the specification and its relationship with the underlying problem domain. If not, the specification probably needs to be changed to better reflect the domain understanding of those involved with producing (and implementing) the specification.
Formal methods of software development are not widely used in industry. Most companies do not consider it cost-effective to apply them in their software development processes. This may be for a variety of reasons, some of which are:
Time
High initial start up cost with low measurable returns
Flexibility
Many software companies use agile methodologies that focus on flexibility. Doing a formal specification of the whole system up front is often perceived as being the opposite of flexible. However, there is some research into the benefits of using formal specifications with "agile" development.
Complexity
They require a high level of mathematical expertise and the analytical skills to understand and apply them effectively
A solution to this would be to develop tools and models that allow for these techniques to be implemented but hide the underlying mathematics
Limited scope
They do not capture properties of interest for all stakeholders in the project
They do not do a good job of specifying user interfaces and user interaction
Not cost-effective
This is not entirely true, by limiting their use to only core parts of critical systems they have shown to be cost-effective
Other limitations:
Isolation
Low-level ontologies
Poor guidance
Poor separation of concerns
Poor tool feedback
Paradigms
Formal specification techniques have existed in various domains and on various scales for quite some time. Implementations of formal specifications will differ depending on what kind of system they are attempting to model, how they are applied and at what point in the software life cycle they have been introduced. These types of models can be categorized into the following specification paradigms:
History-based specification
behavior based on system histories
assertions are interpreted over time
State-based specification
behavior based on system states
series of sequential steps, (e.g. a financial transaction)
languages such as Z, VDM or B rely on this paradigm
Transition-based specification
behavior based on transitions from state-to-state of the system
best used with a reactive system
languages such as Statecharts, PROMELA, STeP-SPL, RSML or SCR rely on this paradigm
Functional specification
specify a system as a structure of mathematical functions
OBJ, ASL, PLUSS, LARCH, HOL or PVS rely on this paradigm
Operational Specification
early languages such as Paisley, GIST, Petri nets or process algebras rely on this paradigm
Multi-paradigm languages
FizzBee is a multi-paradigm specification language that allows for transition/action based specification, behavioral specifications with non-atomic transitions, and also supports the actor model.
In addition to the above paradigms, certain heuristics can be applied to help improve the creation of these specifications. One cited paper discusses heuristics to use when designing a specification, applying a divide-and-conquer approach.
Software tools
The Z notation is an example of a leading formal specification language. Others include the Specification Language (VDM-SL) of the Vienna Development Method and the Abstract Machine Notation (AMN) of the B-Method. In the Web services area, formal specification is often used to describe non-functional properties (Web services quality of service).
Some tools are:
Algebraic
Larch
OBJ
Lotos
Model-based
Z
B
VDM
CSP
Petri Nets
TLA+
FizzBee
| Technology | Software development: General | null |
2571116 | https://en.wikipedia.org/wiki/Arthropathy | Arthropathy | An arthropathy is a disease of a joint.
Types
Arthritis is a form of arthropathy that involves inflammation of one or more joints, while the term arthropathy may be used regardless of whether there is inflammation or not.
Joint diseases can be classified as follows:
Arthritis
Infectious arthritis
Septic arthritis (infectious)
Tuberculosis arthritis
Reactive arthritis (indirectly)
Noninfectious arthritis
Seronegative spondyloarthropathy:
Psoriatic arthritis
Ankylosing spondylitis
Rheumatoid arthritis: Felty's syndrome
Juvenile idiopathic arthritis
Adult-onset Still's disease
Crystal arthropathy
Gout
Chondrocalcinosis
Osteoarthritis
Hemarthrosis (joint bleeding)
Synovitis is the medical term for inflammation of the synovial membrane.
Joint dislocation
With arthropathy in the name
Reactive arthropathy (M02-M03) is caused by an infection, but not a direct infection of the synovial space. | Biology and health sciences | Specific diseases | Health |
22241037 | https://en.wikipedia.org/wiki/Evolution%20of%20brachiopods | Evolution of brachiopods | The origin of the brachiopods is uncertain; they either arose from reduction of a multi-plated tubular organism, or from the folding of a slug-like organism with a protective shell on either end. Since their Cambrian origin, the phylum rose to a Palaeozoic dominance, but dwindled during the Mesozoic.
Origins
Brachiopod fold hypothesis
The long-standing hypothesis of brachiopod origins, which has recently come under fire, suggests that the brachiopods arose by the folding of a Halkieria-like organism, which bore two protective shells at either end of a scaled body. The tannuolinids were thought to represent an intermediate form, although the fact that they do not, as thought, possess a scleritome means that this is now considered unlikely. Under this hypothesis, the Phoronid worms share a similar evolutionary history; molecular data also appear to indicate their membership of Brachiopoda.
Under the Brachiopod Fold Hypothesis, the "dorsal" and "ventral" valves would in fact represent an anterior and posterior shell. This would make the axes of symmetry consistent with that of other bilaterian phyla and appears to be consistent with the embryological development, in which the body axis folds to bring the shells from the dorsal surface to their mature position. Further support has been identified from the gene expression pattern during development, but on balance, developmental evidence speaks against the BFH.
More recent developmental studies have cast doubt on the BFH. Most significantly, the dorsal and ventral valves have significantly different origins; the dorsal (branchial) valve is secreted by dorsal epithelia, whereas the ventral (pedicle) valve corresponds to the cuticle of the pedicle, which becomes mineralized during development. Moreover, the dorsal and ventral valves of Lingula do not display the Hox gene expression patterns that would be expected if they were ancestrally 'anterior' and 'posterior'.
Tommotiids
The 'tommotiids' are an informal group of animals thought to be lophotrochozoans. Their remains are usually found as microfossils, entombed in carbonate as phosphatic sclerites (armor plates). While the sclerites are disarticulated in their fossil state, in life a huge number of them would have articulated and attached onto a soft-bodied animal. The taxonomic affinities of such animals have long been uncertain - they had been compared to other fossils known from armor plates/scales, such as Halkieria and the machaeridian worms.
Continuing research in the current century has brought an exciting new perspective on the affinities of tommotiids: they are now being regarded as stem-group brachiopods. One crucial fossil linking the tommotiids with brachiopods is Micrina. Analysis of the microscopic inner structure of the phosphatic shell has shown similarities to the organophosphatic brachiopods, one of them being tubes - which must have housed setae in life - perforating the shell layers. Setigerous tubes have also been found in early brachiopods, such as the Paterinates. A later publication (Holmer et al. 2008) asserted that Micrina was a bivalved animal not unlike a brachiopod, having only two armor plates in life. Tommotiid sclerites can be classified by their shape, and most had two types of them: the sellate sclerite and the mitral sclerite. In this model Micrina had one of each. The sellate and mitral sclerites of tommotiids would end up becoming the dorsal (brachial) and ventral (pedicle) valves respectively.
Another crucial find was the discovery of (partially) articulated tommotiids. The first of these is Eccentrotheca, and the second Paterimitra. Unlike the traditional view of them being slug-like animals comparable to Halkieria, the articulated exoskeletons suggest that they were sessile filter feeders, just like the brachiopods and their sister-group phoronids. Their shell microstructure, again, shows similarity to the Paterinate brachiopods, especially in their primary mineralised layer.
Appearance of the brachiopod crown-group
The earliest unequivocal brachiopod fossils appeared in the early Cambrian Period. The oldest known brachiopod is Aldanotreta sunnaginensis from the lowest Tommotian Stage (early Cambrian) of Siberia, which was confidently identified as a paterinid linguliform.
The question of Paterinata
The brachiopod class Paterinata is an organophosphatic-shelled group that includes some of the oldest brachiopods known. They are usually considered members of Linguliformea, being sister-groups with the similarly organophosphatic lingulates. However, paterinates possess a number of traits that resemble the 'articulate' brachiopods more than lingulates. Their adductor muscle scars are oriented postero-medially like the rhynchonelliforms. They have a strophic (straight) hinge line, which resembles early articulate groups such as the orthids. Their mantle canal system houses gonads (like the craniiforms) and has exclusively marginal vascula terminalia. This mosaic of traits has led to repeated suggestions that paterinates (or at least a few of them) could be very early diverging members separate from the lingulates. Their shell microstructure also seems to be closer to the stem-brachiopod tommotiids, though this similarity was only pointed out later.
Evolutionary history
Palaeozoic dominance
Brachiopods are extremely common fossils throughout the Palaeozoic.
During the Ordovician and Silurian periods, brachiopods became adapted to life in most marine environments and became particularly numerous in shallow water habitats, in some cases forming whole banks in much the same way as bivalves (such as mussels) do today. In some places, large sections of limestone strata and reef deposits are composed largely of their shells.
The major shift came with the Permian extinction, as a result of the Mesozoic marine revolution. Before the extinction event, brachiopods were more numerous and diverse than bivalve mollusks. Afterwards, in the Mesozoic, their diversity and numbers were drastically reduced and they were largely replaced by bivalve molluscs. Molluscs continue to dominate today, and the remaining orders of brachiopods survive largely in fringe environments.
Mesozoic decline
Throughout their long geological history, the brachiopods have gone through several major proliferations and diversifications, and have also suffered from major extinctions as well.
It has been suggested that the slow decline of the brachiopods over the last 100 million years or so is a direct result of the rise in diversity of filter-feeding bivalves, which have ousted the brachiopods from their former habitats; however, the bivalves have undergone a steady rise in diversity from the mid-Paleozoic onwards, and their abundance is unrelated to that of the brachiopods; further, many bivalves occupy niches (e.g. burrowing) which brachiopods never inhabited.
Alternative possibilities for their demise include the increasing disturbance of sediments by roving deposit feeders (including many burrowing bivalves); the increased intensity and variety of shell-crushing predation; or even chance demise – they were hard hit in the End-Permian extinction and may simply never have recovered.
| Biology and health sciences | Basics_4 | Biology |
22249817 | https://en.wikipedia.org/wiki/Genetic%20admixture | Genetic admixture | Genetic admixture occurs when previously isolated populations interbreed, resulting in a population that is descended from multiple sources. It can occur between species, such as with hybrids, or within species, such as when geographically distant individuals migrate to new regions. It results in a gene pool that is a mix of the source populations.
Examples
Climatic cycles facilitate genetic admixture in cold periods and genetic diversification in warm periods.
Natural flooding can cause genetic admixture within populations of migrating fish species.
Genetic admixture may have an important role for the success of populations that colonise a new area and interbreed with individuals of native populations.
Mapping
Admixture mapping is a method of gene mapping that uses a population of mixed ancestry (an admixed population) to find the genetic loci that contribute to differences in diseases or other phenotypes found between the different ancestral populations. The method is best applied to populations with recent admixture from two populations that were previously genetically isolated. The method attempts to correlate the degree of ancestry near a genetic locus with the phenotype or disease of interest. Genetic markers that differ in frequency between the ancestral populations are needed across the genome.
Admixture mapping is based on the assumption that differences in disease rates or phenotypes are due in part to differences in the frequencies of disease-causing or phenotype-causing genetic variants between populations. In an admixed population, these causal variants occur more frequently on chromosomal segments inherited from one or another ancestral population. The first admixture scans were published in 2005 and since then genetic contributors to a variety of disease and trait differences have been mapped. By 2010, high-density mapping panels had been constructed for African Americans, Latino/Hispanics, and Uyghurs.
| Biology and health sciences | Basics_4 | Biology |
4736453 | https://en.wikipedia.org/wiki/Vial | Vial | A vial (also known as a phial or flacon) is a small glass or plastic vessel or bottle, often used to store medication in the form of liquids, powders, or capsules. They can also be used as scientific sample vessels; for instance, in autosampler devices in analytical chromatography. Vial-like glass containers date back to classical antiquity; modern vials are often made of plastics such as polypropylene. There are different types of vials such as a single dose vial and multi-dose vials often used for medications. The single dose vial is only used once whereas a multi-dose vial can be used more than once. The CDC sets specific guidelines on multi-dose vials.
History and etymology
A vial can be tubular, or have a bottle-like shape with a neck. The volume defined by the neck is known as the headspace.
The English word "vial" is derived from the Greek phiale, meaning "a broad flat container". Comparable terms include the Latin phiala, Late Latin fiola and Middle English fiole and viole.
Modern vials
Modern vials are often made out of glass or plastic. They are often used as storage for small quantities of liquid used in medical or molecular biology applications.
There are several different types of commonly used closure systems. For glass vials, options include screw vials (closed with a screw cap or dropper/pipette), lip vials (closed with a cork or plastic stopper) and crimp vials (closed with a rubber stopper and a metal cap). Plastic vials, which can be moulded in plastic, can have other closure systems, such as 'hinge caps' which snap shut when pressed. These are sometimes called flip-tops or snap caps.
The bottom of a vial is often flat, unlike test tubes, which usually have a rounded bottom, but this is often not the case for small hinge-cap or snap-top vials. The small bottle-shaped vials typically used in laboratories are also known as bijou or McCartney's bottles. The bijou bottle tends to be smaller, often with a volume of around 10 milliliters.
| Technology | Containers | null |
19584690 | https://en.wikipedia.org/wiki/Wood%20ash | Wood ash | Wood ash is the powdery residue remaining after the combustion of wood, such as burning wood in a fireplace, bonfire, or an industrial power plant. It is largely composed of calcium compounds, along with other non-combustible trace elements present in the wood, and has been used for many purposes throughout history.
Composition
Variability in assessment
A comprehensive set of analyses of wood ash composition from many tree species has been carried out by Emil Wolff, among others. Several factors have a major impact on the composition:
Fine ash: Some studies include the solids escaping via the flue during combustion, while others do not.
Temperature of combustion. Ash content yield decreases with increasing combustion temperature which produces two direct effects:
Dissociation: Conversion of carbonates, sulfides, etc., to oxides results in no carbon, sulfur, carbonates, or sulfides. Some metallic oxides (e.g. mercuric oxide) even dissociate to their elemental state and/or vaporize completely at wood fire temperatures.
Volatilization: In studies in which the escaped ash is not measured, some combustion products may not be present at all. Arsenic, for example, is not volatile, but arsenic trioxide is.
Experimental process: If the ashes are exposed to the environment between combustion and the analysis, oxides may convert back to carbonates by reacting with carbon dioxide in the air. Hygroscopic substances meanwhile may absorb atmospheric moisture.
Type, age, and growing environment of the wood stock affect the composition of the wood (e.g. hardwood and softwood), and thus the ash. Hardwoods usually produce more ash than softwoods, with bark and leaves producing more than the internal parts of the trunk.
Measurements
The burning of wood results in about 6–10% ash on average. Residue ash amounting to between 0.43 and 1.82 percent of the original mass of burned wood (on a dry basis, meaning that H2O is driven off) is produced for certain woods if the wood is pyrolyzed until all volatiles disappear and then burned for 8 hours. The conditions of combustion also affect the composition and amount of the residue ash; a higher combustion temperature will reduce the ash yield.
Elemental analysis
Typically, wood ash contains the following major elements:
Carbon (C) — 5–30%.
Calcium (Ca) — 7–33%
Potassium (K) — 3–10%
Magnesium (Mg) — 1–2%
Manganese (Mn) — 0.3–1.3%
Phosphorus (P) — 0.3–1.4%
Sodium (Na) — 0.2–0.5%.
Chemical compounds
As the wood burns, it produces different compounds depending on the temperature used. Some studies cite calcium carbonate (CaCO3) as the major constituent, others find no carbonate at all but calcium oxide (CaO) instead. The latter is produced at higher temperatures (see calcination). The equilibrium reaction CaCO3 → CO2 + CaO is shifted leftward (toward the carbonate) by a high CO2 partial pressure, such as in a wood fire, but shifted rightward at higher temperatures or when the partial pressure is reduced.
Much of wood ash contains calcium carbonate (CaCO3) as its major component, representing 25% or even 45% of total ash weight. CaCO3 and K2CO3 were identified in one case. Less than 10% is potash, and less than 1% is phosphate.
Trace elements
There are trace elements of iron (Fe), manganese (Mn), zinc (Zn), copper (Cu) and some heavy metals. Their concentrations in ash vary with combustion temperature. Decomposition of carbonates and the volatilization of potassium (K), sulfur (S), and trace amounts of copper (Cu) and boron (B) may result from increased temperature. One study found that at raised temperatures K, S, B, sodium (Na) and copper (Cu) decreased, whereas Mg, P, Mn, Al, Fe, and Si did not change relative to calcium (Ca). All of these trace elements are, however, present in the form of oxides at higher combustion temperatures. Some elements in wood ash (all fractions given in mass of elements per mass of ash) include:
Fe 1.6-55 ‰
Si 6-170 ‰
Al 1.2-45 ‰
Mn 1-20 ‰
As 0.6-50 ppm
Cd 0.18-60 ppm
Pb 2-500 ppm
Cr 12-280 ppm
Ni 10-140 ppm
V 1.8-120 ppm
Fuels
One study determined that emissions from slowly burning wood typically include 16 alkenes, 5 alkadienes, 5 alkynes and several alkanes and arenes in varying proportions. Ethene, acetylene and benzene were major components during efficient combustion. The proportion of C3–C7 alkenes was found to be higher during smouldering. Benzene and 1,3-butadiene constituted ~10–20% and ~1–2% by mass of total non-methane hydrocarbons.
Uses
Fertilizers
Wood ash can be used as a fertilizer used to enrich agricultural soil nutrition. In this role, wood ash serves as a source of potassium and calcium carbonate, the latter acting as a liming agent to neutralize acidic soils.
Wood ash can also be used as an amendment for organic hydroponic solutions, generally replacing inorganic compounds containing calcium, potassium, magnesium and phosphorus.
Composts
Wood ash is commonly disposed of in landfills, but with rising disposal costs, ecologically friendly alternatives, such as serving as compost for agricultural and forestry applications, are becoming more popular. Because wood ash has a high char content, it can be used as an odor control agent, especially in composting operations.
Pottery
Wood ash has a very long history of being used in ceramic glazes, particularly in the Chinese, Japanese and Korean traditions, though now used by many craft potters. It acts as a flux, reducing the melting point of the glaze.
Soaps
For thousands of years, plant or wood ash was leached with water to yield an impure solution of potassium carbonate. This product could be mixed with oils or fats to produce a soft "soap" or soap-like product, as was done in ancient Sumeria, Europe, and Egypt. However, only certain types of plants could produce a soap that actually lathered. Later, medieval European soapmakers treated the wood ash solution with slaked lime, which contains calcium hydroxide, to get a hydroxide-rich solution for soapmaking. However, it was not until the invention of the Leblanc process that high quality sodium hydroxide could be mass produced, rendering obsolete the earlier forms of soap using crude wood or plant ash. This was a revolutionary discovery that facilitated the modern soapmaking industry.
Bio-leaching
The ectomycorrhizal fungi Suillus granulatus and Paxillus involutus can release elements from wood ash.
Food preparation
Wood ash is sometimes used in the process of nixtamalization, where certain types of corn (typically maize or sorghum) are soaked and cooked in an alkali solution to improve nutritional content and decrease risk of mycotoxins. The alkali solution has historically been made from wood ash lye.
Nixtamalization was originally practiced in Mesoamerica, from which it spread northwards through various indigenous tribes of North America. In eastern North America, nixtamalized corn was traditionally eaten in porridges and stews, a dish that Europeans would call hominy. Wood ash is also used as a preservative for some kinds of cheese, such as Morbier and Humboldt Fog.
An early leavened bread was baked as early as 6000 BC by the Sumerians by placing the bread on heated stones and covering it with hot ash. The minerals in the wood ash could have supplemented the nutritional content of the dough as it was baked. In present day, the amount of wood ash content in bread flour, as measured by the Chopin alveograph, is strictly regulated by France.
| Physical sciences | Salts and ions: General | Chemistry |
15648231 | https://en.wikipedia.org/wiki/Facial%20skeleton | Facial skeleton | The facial skeleton comprises the facial bones that may attach to build a portion of the skull. The remainder of the skull is the neurocranium.
In human anatomy and development, the facial skeleton is sometimes called the membranous viscerocranium, which comprises the mandible and dermatocranial elements that are not part of the braincase.
Structure
In the human skull, the facial skeleton consists of fourteen bones in the face:
Inferior turbinal (2)
Lacrimal bones (2)
Mandible
Maxilla (2)
Nasal bones (2)
Palatine bones (2)
Vomer
Zygomatic bones (2)
Variations
Elements of the cartilaginous viscerocranium (i.e., splanchnocranial elements), such as the hyoid bone, are sometimes considered part of the facial skeleton. The ethmoid bone (or a part of it) and also the sphenoid bone are sometimes included, but otherwise considered part of the neurocranium. Because the maxillary bones are fused, they are often collectively listed as only one bone. The mandible is generally considered separately from the cranium.
Development
The facial skeleton is composed of dermal bone and derived from the neural crest cells (also responsible for the development of the neurocranium, teeth and adrenal medulla) or from the sclerotome, which derives from the somite block of the mesoderm. As with the neurocranium, in Chondrichthyes and other cartilaginous vertebrates, these bones are not replaced via endochondral ossification.
Variation in craniofacial form between humans is largely due to differing patterns of biological inheritance. Cross-analysis of osteological variables and genome-wide SNPs has identified specific genes that control this craniofacial development. Of these genes, DCHS2, RUNX2, GLI3, PAX1 and PAX3 were found to determine nasal morphology, whereas EDAR impacts chin protrusion.
Additional images
| Biology and health sciences | Human anatomy | Health |
15652764 | https://en.wikipedia.org/wiki/Non-linear%20least%20squares | Non-linear least squares | Non-linear least squares is the form of least squares analysis used to fit a set of m observations with a model that is non-linear in n unknown parameters (m ≥ n). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) the probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, (v) Box–Cox transformed regressors.
Theory
Consider a set of m data points and a curve (model function) that, in addition to the independent variable, also depends on n parameters, with m ≥ n. It is desired to find the vector of parameters such that the curve best fits the given data in the least squares sense, that is, the sum of squares
is minimized, where the residuals (in-sample prediction errors) are given by
for each of the m observations.
The minimum value of occurs when the gradient is zero. Since the model contains parameters there are gradient equations:
In a nonlinear system, the derivatives are functions of both the independent variable and the parameters, so in general these gradient equations do not have a closed solution. Instead, initial values must be chosen for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation,
Here, is an iteration number and the vector of increments, is known as the shift vector. At each iteration the model is linearized by approximation to a first-order Taylor polynomial expansion about
The Jacobian matrix, , is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next. Thus, in terms of the linearized model,
and the residuals are given by
Substituting these expressions into the gradient equations, they become
which, on rearrangement, become simultaneous linear equations, the normal equations
The normal equations are written in matrix notation as
These equations form the basis for the Gauss–Newton algorithm for a non-linear least squares problem.
Note the sign convention in the definition of the Jacobian matrix in terms of the derivatives. Formulas linear in the Jacobian may appear with the opposite sign in other articles or in the literature.
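The following sketch (not from the source; the exponential model, the synthetic data and the starting values are invented for illustration) carries out a single linearized step by forming the Jacobian and solving the normal equations:

```python
import numpy as np

# Toy model f(x; a, b) = a * exp(-b * x); data and starting values are invented.
x = np.linspace(0.0, 4.0, 20)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * np.random.default_rng(1).normal(size=x.size)

def model(beta, x):
    a, b = beta
    return a * np.exp(-b * x)

def jacobian(beta, x):
    a, b = beta
    # J_ij = d f(x_i, beta) / d beta_j  (note the sign convention discussed above)
    return np.column_stack([np.exp(-b * x), -a * x * np.exp(-b * x)])

beta = np.array([1.0, 1.0])                  # initial parameter estimates
r = y - model(beta, x)                       # residuals
J = jacobian(beta, x)
delta = np.linalg.solve(J.T @ J, J.T @ r)    # normal equations: (J^T J) delta = J^T r
beta = beta + delta
print(beta, np.sum((y - model(beta, x))**2)) # parameters and new sum of squares
```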
Extension by weights
When the observations are not equally reliable, a weighted sum of squares may be minimized,
Each element of the diagonal weight matrix should, ideally, be equal to the reciprocal of the error variance of the measurement.
The normal equations are then, more generally,
Geometrical interpretation
In linear least squares the objective function, , is a quadratic function of the parameters.
When there is only one parameter, the graph of the objective function with respect to that parameter will be a parabola. With two or more parameters, the contours of the objective function with respect to any pair of parameters will be concentric ellipses (assuming that the normal equations matrix is positive definite). The minimum parameter values are to be found at the centre of the ellipses. The geometry of the general objective function can be described as an elliptical paraboloid.
In NLLSQ the objective function is quadratic with respect to the parameters only in a region close to its minimum value, where the truncated Taylor series is a good approximation to the model.
The more the parameter values differ from their optimal values, the more the contours deviate from elliptical shape. A consequence of this is that initial parameter estimates should be as close as practicable to their (unknown!) optimal values. It also explains how divergence can come about as the Gauss–Newton algorithm is convergent only when the objective function is approximately quadratic in the parameters.
Computation
Initial parameter estimates
Some problems of ill-conditioning and divergence can be corrected by finding initial parameter estimates that are near to the optimal values. A good way to do this is by computer simulation. Both the observed and calculated data are displayed on a screen. The parameters of the model are adjusted by hand until the agreement between observed and calculated data is reasonably good. Although this will be a subjective judgment, it is sufficient to find a good starting point for the non-linear refinement. Initial parameter estimates can be created using transformations or linearizations. Better still evolutionary algorithms such as the Stochastic Funnel Algorithm can lead to the convex basin of attraction that surrounds the optimal parameter estimates. Hybrid algorithms that use randomization and elitism, followed by Newton methods have been shown to be useful and computationally efficient.
Solution
Any method among the ones described below can be applied to find a solution.
Convergence criteria
The common sense criterion for convergence is that the sum of squares does not increase from one iteration to the next. However this criterion is often difficult to implement in practice, for various reasons. A useful convergence criterion is
The value 0.0001 is somewhat arbitrary and may need to be changed. In particular it may need to be increased when experimental errors are large. An alternative criterion is
Again, the numerical value is somewhat arbitrary; 0.001 is equivalent to specifying that each parameter should be refined to 0.1% precision. This is reasonable when it is less than the largest relative standard deviation on the parameters.
Calculation of the Jacobian by numerical approximation
There are models for which it is either very difficult or even impossible to derive analytical expressions for the elements of the Jacobian. Then, the numerical approximation
is obtained by calculating the model function at the current parameter values and at values incremented by a small step. The increment size should be chosen so the numerical derivative is not subject to approximation error by being too large, or round-off error by being too small.
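A forward-difference approximation of the Jacobian might look like the following sketch (it assumes the `model(beta, x)` signature of the earlier toy example; the scaling of the increment is a common heuristic, not a prescription from the source):

```python
import numpy as np

def numerical_jacobian(model, beta, x, h=1e-6):
    """Forward-difference approximation of J_ij = d f(x_i, beta) / d beta_j."""
    f0 = model(beta, x)
    J = np.empty((x.size, beta.size))
    for j in range(beta.size):
        step = np.zeros_like(beta)
        step[j] = h * max(1.0, abs(beta[j]))   # scale the increment to the parameter
        J[:, j] = (model(beta + step, x) - f0) / step[j]
    return J
```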
Parameter errors, confidence limits, residuals etc.
Some information is given in the corresponding section on the Weighted least squares page.
Multiple minima
Multiple minima can occur in a variety of circumstances some of which are:
A parameter is raised to a power of two or more. For example, when fitting data to a Lorentzian curve described by a height, a position and a half-width at half height, there are two solutions for the half-width, equal in magnitude but opposite in sign, which give the same optimal value for the objective function.
Two parameters can be interchanged without changing the value of the model. A simple example is when the model contains the product of two parameters, since interchanging the two parameters leaves the product unchanged.
A parameter appears in a trigonometric function, which takes identical values whenever the parameter is shifted by a whole period. See Levenberg–Marquardt algorithm for an example.
Not all multiple minima have equal values of the objective function. False minima, also known as local minima, occur when the objective function value is greater than its value at the so-called global minimum. To be certain that the minimum found is the global minimum, the refinement should be started with widely differing initial values of the parameters. When the same minimum is found regardless of starting point, it is likely to be the global minimum.
When multiple minima exist there is an important consequence: the objective function will have a maximum value somewhere between two minima. The normal equations matrix is not positive definite at a maximum in the objective function, as the gradient is zero and no unique direction of descent exists. Refinement from a point (a set of parameter values) close to a maximum will be ill-conditioned and should be avoided as a starting point. For example, when fitting a Lorentzian the normal equations matrix is not positive definite when the half-width of the band is zero.
Transformation to a linear model
A non-linear model can sometimes be transformed into a linear one. Such an approximation is, for instance, often applicable in the vicinity of the best estimator, and it is one of the basic assumptions in most iterative minimization algorithms.
When a linear approximation is valid, the model can directly be used for inference with a generalized least squares, where the equations of the Linear Template Fit apply.
Another example of a linear approximation would be when the model is a simple exponential function,
which can be transformed into a linear model by taking logarithms.
Graphically this corresponds to working on a semi-log plot. The sum of squares becomes
This procedure should be avoided unless the errors are multiplicative and log-normally distributed because it can give misleading results. This comes from the fact that whatever the experimental errors on the data might be, the errors on the logarithms of the data are different. Therefore, when the transformed sum of squares is minimized, different results will be obtained both for the parameter values and their calculated standard deviations. However, with multiplicative errors that are log-normally distributed, this procedure gives unbiased and consistent parameter estimates.
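As a hedged illustration of the transformation (the data and the multiplicative, log-normally distributed noise are invented so that the log-transform is actually appropriate), an exponential model can be fitted by ordinary linear least squares on the logarithm of the data:

```python
import numpy as np

# y ≈ a * exp(-b * x) with multiplicative log-normal noise (invented data).
x = np.linspace(0.5, 4.0, 15)
rng = np.random.default_rng(2)
y = 2.5 * np.exp(-1.3 * x) * np.exp(0.05 * rng.normal(size=x.size))

# log y = log a - b * x is linear in (log a, b).
A = np.column_stack([np.ones_like(x), -x])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
print(a_hat, b_hat)
```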
Another example is furnished by Michaelis–Menten kinetics, used to determine two parameters, the maximum rate and the Michaelis constant:
The Lineweaver–Burk plot
of the reciprocal of the rate against the reciprocal of the substrate concentration is linear in the parameters, but very sensitive to data error and strongly biased toward fitting the data in a particular range of the independent variable.
Algorithms
Gauss–Newton method
The normal equations
may be solved for by Cholesky decomposition, as described in linear least squares. The parameters are updated iteratively
where k is an iteration number. While this method may be adequate for simple models, it will fail if divergence occurs. Therefore, protection against divergence is essential.
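A bare-bones sketch of the iteration (assuming the `model` and `jacobian` helpers of the earlier toy example; it uses the relative-decrease convergence test discussed above and, deliberately, no protection against divergence):

```python
import numpy as np

def gauss_newton(model, jacobian, beta, x, y, tol=1e-4, max_iter=50):
    """Basic Gauss-Newton iteration; the normal equations are solved by Cholesky."""
    S_old = np.sum((y - model(beta, x))**2)
    for _ in range(max_iter):
        r = y - model(beta, x)
        J = jacobian(beta, x)
        L = np.linalg.cholesky(J.T @ J)                       # J^T J = L L^T
        delta = np.linalg.solve(L.T, np.linalg.solve(L, J.T @ r))
        beta = beta + delta
        S_new = np.sum((y - model(beta, x))**2)
        if abs(S_old - S_new) < tol * S_old:                  # convergence criterion
            break
        S_old = S_new
    return beta
```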
Shift-cutting
If divergence occurs, a simple expedient is to reduce the length of the shift vector by a fraction, f
For example, the length of the shift vector may be successively halved until the new value of the objective function is less than its value at the last iteration. The fraction, f could be optimized by a line search. As each trial value of f requires the objective function to be re-calculated it is not worth optimizing its value too stringently.
When using shift-cutting, the direction of the shift vector remains unchanged. This limits the applicability of the method to situations where the direction of the shift vector is not very different from what it would be if the objective function were approximately quadratic in the parameters.
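A minimal sketch of shift-cutting (assuming the same toy `model`; the cap on the number of halvings is an arbitrary safeguard):

```python
import numpy as np

def cut_shift(model, beta, delta, x, y, max_halvings=30):
    """Halve the Gauss-Newton step until the sum of squares no longer increases."""
    S_old = np.sum((y - model(beta, x))**2)
    f = 1.0
    for _ in range(max_halvings):
        if np.sum((y - model(beta + f * delta, x))**2) <= S_old:
            break
        f *= 0.5            # the direction is unchanged, only the length is reduced
    return beta + f * delta
```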
Marquardt parameter
If divergence occurs and the direction of the shift vector is so far from its "ideal" direction that shift-cutting is not very effective, that is, the fraction, f required to avoid divergence is very small, the direction must be changed. This can be achieved by using the Marquardt parameter. In this method the normal equations are modified
where the Marquardt parameter multiplies I, an identity matrix. Increasing the value of this parameter has the effect of changing both the direction and the length of the shift vector: the shift vector is rotated towards the direction of steepest descent, and when the parameter becomes very large the shift vector becomes a small fraction of the steepest descent vector.
Various strategies have been proposed for the determination of the Marquardt parameter. As with shift-cutting, it is wasteful to optimize this parameter too stringently. Rather, once a value has been found that brings about a reduction in the value of the objective function, that value of the parameter is carried to the next iteration, reduced if possible, or increased if need be. When reducing the value of the Marquardt parameter, there is a cut-off value below which it is safe to set it to zero, that is, to continue with the unmodified Gauss–Newton method. The cut-off value may be set equal to the smallest singular value of the Jacobian. A bound for this value can be expressed in terms of the trace function.
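One Marquardt-modified step might be sketched as follows (the factors of ten used to raise and lower the parameter are a common but arbitrary choice, and the helper names are carried over from the earlier toy example):

```python
import numpy as np

def marquardt_step(model, jacobian, beta, x, y, lam):
    """One step with the modified normal equations (J^T J + lam*I) delta = J^T r."""
    r = y - model(beta, x)
    J = jacobian(beta, x)
    delta = np.linalg.solve(J.T @ J + lam * np.eye(beta.size), J.T @ r)
    S_old, S_new = np.sum(r**2), np.sum((y - model(beta + delta, x))**2)
    if S_new < S_old:
        return beta + delta, lam / 10.0    # accept; relax towards Gauss-Newton
    return beta, lam * 10.0                # reject; push towards steepest descent
```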
QR decomposition
The minimum in the sum of squares can be found by a method that does not involve forming the normal equations. The residuals with the linearized model can be written as
The Jacobian is subjected to an orthogonal decomposition; the QR decomposition will serve to illustrate the process.
where Q is an orthogonal matrix and R is a matrix partitioned into an upper block and a zero block; the upper block is upper triangular.
The residual vector is left-multiplied by the transpose of Q.
This has no effect on the sum of squares, because Q is orthogonal and orthogonal transformations preserve the Euclidean norm.
The minimum value of S is attained when the upper block is zero. Therefore, the shift vector is found by solving
These equations are easily solved as R is upper triangular.
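In code, the linearized step can be obtained from a thin QR factorization without ever forming the normal equations; this sketch uses NumPy and SciPy routines and is only one possible arrangement.

```python
import numpy as np
from scipy.linalg import solve_triangular

def shift_from_qr(J, r):
    """Shift vector minimizing ||J*dbeta - r||, via the thin QR decomposition of J."""
    Q, R = np.linalg.qr(J, mode="reduced")   # J = Q R, with R upper triangular
    return solve_triangular(R, Q.T @ r)      # back-substitution; no normal equations formed
```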
Singular value decomposition
A variant of the method of orthogonal decomposition involves singular value decomposition, in which R is diagonalized by further orthogonal transformations.
where is orthogonal, is a diagonal matrix of singular values and is the orthogonal matrix of the eigenvectors of or equivalently the right singular vectors of . In this case the shift vector is given by
The relative simplicity of this expression is very useful in theoretical analysis of non-linear least squares. The application of singular value decomposition is discussed in detail in Lawson and Hanson.
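A corresponding sketch using the singular value decomposition of the Jacobian; the `rcond` threshold for discarding small singular values is an illustrative assumption.

```python
import numpy as np

def shift_from_svd(J, r, rcond=1e-12):
    """Shift vector from the SVD of J; tiny singular values are dropped."""
    U, sigma, Vt = np.linalg.svd(J, full_matrices=False)            # J = U diag(sigma) V^T
    inv_sigma = np.where(sigma > rcond * sigma.max(), 1.0 / sigma, 0.0)
    return Vt.T @ (inv_sigma * (U.T @ r))                           # dbeta = V Sigma^+ U^T r
```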
Gradient methods
There are many examples in the scientific literature where different methods have been used for non-linear data-fitting problems.
Inclusion of second derivatives in the Taylor series expansion of the model function. This is Newton's method in optimization. The matrix H is known as the Hessian matrix. Although this model has better convergence properties near to the minimum, it is much worse when the parameters are far from their optimal values. Calculation of the Hessian adds to the complexity of the algorithm. This method is not in general use.
Davidon–Fletcher–Powell method. This method, a form of pseudo-Newton method, is similar to the one above but calculates the Hessian by successive approximation, to avoid having to use analytical expressions for the second derivatives.
Steepest descent. Although a reduction in the sum of squares is guaranteed when the shift vector points in the direction of steepest descent, this method often performs poorly. When the parameter values are far from optimal the direction of the steepest descent vector, which is normal (perpendicular) to the contours of the objective function, is very different from the direction of the Gauss–Newton vector. This makes divergence much more likely, especially as the minimum along the direction of steepest descent may correspond to a small fraction of the length of the steepest descent vector. When the contours of the objective function are very eccentric, due to there being high correlation between parameters, the steepest descent iterations, with shift-cutting, follow a slow, zig-zag trajectory towards the minimum.
Conjugate gradient search. This is an improved steepest descent based method with good theoretical convergence properties, although it can fail on finite-precision digital computers even when used on quadratic problems.
Direct search methods
Direct search methods depend on evaluations of the objective function at a variety of parameter values and do not use derivatives at all. They offer alternatives to the use of numerical derivatives in the Gauss–Newton method and gradient methods.
Alternating variable search. Each parameter is varied in turn by adding a fixed or variable increment to it and retaining the value that brings about a reduction in the sum of squares. The method is simple and effective when the parameters are not highly correlated. It has very poor convergence properties, but may be useful for finding initial parameter estimates.
Nelder–Mead (simplex) search. A simplex in this context is a polytope of n + 1 vertices in n dimensions; a triangle on a plane, a tetrahedron in three-dimensional space and so forth. Each vertex corresponds to a value of the objective function for a particular set of parameters. The shape and size of the simplex is adjusted by varying the parameters in such a way that the value of the objective function at the highest vertex always decreases. Although the sum of squares may initially decrease rapidly, it can converge to a nonstationary point on quasiconvex problems, as shown by an example due to M. J. D. Powell.
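For comparison, a minimal sketch of a derivative-free Nelder–Mead search on a sum-of-squares objective, using SciPy's implementation; the model, data, and starting point are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 4.0, 40)
y = 5.0 * np.exp(-0.8 * x)                    # noise-free synthetic data for brevity

def sum_sq(beta):
    a, b = beta
    return np.sum((y - a * np.exp(b * x)) ** 2)

result = minimize(sum_sq, x0=[1.0, -1.0], method="Nelder-Mead")
print(result.x)                               # should approach (5.0, -0.8)
```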
More detailed descriptions of these, and other, methods are available in Numerical Recipes, together with computer code in various languages.
| Mathematics | Statistics | null |
462030 | https://en.wikipedia.org/wiki/Alidade | Alidade | An alidade () (archaic forms include alhidade, alhidad, alidad) or a turning board is a device that allows one to sight a distant object and use the line of sight to perform a task. This task can be, for example, to triangulate a scale map on site using a plane table drawing of intersecting lines in the direction of the object from two or more points or to measure the angle and horizontal distance to the object from some reference point's polar measurement. Angles measured can be horizontal, vertical or in any chosen plane.
The alidade sighting ruler was originally a part of many types of scientific and astronomical instrument. At one time, some alidades, particularly using circular graduations as on astrolabes, were also called diopters. With modern technology, the name is applied to complete instruments such as the 'plane table alidade'.
Origins
The word derives from Arabic, in which it signifies the same device. The corresponding Greek and Latin terms are rendered "dioptra" and "fiducial line" respectively.
The earliest alidades consisted of a bar, rod or similar component with a vane on each end. Each vane (also called a pinnule or pinule) has a hole, slot or other indicator through which one can view a distant object. There may also be a pointer or pointers on the alidade to indicate a position on a scale. Alidades have been made of wood, ivory, brass and other materials.
Examples of old alidade types
The figure on the left displays drawings that attempt to show the general forms of various alidades that can be found on many antique instruments. Real alidades of these types could be much more decorative, revealing the maker's artistic talents as well as his technical skills. In the terminology of the time, the edge of an alidade at which one reads a scale or draws a line is called a fiducial edge.
Alidade B in the diagram shows a straight, flat bar with a vane at either end. No pointers are used. The vanes are not centred on the bar but offset so that the sight line coincides with the edge of the bar.
The vanes have a rectangular hole in each with a fine wire held vertically in the opening. To use the alidade, the user sights an object and lines it up with the wires in each vane. This type of alidade could be found on a plane table, graphometer or similar instrument.
Alidades A and C are similar to B but have a slit or circular hole without a wire. In the diagram, the openings are exaggerated in size to show the shape; they would be smaller in a real alidade, perhaps 2 mm or so in width. One can look through the openings and line the openings up with the object of interest in the distance. With a small opening, the error in sighting the object is small. However, if a dim object such as a star is observed through a small hole, the image is difficult to see.
This form is shown in the diagram as having pointers. These can be used to read off an angle on a scale that is engraved around the outer edge (or limb) of the instrument. Alidades of this form are found on astrolabes, mariner's astrolabes and similar instruments.
Alidade D has vanes without any openings. In this case, the object is viewed and the alidade is rotated until the two opposite vanes simultaneously eclipse the object. With skill, this sort of alidade can yield very precise measures. In this example, pointers are shown.
Alidade E is a representation of a notable design by Johannes Hevelius. Hevelius was following in Tycho Brahe's footsteps and cataloging star positions with high accuracy. He had access to the telescopic sights that were being used by astronomers in other countries; however, he chose to use naked-eye observations for his positional instruments. Due to the design of his instruments and the alidades, as well as his diligent practices, he was able to obtain very precise measurements.
Hevelius' design featured a pivot point with a vertical cylinder and a vane at the observer's end. The vane had two narrow slits that were spaced precisely the same distance apart as the diameter of the cylinder (in the diagram, the portion of the vane between the slits is removed for clarity; the left and right edges of the opening represent the slits). If the observer could sight a star on only one side of the cylinder, as seen in F, the alignment was off. By carefully moving the vane so that the star could just barely be seen on either side of the cylinder (G), the alidade was aligned with the position of the star. This technique could not be used with a nearby object. A star, being so far away as to exhibit no parallax to the naked eye, would be observable as a point source simultaneously on both sides.
Modern alidade types
The alidade is the part of a theodolite that rotates around the vertical axis, and that bears the horizontal axis around which the telescope (or visor, in early telescope-less instruments) turns up or down.
In a sextant or octant the alidade is the turnable arm carrying a mirror and an index to a graduated circle in a vertical plane. Today it is more commonly called an 'index arm'.
Alidade tables have also long been used in fire towers for sighting the bearing to a forest fire. A topographic map of the local area, with a suitable scale, is oriented, centered and permanently mounted on a leveled circular table surrounded by an arc calibrated to true north of the map and graduated in degrees (and fractions) of arc. Two vertical sight apertures are arranged opposite each other and can be rotated along the graduated arc of the horizontal table. To determine a bearing to a suspected fire, the user looks through the two sights and adjusts them until they are aligned with the source of the smoke (or an observed lightning strike to be monitored for smoke). See Osborne Fire Finder.
| Technology | Surveying tools | null |
462178 | https://en.wikipedia.org/wiki/Polyculture | Polyculture | In agriculture, polyculture is the practice of growing more than one crop species together in the same place at the same time, in contrast to monoculture, which had become the dominant approach in developed countries by 1950. Traditional examples include the intercropping of the Three Sisters, namely maize, beans, and squashes, by indigenous peoples of Central and North America, the rice-fish systems of Asia, and the complex mixed cropping systems of Nigeria.
Polyculture offers multiple advantages, including increasing total yield, as multiple crops can be harvested from the same land, along with reduced risk of crop failure. Resources are used more efficiently, requiring less inputs of fertilizers and pesticides, as interplanted crops suppress weeds, and legumes can fix nitrogen. The increased diversity tends to reduce losses from pests and diseases. Polyculture can yield multiple harvests per year, and can improve the physical, chemical and structural properties of soil, for example as taproots create pores for water and air. Improved soil cover reduces soil drying and erosion. Further, increased diversity of crops can provide people with a healthier diet.
Disadvantages include the skill required to manage polycultures; it can be difficult to mechanize when crops have differing needs for sowing depths, spacings, and times, may need different fertilizers and pesticides, and may be hard to harvest and to separate the crops. Finding suitable plant combinations may be challenging. Competition between species may reduce yields.
Annual polycultures include intercropping, where two or more crops are grown alongside each other; in horticulture, this is called companion planting. A variant is strip cropping where multiple rows of a crop form a strip, beside a strip of another crop. A cover crop involves planting a species that is not a crop, such as grasses and legumes, alongside the crop. The cover plants help reduce soil erosion, suppress weeds, retain water, and fix nitrogen. A living mulch, mainly used in horticulture, involves a second crop used to suppress weeds; a popular choice is marigold, as this has cash value and produces chemicals that repel pests. In mixed cropping, all the seeds are sown together, mimicking natural plant diversity; harvesting is simple, with all the crops being put to the same use.
Perennial polycultures can involve perennial varieties of annual crops, as with rice, sorghum, and pigeon pea; they can be grown alongside legumes such as alfalfa. Rice polycultures often involve animal crops such as fish and ducks. In agroforestry, some of the crops are trees; for example, coffee, which is shade-loving, is traditionally grown under shade trees. The rice-fish systems of Asia produce freshwater fish as well as rice, yielding a valuable extra crop; in Indonesia, a combination of rice, fish, ducks, and water fern produces a resilient and productive permaculture system.
Definitions
Polyculture is the growing of multiple crops together in the same place at the same time. It has traditionally been the most prevalent form of agriculture. Regions where polycultures form a substantial part of agriculture include the Himalayas, Eastern Asia, South America, and Africa. Other names for the practice include mixed cropping and intercropping. It may be contrasted with monoculture where one crop is grown in a field at a time. Both polycultures and monocultures may be subject to crop rotations or other changes with time (table).
Historical and modern uses
Americas: the Three Sisters
A well-known traditional example is the intercropping of maize, beans, and squash plants in the group called "the Three Sisters". In this combination, the maize provides a structure for the bean to grow on, the bean provides nitrogen for all of the plants, while the squash suppresses weeds on the ground. This crop mixture can be traced back some 3,000 years to civilizations in Mesoamerica. It illustrates how species in polycultures can sustain each other and minimize the need for human intervention. The majority of Latin American farmers continue to intercrop their maize, beans, and squash.
Asia: terrestrial and aquatic
In China, cereals have been intercropped with other plants for 1,000 years; the practice continues in the 21st century on some 28 to 34 million hectares. Polycultures involving fish and plants, have similarly been common in Eastern Asia for many centuries. In China, Japan, and Indonesia, traditional rice polycultures include rice-fish, rice-duck, and rice-fish-duck; modern aquaculture systems in the same region include shrimp and other shellfish grown in rice paddies.
Africa: cowpeas and complex mixed cropping
In Africa, polyculture has been practised for many centuries. This often involves legumes, especially the cowpea, alongside other crop plants. In Nigeria, complex mixed cropping can involve as many as 13 crops, with rice grown in between mounds holding cassava, cowpea, maize, peanut, pumpkin, Lagenaria, pigeon pea, melon, and a selection of yam species.
Impact of development
The introduction of pesticides, herbicides, and fertilizers made monoculture the predominant form of agriculture in developed countries from the 1950s. Polycultures declined greatly in popularity at that time in more economically developed countries, where they were deemed to yield less while requiring more labor. Polyculture farming has not disappeared entirely, and traditional polyculture systems continue to be an essential part of the food production system, especially in developing countries. Around 15% to 20% of the world's agriculture is estimated to rely on traditional polyculture systems. Due to climate change, polycultures are regaining popularity in more-developed countries as food producers seek to reduce their environmental and health impacts.
Advantages
Polycultures can benefit from multiple agroecological effects. Their principal advantages, according to Adamczewska-Sowińska and Sowiński 2020, are:
Diverse crops provide increased total yield, increased stability, and reduced risk of crop failure.
More efficient resource usage, including of soil minerals, nitrogen fixing, land, and labour.
Reduced inputs of fertilizers and pesticides.
Intercrops suppress weeds.
Reduced losses from pests, diseases, and weeds.
Multiple harvests per year are possible.
Physical, chemical, and structural properties of soil are improved, e.g. with combination of taproot and fibrous-rooted crops.
Improved soil cover reduces soil drying and erosion.
Better nutrition for people with varied crops.
Efficiency
A polyculture makes more efficient use of resources and produces more biomass overall than a monoculture. This is because of synergies between crops, and the creation of ecological niches for other organisms. However, the yield of each crop inside the polyculture is lower, not least because only part of the land area of the field is available to it.
Interactions between crops are complex, but mainly competitive, as each species struggles to obtain room to grow, sunlight, water, and soil nutrients. Many plants exude substances from their roots and other parts that inhibit other plants (allelopathy); some however are beneficial to other plants. Other interactions are beneficial, providing complementarity (as with the provision of nitrogen by legumes to other plants) or facilitation. Interactions vary widely by pairs of species; many recommendations have been made for suitable and unsuitable companion plants. For example, maize is well accompanied by amaranth, legumes, squashes, and sunflower, but not by cabbage, celery, or tomato. Cabbage, on the other hand, is well accompanied by beans, carrot, celery, marigold, and tomato, but not by onion or potato.
Improving the soil
Polycultures can benefit the soil by improving its fertility, its structure, and its biological activity. Soil fertility depends both on inorganic nutrients and on organic matter or humus. Deep-rooted companion crops such as legumes can improve soil structure: when they decay, they leave pores in the soil, improving drainage and allowing air into the soil. Some such as white lupin help cereals like wheat to take up phosphorus, a nutrient that often limits crop growth. Polyculture benefits soil microorganisms; in some forms, such as living mulches, it may also encourage earthworms (which in turn benefit soil structure), most likely by increasing the amount of organic matter in the soil.
Sustainability
Polyculture can reduce the release of pesticides and artificial fertilizers into the environment. Environmental impacts such as eutrophication of fresh water are greatly reduced.
Tillage, which removes essential microbes and nutrients from the soil, can be avoided in some forms of polyculture, especially permaculture. Land is used more productively.
Polyculture increases local biodiversity. Increasing crop diversity can increase pollination in nearby environments, as diverse plants attract a broader array of pollinators. This is an example of reconciliation ecology, accommodating biodiversity within human landscapes, and may form part of a biological pest control program.
Weed management
Both the density and the diversity of crops affect weed growth in polycultures. Having a greater density of plants reduces the available water, sunlight, and nutrient concentrations in the environment. Such a reduction is heightened with greater crop diversity as more potential resources are fully utilized. This level of competition makes polycultures particularly inhospitable to weeds. When they do grow, weeds can help polycultures, assisting in pest management by attracting natural enemies of pests. Further, they can act as hosts to arthropods that are beneficial to other plants in the polyculture.
Pest management
Pests are less predominant in polycultures than monocultures due to crop diversity. The reduced concentration of a target species in a polyculture attracts fewer pests specific to that crop. These specialized pests often have more difficulty locating host plants in a polyculture. Pests with more generalized preferences spend less time on a polyculture crop, resulting in lower yield loss (associational resistance). Because polycultures mimic naturally diverse ecosystems, general pests are less likely to distinguish between polycultures and the surrounding environment, and may have a smaller presence in the polyculture. Natural enemies or predators of pests are often attracted to the diversity of plants in a polyculture, helping to suppress pest populations.
Disease control
Plant diseases are less predominant in polycultures than monocultures. The disease-diversity hypothesis states that a greater diversity of plants leads to a decreased severity of disease. Because different plants are susceptible to different diseases, if a disease negatively impacts one crop, it will not necessarily spread to another and so the overall impact on yield is contained. However, diseases and pests do not necessarily have a decreased effect on a specific crop. If targeted by a specialized pest or disease, a crop in a polyculture will likely experience the same yield loss as its monoculture counterpart.
Human health
Many of the crops consumed today are calorie-rich crops that can lead to illnesses such as obesity, hypertension, and type II diabetes. Because they encourage plant diversity, polycultures can help increase diet diversity and improve people's nutrition by incorporating non-traditional foods into people's diets.
Disadvantages
Management
Polyculture's principal disadvantages, according to Adamczewska-Sowińska and Sowiński 2020, are:
Difficult to mechanize sowing and spraying of mixtures of crops which need different sowing depths, rates, and times, and different row spacings, as well as different fertilizers and pesticides, again at their own rates, times, and choice of substances. More manual labour may therefore be required.
Difficult to harvest and separate crops.
May not work well for cash crops and staple crops.
May make herbicide use difficult, again suiting one crop but not another.
Requires more management and farmer education.
Finding suitable combinations
The effects of competition can damage plants in certain polycultures. The diverse species chosen to grow together must have complementary needs. Due to the large number of cultivated plant species, finding and testing suitable combinations of plants is difficult; the alternative is to use an existing proven combination.
Practices
The kinds of plants that are grown, their spatial distribution, and the time that they spend growing together determine the specific type of polyculture that is implemented. There is no limit to the types of plants or animals that can be grown together to form a polyculture. The time overlap between plants can be asymmetrical as well, with one plant depending on the other for longer than is reciprocated, often due to differences in life spans.
Annual
Intercropping
When two or more crops are grown in complete spatial and temporal overlap with each other, the approach is described in agriculture as intercropping, and in horticulture as companion planting. Intercropping is particularly useful in plots with limited land availability. Intercropping can be mixed, in rows, in multi-row strips, or in a relay with crops interplanted at different times.
Strip cropping involves growing different plants in alternating strips, often in rotation. These may be ploughed along the contours of a steep hillside, and are typically considerably wider than a single row of a cereal crop. While strip cropping does not involve the complete intermixing of plant species, it provides many of the same benefits such as reducing soil erosion and aiding with nutrient cycling.
Legumes are among the most commonly intercropped crops, specifically in legume-cereal mixtures. Legumes fix atmospheric nitrogen into the soil so that it is available for consumption by other plants in a process known as nitrogen fixation. The presence of legumes consequently eliminates the need for man-made nitrogen fertilizers in intercrops.
Cover cropping
When a crop is grown alongside another plant that is not a crop, the combination is a form of cover cropping. If the non-crop plant is a weed, the combination is called a weedy culture. Grasses and legumes are the most common cover crops. Cover crops are greatly beneficial as they can help prevent soil erosion, physically suppress weeds, improve surface water retention, and, in the case of legumes, provide nitrogen compounds as well. Single-species cover cropping, in rotation with cash crops, increases agroecosystem diversity; a cover crop polyculture further increases that diversity, and there is evidence, using a range of cover crop treatments with or without legumes, that this increases ecosystem functionality, in terms of weed suppression, nitrogen retention, and above-ground biomass.
Living mulches
A living mulch is a polyculture involving a second crop, used mainly in horticulture. A main crop is grown to harvest; a second crop is sown beneath it to cover the soil, reducing erosion, and to form a green manure. Living mulches have been popular under orchard trees, and beneath perennial vegetables such as asparagus and rhubarb. It is considered suitable also for annual crops which grow for a long period before harvest and where the harvest is late in the year, such as aubergine, cabbage, celery, leek, maize, peppers, and tomato. Marigolds have a special place among weed-suppressing living mulches as they produce thiophenes which repel pests such as nematodes, and provide a second cash crop.
Care is required to minimise competition between the living mulch crop and the main crop. Indirect methods include selecting sowing dates or applying water and fertilizer directly to the main crop, or by choosing fast-growing varieties for the main crop. Direct methods include mowing the living mulch to inhibit its root growth, or applying a sublethal amount of herbicide to the living mulch.
For arable use, cereals such as wheat and barley, or broadleaved crops like rapeseed, can grow with living mulches of clover, vetch, or other legumes. However, since the yield of the main crop is reduced, this approach is not widely adopted by cereal farmers. In particular, living mulches like clover compete with young seedlings of the main crop, and need to be suppressed appropriately.
Mixed cropping
Mixed cropping differs from intercropping in having all the seeds mixed and sown together. The result mimics natural plant diversity. Handling is simple, but there can be competition between the crops, and any pesticide or fertilizer applied goes on all the crops. Harvesting too is a single operation, all the crops then being put to the same use.
Perennial
Agroforestry
In many Latin American countries, a popular form of polyculture is agroforestry, where trees and crops are grown together. Trees provide shade for the crops alongside organic matter and nutrients when they shed their leaves or fruits. The elaborate root systems of trees also help prevent soil erosion and increase the presence of microbes in the soil. In addition to benefiting crops, trees act as commodities harvested for paper, medicine, timber, and firewood.
Coffee is a shade-loving crop, and is traditionally shade-grown. In India, it is often grown under a natural forest canopy, replacing the shrub layer. A different polyculture system is used for coffee in Mexico, where the Coffea bushes are grown under leguminous trees in the genus Inga.
Varieties of annual arable crops
Perennial crop varieties of traditional annual arable crops can increase sustainability. They require less tillage and often have longer roots, reducing soil erosion and tolerating drought. Such varieties are being developed for rice, wheat, sorghum, pigeon pea, barley, and sunflowers. These can be combined with a leguminous cover crop such as alfalfa to fix nitrogen, reducing fertilizer inputs.
Rice, fish, and duck systems
In South-East Asia and China, rice-fish systems on rice paddies have raised freshwater fish as well as rice, producing a valuable additional crop and reducing eutrophication of neighbouring rivers.
Rice-duck farming is practised across tropical and subtropical Asia. A variant in Indonesia combines rice, fish, ducks and water fern for a resilient and productive permaculture system; the ducks eat the weeds that would otherwise limit rice growth, reducing labour and herbicides; the water fern fixes nitrogen; and the duck manure and fish manure reduce the need for fertilizer.
Integrated aquaculture
Integrated aquaculture is a form of aquaculture in which cultures of fish or shrimp are grown together with seaweed, shellfish, or micro-algae. Mono-species aquaculture poses problems for farmers and the environment. The harvesting of seaweed crops in mono-species aquaculture releases nitrates into the water and can lead to eutrophication. In seafood mono-species aquaculture, the greatest problem is the high cost of feed, more than half of which goes to waste, causing nitrogen release and eutrophication or algal blooms. Technological fixes such as bacterial bio-filters have proven costly. Integrated aquaculture uses plants both as food for the sea animals and for water filtration, absorbing nitrates and carbon dioxide. This reduces the need for chemical inputs. Plants such as seaweed grown alongside seafood have commercial value. Regenerative ocean farming sequesters carbon, growing a mix of seaweeds and shellfish for harvest, while helping to regenerate and restore local habitats like reef ecosystems.
| Technology | Agriculture_2 | null |
462396 | https://en.wikipedia.org/wiki/Baryogenesis | Baryogenesis | In physical cosmology, baryogenesis (also known as baryosynthesis) is the physical process that is hypothesized to have taken place during the early universe to produce baryonic asymmetry, i.e. the imbalance of matter (baryons) and antimatter (antibaryons) in the observed universe.
One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. A number of theoretical mechanisms are proposed to account for this discrepancy, namely identifying conditions that favour symmetry breaking and the creation of normal matter (as opposed to antimatter). This imbalance has to be exceptionally small, on the order of 1 in every (≈) particles a small fraction of a second after the Big Bang. After most of the matter and antimatter was annihilated, what remained was all the baryonic matter in the current universe, along with a much greater number of bosons. Experiments reported in 2010 at Fermilab, however, seem to show that this imbalance is much greater than previously assumed. These experiments involved a series of particle collisions and found that the amount of generated matter was approximately 1% larger than the amount of generated antimatter. The reason for this discrepancy is not yet known.
Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons or massive Higgs bosons (). The rate at which these events occur is governed largely by the mass of the intermediate or particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay, which has not been observed. Therefore, the imbalance between matter and antimatter remains a mystery.
Baryogenesis theories are based on different descriptions of the interaction between fundamental particles. Two main theories are electroweak baryogenesis (Standard Model), which would occur during the electroweak phase transition, and the GUT baryogenesis, which would occur during or shortly after the grand unification epoch. Quantum field theory and statistical physics are used to describe such possible mechanisms.
Baryogenesis is followed by primordial nucleosynthesis, when atomic nuclei began to form.
Background
The majority of ordinary matter in the universe is found in atomic nuclei, which are made of neutrons and protons. These nucleons are made up of smaller particles called quarks, and antimatter equivalents for each are predicted to exist by the Dirac equation in 1928. Since then, each kind of antiquark has been experimentally verified. Hypotheses investigating the first few instants of the universe predict a composition with an almost equal number of quarks and antiquarks. Once the universe expanded and cooled to a critical temperature of approximately , quarks combined into normal matter and antimatter and proceeded to annihilate up to the small initial asymmetry of about one part in five billion, leaving the matter around us. Free and separate individual quarks and antiquarks have never been observed in experiments—quarks and antiquarks are always found in groups of three (baryons), or bound in quark–antiquark pairs (mesons). Likewise, there is no experimental evidence that there are any significant concentrations of antimatter in the observable universe.
There are two main interpretations for this disparity: either the universe began with a small preference for matter (total baryonic number of the universe different from zero), or the universe was originally perfectly symmetric, but somehow a set of phenomena contributed to a small imbalance in favour of matter over time. The second point of view is preferred, although there is no clear experimental evidence indicating either of them to be the correct one.
GUT baryogenesis under Sakharov conditions
In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates. These conditions were inspired by the recent discoveries of the cosmic microwave background and CP-violation in the neutral kaon system. The three necessary "Sakharov conditions" are:
Baryon number violation.
C-symmetry and CP-symmetry violation.
Interactions out of thermal equilibrium.
Baryon number violation is a necessary condition to produce an excess of baryons over anti-baryons. But C-symmetry violation is also needed so that the interactions which produce more baryons than anti-baryons will not be counterbalanced by interactions which produce more anti-baryons than baryons. CP-symmetry violation is similarly required because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons. Finally, the interactions must be out of thermal equilibrium, since otherwise CPT symmetry would assure compensation between processes increasing and decreasing the baryon number.
Currently, there is no experimental evidence of particle interactions where the conservation of baryon number is broken perturbatively: this would appear to suggest that all observed particle reactions have equal baryon number before and after. Mathematically, the commutator of the baryon number quantum operator with the (perturbative) Standard Model Hamiltonian is zero: [B, H] = BH − HB = 0. However, the Standard Model is known to violate the conservation of baryon number only non-perturbatively: a global U(1) anomaly. To account for baryon violation in baryogenesis, such events (including proton decay) can occur in Grand Unification Theories (GUTs) and supersymmetric (SUSY) models via hypothetical massive bosons such as the X boson.
The second condition – violation of CP-symmetry – was discovered in 1964 (direct CP-violation, that is violation of CP-symmetry in a decay process, was discovered later, in 1999). Due to CPT symmetry, violation of CP-symmetry demands violation of time inversion symmetry, or T-symmetry.
In the out-of-equilibrium decay scenario, the last condition states that the rate of a reaction which generates baryon-asymmetry must be less than the rate of expansion of the universe. In this situation the particles and their corresponding antiparticles do not achieve thermal equilibrium due to rapid expansion decreasing the occurrence of pair-annihilation.
In the Standard Model
The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetry. One excess quark per billion quark-antiquark pairs is required in the early universe in order to provide all the observed matter in the universe. This insufficiency has not yet been explained, theoretically or otherwise.
Baryogenesis within the Standard Model requires the electroweak symmetry breaking to be a first-order cosmological phase transition, since otherwise sphalerons wipe off any baryon asymmetry that happened up to the phase transition. Beyond this, the remaining amount of baryon non-conserving interactions is negligible.
The phase transition domain wall breaks the P-symmetry spontaneously, allowing for CP-symmetry violating interactions to break C-symmetry on both its sides. Quarks tend to accumulate on the broken phase side of the domain wall, while anti-quarks tend to accumulate on its unbroken phase side. Due to CP-symmetry violating electroweak interactions, some amplitudes involving quarks are not equal to the corresponding amplitudes involving anti-quarks, but rather have opposite phase (see CKM matrix and Kaon); since time reversal takes an amplitude to its complex conjugate, CPT-symmetry is conserved in this entire process.
Though some of their amplitudes have opposite phases, both quarks and anti-quarks have positive energy, and hence acquire the same phase as they move in space-time. This phase also depends on their mass, which is identical but depends both on flavor and on the Higgs VEV which changes along the domain wall. Thus certain sums of amplitudes for quarks have different absolute values compared to those of anti-quarks. In all, quarks and anti-quarks may have different reflection and transmission probabilities through the domain wall, and it turns out that more quarks coming from the unbroken phase are transmitted compared to anti-quarks.
Thus there is a net baryonic flux through the domain wall. Due to sphaleron transitions, which are abundant in the unbroken phase, the net anti-baryonic content of the unbroken phase is wiped off as anti-baryons are transformed into leptons. However, sphalerons are rare enough in the broken phase as not to wipe off the excess of baryons there. In total, there is net creation of baryons (as well as leptons).
In this scenario, non-perturbative electroweak interactions (i.e. the sphaleron) are responsible for the B-violation, the perturbative electroweak Lagrangian is responsible for the CP-violation, and the domain wall is responsible for the lack of thermal equilibrium and the P-violation; together with the CP-violation it also creates a C-violation in each of its sides.
Matter content in the universe
The central question to baryogenesis is what causes the preference for matter over antimatter in the universe, as well as the magnitude of this asymmetry. An important quantifier is the asymmetry parameter, given by
where and refer to the number density of baryons and antibaryons respectively and is the number density of cosmic background radiation photons.
According to the Big Bang model, matter decoupled from the cosmic background radiation (CBR) at a temperature of roughly kelvin, corresponding to an average kinetic energy of / () = . After the decoupling, the total number of CBR photons remains constant. Therefore, due to space-time expansion, the photon density decreases. The photon density at equilibrium temperature is given by
with the Boltzmann constant, the Planck constant divided by 2π, the speed of light in vacuum, and Apéry's constant as the constants involved. At the current CBR photon temperature of about 2.7 K, this corresponds to a photon density of around 411 CBR photons per cubic centimeter.
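As a check on the quoted figure, the following sketch evaluates the standard black-body photon number density, n_γ = (2ζ(3)/π²)(k_BT/ħc)³, at the present CMB temperature of 2.725 K. The formula is the conventional expression built from the constants named above; it is assumed here because the original equation is not reproduced in this text.

```python
import numpy as np
from scipy.constants import k as k_B, hbar, c
from scipy.special import zeta

T = 2.725                                                        # present CMB temperature, kelvin
n_gamma = 2.0 * zeta(3) / np.pi**2 * (k_B * T / (hbar * c))**3   # photons per cubic metre
print(n_gamma * 1e-6)                                            # per cubic centimetre; roughly 411
```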
Therefore, the asymmetry parameter , as defined above, is not the "best" parameter. Instead, the preferred asymmetry parameter uses the entropy density ,
because the entropy density of the universe remained reasonably constant throughout most of its evolution. The entropy density is
with and as the pressure and density from the energy density tensor , and as the effective number of degrees of freedom for "massless" particles at temperature (in so far as holds),
for bosons and fermions with and degrees of freedom at temperatures and respectively. At the present epoch, .
Ongoing research efforts
Ties to dark matter
A possible explanation for the cause of baryogenesis is the decay reaction of B-mesogenesis. This phenomenon suggests that in the early universe, particles such as the B-meson decay into a visible Standard Model baryon as well as a dark antibaryon that is invisible to current observation techniques. The process begins by assuming a massive, long-lived, scalar particle that exists in the early universe before Big Bang nucleosynthesis. The exact behavior of is as yet unknown, but it is assumed to decay into b quarks and antiquarks in conditions outside of thermal equilibrium, thus satisfying one Sakharov condition. These b quarks form into B-mesons, which immediately hadronize into oscillating CP-violating states, thus satisfying another Sakharov condition. These oscillating mesons then decay down into the baryon-dark antibaryon pair previously mentioned, , where is the parent B-meson, is the dark antibaryon, is the visible baryon, and is any extra light meson daughters required to satisfy other conservation laws in this particle decay. If this process occurs fast enough, the CP-violation effect gets carried over to the dark matter sector. However, this contradicts (or at least challenges) the last Sakharov condition, since the expected matter preference in the visible universe is balanced by a new antimatter preference in the dark matter of the universe and total baryon number is conserved.
B-mesogenesis results in missing energy between the initial and final states of the decay process, which, if recorded, could provide experimental evidence for dark matter. Particle laboratories equipped with B-meson factories such as Belle and BaBar are extremely sensitive to B-meson decays involving missing energy and currently have the capability to detect the channel. The LHC is also capable of searching for this interaction since it produces several orders of magnitude more B-mesons than Belle or BaBar, but there are more challenges from the decreased control over B-meson initial energy in the accelerator.
| Physical sciences | Physical cosmology | Astronomy |
462534 | https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra%20equations | Lotka–Volterra equations | The Lotka–Volterra equations, also known as the Lotka–Volterra predator–prey model, are a pair of first-order nonlinear differential equations, frequently used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey. The populations change through time according to the pair of equations:
where
the variable is the population density of prey (for example, the number of rabbits per square kilometre);
the variable is the population density of some predator (for example, the number of foxes per square kilometre);
and represent the instantaneous growth rates of the two populations;
represents time;
The prey's parameters, and , describe, respectively, the maximum prey per capita growth rate, and the effect of the presence of predators on the prey death rate.
The predator's parameters, , , respectively describe the predator's per capita death rate, and the effect of the presence of prey on the predator's growth rate.
All parameters are positive and real.
The solution of the differential equations is deterministic and continuous. This, in turn, implies that the generations of both the predator and prey are continually overlapping.
The Lotka–Volterra system of equations is an example of a Kolmogorov population model (not to be confused with the better known Kolmogorov equations), which is a more general framework that can model the dynamics of ecological systems with predator–prey interactions, competition, disease, and mutualism.
Biological interpretation and model assumptions
The prey are assumed to have an unlimited food supply and to reproduce exponentially, unless subject to predation; this exponential growth is represented in the equation above by the term . The rate of predation on the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by . If either or is zero, then there can be no predation. With these two terms the prey equation above can be interpreted as follows: the rate of change of the prey's population is given by its own growth rate minus the rate at which it is preyed upon.
The term represents the growth of the predator population. (Note the similarity to the predation rate; however, a different constant is used, as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey). The term represents the loss rate of the predators due to either natural death or emigration; it leads to an exponential decay in the absence of prey. Hence the equation expresses that the rate of change of the predator's population depends upon the rate at which it consumes prey, minus its intrinsic death rate.
The Lotka–Volterra predator-prey model makes a number of assumptions about the environment and biology of the predator and prey populations:
The prey population finds ample food at all times.
The food supply of the predator population depends entirely on the size of the prey population.
The rate of change of population is proportional to its size.
During the process, the environment does not change in favour of one species, and genetic adaptation is inconsequential.
Predators have limitless appetite.
Both populations can be described by a single variable. This amounts to assuming that the populations do not have a spatial or age distribution that contributes to the dynamics.
Biological relevance of the model
None of the assumptions above are likely to hold for natural populations. Nevertheless, the Lotka–Volterra model shows two important properties of predator and prey populations and these properties often extend to variants of the model in which these assumptions are relaxed:
Firstly, the dynamics of predator and prey populations have a tendency to oscillate. Fluctuating numbers of predators and prey have been observed in natural populations, such as the lynx and snowshoe hare data of the Hudson's Bay Company and the moose and wolf populations in Isle Royale National Park.
Secondly, the population equilibrium of this model has the property that the prey equilibrium density (given by γ/δ) depends on the predator's parameters, and the predator equilibrium density (given by α/β) on the prey's parameters. This has as a consequence that an increase in, for instance, the prey growth rate, α, leads to an increase in the predator equilibrium density, but not the prey equilibrium density. Making the environment better for the prey benefits the predator, not the prey (this is related to the paradox of the pesticides and to the paradox of enrichment). A demonstration of this phenomenon is provided by the increased percentage of predatory fish caught during the years of World War I (1914–18), when the prey growth rate increased owing to reduced fishing effort.
A further example is provided by the experimental iron fertilization of the ocean. In several experiments large amounts of iron salts were dissolved in the ocean. The expectation was that iron, which is a limiting nutrient for phytoplankton, would boost growth of phytoplankton and that it would sequester carbon dioxide from the atmosphere. The addition of iron typically leads to a short bloom in phytoplankton, which is quickly consumed by other organisms (such as small fish or zooplankton) and limits the effect of enrichment mainly to increased predator density, which in turn limits the carbon sequestration. This is as predicted by the equilibrium population densities of the Lotka–Volterra predator-prey model, and is a feature that carries over to more elaborate models in which the restrictive assumptions of the simple model are relaxed.
Applications to economics and marketing
The Lotka–Volterra model has additional applications to areas such as economics and marketing. It can be used to describe the dynamics in a market with several competitors, complementary platforms and products, a sharing economy, and more. There are situations in which one of the competitors drives the other competitors out of the market and other situations in which the market reaches an equilibrium where each firm stabilizes on its market share. It is also possible to describe situations in which there are cyclical changes in the industry or chaotic situations with no equilibrium and changes are frequent and unpredictable.
History
The Lotka–Volterra predator–prey model was initially proposed by Alfred J. Lotka in the theory of autocatalytic chemical reactions in 1910. This was effectively the logistic equation, originally derived by Pierre François Verhulst. In 1920 Lotka extended the model, via Andrey Kolmogorov, to "organic systems" using a plant species and a herbivorous animal species as an example and in 1925 he used the equations to analyse predator–prey interactions in his book on biomathematics. The same set of equations was published in 1926 by Vito Volterra, a mathematician and physicist, who had become interested in mathematical biology. Volterra's enquiry was inspired by his interactions with the marine biologist Umberto D'Ancona, who was courting his daughter at the time and later was to become his son-in-law. D'Ancona studied the fish catches in the Adriatic Sea and had noticed that the percentage of predatory fish caught had increased during the years of World War I (1914–18). This puzzled him, as the fishing effort had been very much reduced during the war years and, as prey fish were the preferred catch, one would intuitively expect the percentage of prey fish caught to increase. Volterra developed his model to explain D'Ancona's observation and did this independently from Alfred Lotka. He did credit Lotka's earlier work in his publication, after which the model has become known as the "Lotka-Volterra model".
The model was later extended to include density-dependent prey growth and a functional response of the form developed by C. S. Holling; a model that has become known as the Rosenzweig–MacArthur model. Both the Lotka–Volterra and Rosenzweig–MacArthur models have been used to explain the dynamics of natural populations of predators and prey.
In the late 1980s, an alternative to the Lotka–Volterra predator–prey model (and its common-prey-dependent generalizations) emerged, the ratio dependent or Arditi–Ginzburg model. The validity of prey- or ratio-dependent models has been much debated.
The Lotka–Volterra equations have a long history of use in economic theory; their initial application is commonly credited to Richard Goodwin in 1965 or 1967.
Solutions to the equations
The equations have periodic solutions. These solutions do not have a simple expression in terms of the usual trigonometric functions, although they are quite tractable.
If none of the non-negative parameters , , , vanishes, three can be absorbed into the normalization of variables to leave only one parameter: since the first equation is homogeneous in , and the second one in , the parameters β/α and δ/γ are absorbable in the normalizations of and respectively, and into the normalization of , so that only remains arbitrary. It is the only parameter affecting the nature of the solutions.
A linearization of the equations yields a solution similar to simple harmonic motion with the population of predators trailing that of prey by 90° in the cycle.
A simple example
Suppose there are two species of animals, a rabbit (prey) and a fox (predator). If the initial densities are 10 rabbits and 10 foxes per square kilometre, one can plot the progression of the two species over time, given parameters such that the growth and death rates of the rabbits are 1.1 and 0.4 while those of the foxes are 0.1 and 0.4 respectively. The choice of time interval is arbitrary.
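A minimal numerical sketch of this example, assuming the quoted rates map onto the usual symbols as α = 1.1, β = 0.4 for the rabbits and δ = 0.1, γ = 0.4 for the foxes; SciPy's general-purpose integrator stands in for whatever method produced the original plots.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4   # rabbit growth/death, fox growth/death

def lotka_volterra(t, z):
    x, y = z                                     # x: rabbits (prey), y: foxes (predator)
    return [alpha * x - beta * x * y,
            delta * x * y - gamma * y]

sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 10.0], dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 50.0, 1000)
x, y = sol.sol(t)                                # oscillating rabbit and fox densities
```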
One may also plot solutions parametrically as orbits in phase space, without representing time, but with one axis representing the number of prey and the other axis representing the densities of predators for all times.
This corresponds to eliminating time from the two differential equations above to produce a single differential equation
relating the variables x (predator) and y (prey). The solutions of this equation are closed curves. It is amenable to separation of variables: integrating
yields the implicit relationship
where V is a constant quantity depending on the initial conditions and conserved on each curve.
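One standard way of writing this conserved quantity is V = δx − γ ln x + βy − α ln y (any monotonic transformation of it would serve equally well). The sketch below, continuing from the variables of the previous snippet, checks numerically that V stays essentially constant along the computed orbit; the default parameter values are the illustrative rabbit/fox ones.

```python
import numpy as np

def lv_invariant(x, y, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    """One standard form of the conserved quantity for the Lotka-Volterra system."""
    return delta * x - gamma * np.log(x) + beta * y - alpha * np.log(y)

# Continuing from the trajectory (x, y) computed in the previous sketch:
# V = lv_invariant(x, y)
# print(V.max() - V.min())   # close to zero, up to the integrator's tolerance
```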
An aside: These graphs illustrate a serious potential limitation in the application as a biological model: for this specific choice of parameters, in each cycle, the rabbit population is reduced to extremely low numbers, yet recovers (while the fox population remains sizeable at the lowest rabbit density). In real-life situations, however, chance fluctuations of the discrete numbers of individuals might cause the rabbits to actually go extinct, and, by consequence, the foxes as well. This modelling problem has been called the "atto-fox problem", an atto-fox being a notional 10⁻¹⁸ of a fox. A density of 10⁻¹⁸ foxes per square kilometre equates to an average of approximately 5×10⁻¹⁰ foxes on the surface of the earth, which in practical terms means that foxes are extinct.
Hamiltonian structure of the system
Since the quantity is conserved over time, it plays the role of a Hamiltonian function of the system. To see this we can define the Poisson bracket as follows
. Then Hamilton's equations read
The variables and are not canonical, since . However, using the transformations and we arrive at a canonical form of Hamilton's equations featuring the Hamiltonian :
The Poisson bracket for the canonical variables now takes the standard form .
Phase-space plot of a further example
A less extreme example covers:
For a particular choice of the parameters α, β, γ and δ, the prey and predator numbers quantify thousands each. Circles represent prey and predator initial conditions from x = y = 0.9 to 1.8, in steps of 0.1. The fixed point is at (1, 1/2).
Dynamics of the system
In the model system, the predators thrive when prey is plentiful but, ultimately, outstrip their food supply and decline. As the predator population is low, the prey population will increase again. These dynamics continue in a population cycle of growth and decline.
Population equilibrium
Population equilibrium occurs in the model when neither of the population levels is changing, i.e. when both of the derivatives are equal to 0:
The above system of equations yields two solutions:
and
Hence, there are two equilibria.
The first solution effectively represents the extinction of both species. If both populations are at 0, then they will continue to be so indefinitely. The second solution represents a fixed point at which both populations sustain their current, non-zero numbers, and, in the simplified model, do so indefinitely. The levels of population at which this equilibrium is achieved depend on the chosen values of the parameters α, β, γ, and δ.
Stability of the fixed points
The stability of the fixed point at the origin can be determined by performing a linearization using partial derivatives.
The Jacobian matrix of the predator–prey model is
and is known as the community matrix.
First fixed point (extinction)
When evaluated at the steady state of (0, 0), the Jacobian matrix becomes

J(0, 0) = [ [α, 0], [0, −γ] ].

The eigenvalues of this matrix are

λ₁ = α and λ₂ = −γ.

In the model α and γ are always greater than zero, and as such the sign of the eigenvalues above will always differ. Hence the fixed point at the origin is a saddle point.
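A symbolic check of this saddle-point conclusion, added here as an illustrative sketch:

```python
# Jacobian of (alpha*x - beta*x*y, delta*x*y - gamma*y) evaluated at the origin;
# its eigenvalues alpha and -gamma have opposite signs, so (0, 0) is a saddle.
import sympy as sp

x, y = sp.symbols('x y')
a, b, g, d = sp.symbols('alpha beta gamma delta', positive=True)
F = sp.Matrix([a*x - b*x*y, d*x*y - g*y])
J = F.jacobian([x, y])
print(J.subs({x: 0, y: 0}).eigenvals())   # {alpha: 1, -gamma: 1}
```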
The instability of this fixed point is of significance. If it were stable, non-zero populations might be attracted towards it, and as such the dynamics of the system might lead towards the extinction of both species for many cases of initial population levels. However, as the fixed point at the origin is a saddle point, and hence unstable, it follows that the extinction of both species is difficult in the model. (In fact, this could only occur if the prey were artificially completely eradicated, causing the predators to die of starvation. If the predators were eradicated, the prey population would grow without bound in this simple model.) The populations of prey and predator can get infinitesimally close to zero and still recover.
Second fixed point (oscillations)
Evaluating J at the second fixed point (γ/δ, α/β) leads to

J(γ/δ, α/β) = [ [0, −βγ/δ], [αδ/β, 0] ].

The eigenvalues of this matrix are

λ₁ = +i√(αγ) and λ₂ = −i√(αγ).

As the eigenvalues are both purely imaginary and conjugate to each other, this fixed point must either be a center for closed orbits in the local vicinity or an attractive or repulsive spiral. In conservative systems, there must be closed orbits in the local vicinity of fixed points that exist at the minima and maxima of the conserved quantity. The conserved quantity is derived above to be V = δx − γ ln(x) + βy − α ln(y) on orbits. Thus orbits about the fixed point are closed and elliptic, so the solutions are periodic, oscillating on a small ellipse around the fixed point, with a frequency ω = √(αγ) and period T = 2π/√(αγ).

As illustrated in the circulating oscillations in the figure above, the level curves are closed orbits surrounding the fixed point: the levels of the predator and prey populations cycle and oscillate without damping around the fixed point with frequency ω = √(αγ).
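A numerical check of the purely imaginary eigenvalues and the small-amplitude period, using the illustrative parameters of the earlier phase-space example (an added sketch, not from the source):

```python
# Eigenvalues of the community matrix at the coexistence point (gamma/delta, alpha/beta).
import numpy as np

alpha, beta, gamma, delta = 2/3, 4/3, 1.0, 1.0
xs, ys = gamma / delta, alpha / beta            # fixed point (1, 0.5)
J = np.array([[alpha - beta*ys, -beta*xs],
              [delta*ys,         delta*xs - gamma]])
print(np.linalg.eigvals(J))                     # purely imaginary pair, +/- i*sqrt(alpha*gamma)
print(2*np.pi / np.sqrt(alpha*gamma))           # small-amplitude period, about 7.7 time units
```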
The value of the constant of motion V, or, equivalently, K = exp(−V),

K = y^α e^(−βy) x^γ e^(−δx),

can be found for the closed orbits near the fixed point.

Increasing K moves a closed orbit closer to the fixed point. The largest value of the constant K is obtained by solving the optimization problem

y^α e^(−βy) x^γ e^(−δx) → max, for x, y > 0.

The maximal value of K is thus attained at the stationary (fixed) point (γ/δ, α/β) and amounts to

K* = (α/(βe))^α (γ/(δe))^γ,

where e is Euler's number.
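The location and value of this maximum can be confirmed symbolically by maximizing log K instead of K (the two have the same maximizer); the sketch below is added here for illustration.

```python
# Maximize log K = gamma*ln(x) - delta*x + alpha*ln(y) - beta*y over x, y > 0.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
a, b, g, d = sp.symbols('alpha beta gamma delta', positive=True)
logK = g*sp.log(x) - d*x + a*sp.log(y) - b*y
crit = sp.solve([sp.diff(logK, x), sp.diff(logK, y)], [x, y], dict=True)[0]
print(crit)                                # {x: gamma/delta, y: alpha/beta}
print(sp.simplify(sp.exp(logK.subs(crit))))
# (gamma/delta)**gamma * (alpha/beta)**alpha * exp(-alpha - gamma),
# i.e. equivalent to (alpha/(beta*e))**alpha * (gamma/(delta*e))**gamma
```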
| Biology and health sciences | Ecology | Biology |
462707 | https://en.wikipedia.org/wiki/Brush | Brush | A brush is a common tool with bristles, wire or other filaments. It generally consists of a handle or block to which filaments are affixed in either a parallel or perpendicular orientation, depending on the way the brush is to be gripped during use. The material of both the block and bristles or filaments is chosen to withstand hazards of its intended use, such as corrosive chemicals, heat or abrasion. It is used for cleaning, grooming hair, make up, painting, surface finishing and for many other purposes. It is one of the most basic and versatile tools in use today, and the average household may contain several dozen varieties.
History
When houses were first inhabited, homeowners used branches taken from shrubs to sweep up dirt; these were effectively the first brushes. In 1859, the first brush factory in America was set up in New York.
Manufacture
A common way of setting the bristles (brush filaments) in the brush is the staple- or anchor-set method, in which the filament is folded around a staple at its middle and forced into a hole with a special driver, where it is held by the pressure of the filament against the walls of the hole and by the portions of the staple driven into the bottom of the hole. The staple can be replaced with an anchor, a piece of rectangular-profile wire that is anchored to the wall of the hole, as in most toothbrushes. Another way to attach the bristles is found in fused brushes, in which, instead of being inserted into a hole, a plastic fibre is welded to another plastic surface, giving the option of using different diameters of bristle in the same brush.
Configurations include twisted-in wire (e.g. bottle brushes), cylinders and disks (with bristles spread in one face or radially).
By function
Application of material
The action of such brushes comes mostly from the sides of the bristles rather than the tips; contact with the surface releases material held in the brush by capillary action.
Finger-print forensic brush
Gilding brush
Ink brush
Makeup brush
Mascara brush
Nail-polish brush
Paintbrush (fine art or house decoration)
Pastry brush
Shaving brush
Shoe-polish brush (polish applicator)
Wall-paper brush
Combing
The action of these brushes is more akin to combing than brushing; that is, they are used to straighten and untangle filaments. Certain varieties of hairbrush are, however, designed to brush the scalp itself free of material such as dead skin (dandruff) and to invigorate the skin of the scalp.
Grooming brush
Hairbrush
Other
Brush (electric), used on electrical motors
Acid brush, described as consisting of glass threads, in 1906
Acid brush, described as consisting of horsehair held in a crimped copper tube, in 1922
Clothes brush or lint brush
Magnetic brush
Medical sampling brush
Brush percussion mallets
Stippling brush (neither applies nor removes material, but merely adds pattern)
Cleaning
Brushes used for cleaning come in various sizes, ranging from smaller than a toothbrush, to the standard household brush used with a dustpan, up to 36″ deck brushes. There are brushes for cleaning tiny cracks and crevices and brushes for cleaning enormous warehouse floors. Brushes perform a multitude of cleaning tasks. For example, brushes lightly dust the tiniest figurine, they help scrub stains out of clothing and shoes, they remove grime from tires, and they remove the dirt and debris found on floors with the help of a dust pan. Specific brushes are used for diverse activities such as cleaning vegetables, scrubbing toilets (the toilet brush), washing glass, cleaning tiles, and mild abrasive sanding.
| Technology | Artist's tools | null |
463271 | https://en.wikipedia.org/wiki/Salbutamol | Salbutamol | Salbutamol, also known as albuterol and sold under the brand name Ventolin among others, is a medication that opens up the medium and large airways in the lungs. It is a short-acting β2 adrenergic receptor agonist that causes relaxation of airway smooth muscle. It is used to treat asthma, including asthma attacks and exercise-induced bronchoconstriction, as well as chronic obstructive pulmonary disease (COPD). It may also be used to treat high blood potassium levels. Salbutamol is usually used with an inhaler or nebulizer, but it is also available in a pill, liquid, and intravenous solution. Onset of action of the inhaled version is typically within 15 minutes and lasts for two to six hours.
Common side effects include shakiness, headache, fast heart rate, dizziness, and feeling anxious. Serious side effects may include worsening bronchospasm, irregular heartbeat, and low blood potassium levels. It can be used during pregnancy and breastfeeding, but safety is not entirely clear.
Salbutamol was patented in 1966 in Britain and became commercially available in the UK in 1969. It was approved for medical use in the United States in 1982. It is on the World Health Organization's List of Essential Medicines. Salbutamol is available as a generic medication. In 2022, it was the seventh most commonly prescribed medication in the United States, with more than 59 million prescriptions.
Medical uses
Salbutamol is typically used to treat bronchospasm (due to any cause—allergic asthma or exercise-induced), as well as chronic obstructive pulmonary disease. It is also one of the most common medicines used in rescue inhalers (short-term bronchodilators to alleviate asthma attacks).
As a β2 agonist, salbutamol also has use in obstetrics. Intravenous salbutamol can be used as a tocolytic to relax the uterine smooth muscle to delay premature labor. While preferred over agents such as atosiban and ritodrine, its role has largely been replaced by the calcium channel blocker nifedipine, which is more effective and better tolerated.
Salbutamol has been used to treat acute hyperkalemia, as it stimulates potassium flow into cells, thus lowering the potassium in the blood.
Two recent studies have suggested that salbutamol reduces the symptoms of newborns and adolescents with myasthenia gravis and transient neonatal myasthenia gravis.
Adverse effects
The most common side effects are fine tremor, anxiety, headache, muscle cramps, dry mouth, and palpitation. Other symptoms may include tachycardia, arrhythmia, flushing of the skin, myocardial ischemia (rare), and disturbances of sleep and behaviour. Rarely occurring, but of importance, are allergic reactions of paradoxical bronchospasms, urticaria (hives), angioedema, hypotension, and collapse. High doses or prolonged use may cause hypokalemia, which is of concern especially in patients with kidney failure and those on certain diuretics and xanthine derivatives.
Salbutamol metered dose inhalers have been described as the "single biggest source of carbon emissions from NHS medicines prescribing" due to the propellants used in the inhalers. Dry powder inhalers are recommended as a low-carbon alternative.
Pharmacology
The tertiary butyl group in salbutamol makes it more selective for β2 receptors, which are the predominant receptors on the bronchial smooth muscles. Activation of these receptors causes adenylyl cyclase to convert ATP to cAMP, beginning the signalling cascade that ends with the inhibition of myosin phosphorylation and lowering the intracellular concentration of calcium ions (myosin phosphorylation and calcium ions are necessary for muscle contractions). The increase in cAMP also inhibits inflammatory cells in the airway, such as basophils, eosinophils, and most especially mast cells, from releasing inflammatory mediators and cytokines. Salbutamol and other β2 receptor agonists also increase the conductance of channels sensitive to calcium and potassium ions, leading to hyperpolarization and relaxation of bronchial smooth muscles.
Salbutamol is either filtered out by the kidneys directly or is first metabolized into the 4′-O-sulfate, which is excreted in the urine.
Chemistry
Salbutamol is sold as a racemic mixture. The (R)-(−)-enantiomer (CIP nomenclature) is shown in the image at right (top), and is responsible for the pharmacologic activity; the (S)-(+)-enantiomer (bottom) blocks metabolic pathways associated with elimination of itself and of the pharmacologically active enantiomer (R). The slower metabolism of the (S)-(+)-enantiomer also causes it to accumulate in the lungs, which can cause airway hyperreactivity and inflammation. Potential formulation of the R form as an enantiopure drug is complicated by the fact that the stereochemistry is not stable, but rather the compound undergoes racemization within a few days to weeks, depending on pH.
The direct separation of salbutamol enantiomers and the control of enantiomeric purity have been described by thin-layer chromatography.
History
Salbutamol was discovered in 1966, by a research team led by David Jack at the Allen and Hanburys laboratory (now a subsidiary of Glaxo) in Ware, Hertfordshire, England, and was launched as Ventolin in 1969.
The 1972 Munich Olympics were the first Olympics where anti-doping measures were deployed, and at that time β2 agonists were considered to be stimulants with high risk of abuse for doping. Inhaled salbutamol was banned from those games, but by 1986 was permitted (although oral β2 agonists were not). After a steep rise in the number of athletes taking β2 agonists for asthma in the 1990s, Olympic athletes were required to provide proof that they had asthma in order to be allowed to use inhaled β2 agonists.
In February 2020, the U.S. Food and Drug Administration (FDA) approved the first generic of an albuterol sulfate inhalation aerosol for the treatment or prevention of bronchospasm in people four years of age and older with reversible obstructive airway disease and the prevention of exercise-induced bronchospasm in people four years of age and older. The FDA granted approval of the generic albuterol sulfate inhalation aerosol to Perrigo Pharmaceutical.
In April 2020, the FDA approved the first generic of Proventil HFA (albuterol sulfate) metered dose inhaler, 90 μg per inhalation, for the treatment or prevention of bronchospasm in patients four years of age and older who have reversible obstructive airway disease, as well as the prevention of exercise-induced bronchospasm in this age group. The FDA granted approval of this generic albuterol sulfate inhalation aerosol to Cipla Limited.
Society and culture
In 2020, generic versions were approved in the United States.
Names
Salbutamol is the international nonproprietary name (INN) while albuterol is the United States Adopted Name (USAN). The drug is usually manufactured and distributed as the sulfate salt (salbutamol sulfate).
It was first sold by Allen & Hanburys (UK) under the brand name Ventolin, and has been used for the treatment of asthma ever since. The drug is marketed under many names worldwide.
Misuse
Albuterol and other beta-2 adrenergic agonists are used by some recreational bodybuilders.
Doping
There was no evidence that an increase in physical performance occurs after inhaling salbutamol, but there are various reports of benefit when it is delivered orally or intravenously. In spite of this, salbutamol required "a declaration of Use in accordance with the International Standard for Therapeutic Use Exemptions" under the 2010 WADA prohibited list. This requirement was relaxed when the 2011 list was published to permit the use of "salbutamol (maximum 1600 micrograms over 24 hours) and salmeterol when taken by inhalation in accordance with the manufacturers' recommended therapeutic regimen."
Abuse of the drug may be confirmed by detection of its presence in plasma or urine, typically exceeding 1,000 ng/mL. The window of detection for urine testing is on the order of just 24 hours, given the relatively short elimination half-life of the drug, estimated at between 5 and 6 hours following oral administration of 4 mg.
Research
Salbutamol has been studied in subtypes of congenital myasthenic syndrome associated with mutations in Dok-7.
It has also been tested in a trial aimed at treatment of spinal muscular atrophy; it is speculated to modulate the alternative splicing of the SMN2 gene, increasing the amount of the SMN protein, the deficiency of which is regarded as a cause of the disease.
Albuterol increases energy expenditure by 10-15 percent at a therapeutic dose for asthma and around 25 percent at a higher, oral dose. In several human studies, albuterol increased lean body mass, reduced fat mass, and caused lipolysis; it has been studied for use as an anti-obesity and anti-muscle wasting medication when taken orally.
Veterinary use
Salbutamol's low toxicity makes it safe for other animals and thus is the medication of choice for treating acute airway obstruction in most species. It is usually used to treat bronchospasm or coughs in cats and dogs and used as a bronchodilator in horses with recurrent airway obstruction; it can also be used in emergencies to treat asthmatic cats.
Toxic effects require an extremely high dose, and most overdoses are due to dogs chewing on and puncturing an inhaler or nebulizer vial.
| Biology and health sciences | Specific drugs | Health |
463327 | https://en.wikipedia.org/wiki/Brussels%20Metro | Brussels Metro | The Brussels Metro ( ; ) is a rapid transit system serving a large part of the Brussels-Capital Region of Belgium. It consists of four conventional metro lines and three premetro lines. The metro-grade lines are M1, M2, M5, and M6 with some shared sections, covering a total of , with 59 metro-only stations. The premetro network consists of three tram lines (T4, T7, and T10) that partly travel over underground sections that were intended to be eventually converted into metro lines. Underground stations in the premetro network use the same design as metro stations. A few short underground tramway sections exist, so there is a total of of underground metro and tram network. There are a total of 69 metro and premetro stations as of 2011.
The Brussels Metro was planned at the beginning of the 1960s to become a fully underground network. The original network, running between De Brouckère and Schuman, was inaugurated on 17 December 1969 as premetro tramways, which were later, in 1976, converted into the common section of the first two metro lines. These lines were then considered a single line with two branches, between De Brouckère and Tomberg and De Brouckère and Beaulieu. On 4 April 2009, with the completion of the "loop" of line 2 connecting Delacroix and Gare de l'Ouest/Weststation, the Brussels Metro was significantly reorganised.
The Brussels Metro is administered by the Brussels Intercommunal Transport Company (STIB/MIVB). In 2011, it was used for 125.8 million journeys, and it was used for 138.3 million journeys in 2012. It is also an important means of transport, connecting with six railway stations of the National Railway Company of Belgium (NMBS/SNCB), and many tram and bus stops operated by STIB/MIVB, as well as with Flemish De Lijn and Walloon TEC bus stops. Additionally, some metro stations offer suburban railway links as part of the Brussels Regional Express Network (RER/GEN) system.
On 22 March 2016, Maelbeek/Maalbeek metro station was bombed, killing about 20 people and injuring 106. The Islamic State of Iraq and the Levant (ISIL) claimed responsibility.
History
Early history
The Brussels Intercommunal Transport Company ( or , or ) was created in 1954. The first underground tramway (or premetro) line was built between 1965 and 1969, from Schuman to De Brouckère. In 1970, a second line was opened, between Madou and Porte de Namur/Naamsepoort. An underground station at Diamant was opened in 1972 and the Greater Ring line was extended from Diamant to Boileau in 1975. This underground tramway section has not been developed further, and it is used by tramway lines 7 and 25. Rogier station was inaugurated in 1974.
Opening and extensions
On 20 September 1976, the first metro line opened. One branch went from De Brouckère to Beaulieu (in Auderghem), and the other one linked De Brouckère with Tomberg (in Woluwe-Saint-Lambert). The same year, the North–South Axis (premetro) was opened between Gare du Nord/Noordstation (Brussels-North Station) and Lemonnier. In 1977, two new stations were built; Sainte-Catherine/Sint-Katelijne, which replaced De Brouckère as the last western stop in the City of Brussels, and Demey, which replaced Beaulieu as the last stop of the southern branch.
The next extension was the opening of stations in Molenbeek-Saint-Jean (Beekkant, the new terminus, Etangs Noirs/Zwarte Vijvers and Comte de Flandre/Graaf van Vlaanderen). In 1982, line 1 was split into line 1A from Bockstael (in Laeken, a former municipality now merged with the City of Brussels) to Demey (Auderghem) and line 1B from Saint-Guidon/Sint-Guido (in Anderlecht) to Alma (at the Université catholique de Louvain (UCLouvain) campus in Woluwe-Saint-Lambert). Three years later, line 1A was extended to Heysel/Heizel (near the site of the 1958 World's Fair and the Heysel Stadium) at one end and to Herrmann-Debroux at the other. That year also saw the opening of Veeweyde/Veeweide on line 1B, and Louise/Louiza on the premetro line under the Small Ring (from Louise/Louiza to Rogier).
This line was extended to Simonis the next year and opened as metro line 2 in 1988, from Simonis to Gare du Midi/Zuidstation (Brussels-South Station). Crainhem/Kraainem and Stockel/Stokkel also opened that year on line 1B. At the other end of this line, Bizet opened in 1992. It was then the turn of line 2 to reach Clemenceau in 1993. The premetro section known as the North–South Axis, sometimes referred to as line 3, was extended to Albert that year with five new premetro stations (Brussels-South, Porte de Hal/Hallepoort, Parvis de Saint-Gilles/Sint-Gillis Voorplein, Horta and Albert).
In 1998, Roi Baudouin/Koning Boudewijn opened on line 1A. Four stations opened in 2003 on line 1B; La Roue/Het Rad, CERIA/COOVI, Eddy Merckx, and Erasme/Erasmus. With the opening of Delacroix in September 2006, line 2 was extended beyond Clemenceau. A further extension to Gare de l'Ouest/Weststation (Brussels-West Station) in April 2009 closed the "loop" of line 2 and led to a major restructuring of metro service.
The Brussels Metro system is complemented by an S-train network serving the broader metropolitan region and opened in late 2015.
2016 Brussels bombings
On 22 March 2016, the Islamic State bombed Maelbeek/Maalbeek metro station in a terrorist attack that coincided with another bomb attack at Brussels Airport. The bombing at Maelbeek station killed twenty people. The incident prompted the temporary closure of the entire system, and a major reduction in service for several weeks. A fundamental review of security procedures on the metro is underway as of 2016.
Network map
Lines and stations
The premetro lines are powered (like the ground-level tram lines) by overhead lines at 600 V DC; the conventional metro lines instead use an elevated third rail at 900 V DC. They all use standard gauge. Conventional metro platforms are "high platforms", built flush with the floor of the metro compartments; premetro platforms are the same height, but with a lowered section at least as long as the longest tram, for compatibility with tramways, which must be able to take passengers from the sidewalk of a street, or even from the street surface itself. Upgrading a line from premetro to metro service includes, among other things, raising the whole platform to metro height and replacing the overhead line with a third-rail power supply.
Metro
There are four conventional metro lines and 59 stations (not including premetro stations). Most stations are underground, although some on lines 5 and 6 are at ground level. On 4 April 2009, the connection at Gare de l'Ouest/Weststation that enables line 2 to form a circular line was put into service. As a consequence, the metro network was significantly reorganised. The development plan for this change and related tram and bus network changes was approved by the Brussels-Capital Region in July 2005.
As of 4 April 2009, the four lines are as follows:
Line 1 from Gare de l'Ouest/Weststation to the west to Stockel/Stokkel at the east end (formerly part of line 1B);
Line 2 is a loop starting and ending in Simonis via the eastern side of the Small Ring (an extension of former line 2 from Delacroix north-bound to Simonis);
Line 5 from Erasme/Erasmus to the south-west to Herrmann-Debroux to the south-east (combines parts of former lines 1A and 1B);
Line 6 from Roi Baudouin/Koning Boudewijn to the north-west to Simonis (including the loop of the newly extended line 2; combines the former line 2, the new connection, and a branch of the former line 1A).
Premetro
Line 4 and Line 10 are tram lines using the North–South Axis tunnel, which crosses the city centre from Brussels-North via Brussels-South to Albert. Line 4 runs from Brussels-North to Stalle Parking in the south. Line 10 runs from Hôpital Militaire/Militair Hospitaal in the north to Churchill in the south. This North–South Axis is being upgraded to metro service; works began in 2019, including a north-eastward prolongation of the metro tunnel, and the transition to conventional metro is foreseen for 2030.
Line 7 is the main line of the Greater Ring, replacing Tram 23 and Tram 24 as of 14 March 2011. It serves the Heysel/Heizel, runs under Laeken Park and then via the Greater Ring to the terminus of Line 10, terminating one stop later at Vanderkindere for connections to tram lines 4, 10 and 92. The somewhat shorter Line 25 also runs along the Greater Ring premetro, but with different termini at both ends, its southern terminus connecting to Boondael/Boondaal railway station.
Ticketing
MoBIB is the STIB/MIVB electronic smart card, introduced in 2007, replacing the discontinued paper tickets. It uses contactless technology based on the Calypso system originally developed for Paris and is in some ways similar to, for example, London's Oyster card. All metro stations, buses and trams have MoBIB readers.
There is a very wide range of ticket options to meet different needs. The cost of travel with STIB/MIVB means of transport (metro, tram and bus) is calculated per hour. As long as the journey does not exceed one hour after the first validation of the ticket, it is possible, for example, to switch from a bus to a metro train within the STIB/MIVB network without paying a second time (i.e. a new validation is required but will not be charged). Each trip has a different cost depending on the type of ticket purchased. Passengers can purchase monthly passes, yearly passes, 1- and 10-trip tickets, and daily and 3-day passes. These can be bought over the Internet, but require customers to have a smart card reader. GO vending machines accept coins, and local and international chip and PIN credit and debit cards.
Moreover, a complimentary interticketing system means that a combined STIB/MIVB ticket holder can also use the train network operated by NMBS/SNCB and/or long-distance buses and commuter services operated by De Lijn or TEC, depending on the option. With this ticket, a single journey can include multiple stages across the different modes of transport and networks.
Following a successful trial in 2019, and expedited by the COVID-19 pandemic, it is now possible to pay for STIB/MIVB journeys using a contactless bank card.
Rolling stock
The Brussels Metro is served by 217 carriages of M1-M5 series, manufactured by La Brugeoise et Nivelles, ACEC, Bombardier Transportation, Alstom and CAF and delivered between 1976 and 1999, as well as 21 six-car trainsets of the new M6 series (also known as "Boa"), manufactured by CAF and delivered between 2007 and 2012. A new train type known as M7 series was commissioned on lines 1 and 5 in 2021 and will support full automation.
Future
A new metro line 3 is being created from Brussels-North through Schaerbeek towards Bordet. The plan was finally approved in 2013, aiming to start construction in 2018 and operation in 2022. Eventually, this would be linked up with a new southbound line to Uccle (Héros/Helden), which will not be finished before 2025.
In 2021, the premetro line from Brussels-North to Albert, the route of tram lines 3 and 4, is being upgraded and will be incorporated into the new metro line 3. The present Lemonnier station will be replaced by a new station named after Toots Thielemans.
| Technology | Europe_2 | null |
463408 | https://en.wikipedia.org/wiki/Airframe | Airframe | The mechanical structure of an aircraft is known as the airframe. This structure is typically considered to include the fuselage, undercarriage, empennage and wings, and excludes the propulsion system.
Airframe design is a field of aerospace engineering that combines aerodynamics, materials technology and manufacturing methods with a focus on weight, strength and aerodynamic drag, as well as reliability and cost.
History
Modern airframe history began in the United States with the Wright Flyer's maiden flight, which showed the potential of fixed-wing designs in aircraft.
In 1912 the Deperdussin Monocoque pioneered the light, strong and streamlined monocoque fuselage formed of thin plywood layers over a circular frame, achieving .
First World War
Many early developments were spurred by military needs during World War I. Well-known aircraft from that era include the Dutch designer Anthony Fokker's combat aircraft for the German Empire's air service, U.S. Curtiss flying boats, and the German/Austrian Taube monoplanes. These used hybrid wood and metal structures.
By the 1915/16 timeframe, the German Luft-Fahrzeug-Gesellschaft firm had devised a fully monocoque all-wood structure with only a skeletal internal frame, using strips of plywood laboriously "wrapped" in a diagonal fashion in up to four layers, around concrete male molds in "left" and "right" halves, known as Wickelrumpf (wrapped-body) construction - this first appeared on the 1916 LFG Roland C.II, and would later be licensed to Pfalz Flugzeugwerke for its D-series biplane fighters.
In 1916 the German Albatros D.III biplane fighters featured semi-monocoque fuselages with load-bearing plywood skin panels glued to longitudinal longerons and bulkheads; it was replaced by the prevalent stressed skin structural configuration as metal replaced wood. Similar methods to the Albatros firm's concept were used by both Hannoversche Waggonfabrik for their light two-seat CL.II through CL.V designs, and by Siemens-Schuckert for their later Siemens-Schuckert D.III and higher-performance D.IV biplane fighter designs. The Albatros D.III construction was of much less complexity than the patented LFG Wickelrumpf concept for their outer skinning.
German engineer Hugo Junkers first flew all-metal airframes in 1915 with the all-metal, cantilever-wing, stressed-skin monoplane Junkers J 1, made of steel. The concept was developed further with lighter-weight duralumin, invented by Alfred Wilm in Germany before the war, in the airframe of the Junkers D.I of 1918, whose techniques were adopted almost unchanged after the war by both American engineer William Bushnell Stout and Soviet aerospace engineer Andrei Tupolev, proving useful for aircraft up to 60 meters in wingspan by the 1930s.
Between World wars
The J 1 of 1915, and the D.I fighter of 1918, were followed in 1919 by the first all-metal transport aircraft, the Junkers F.13 made of Duralumin as the D.I had been; 300 were built, along with the first four-engine, all-metal passenger aircraft, the sole Zeppelin-Staaken E-4/20. Commercial aircraft development during the 1920s and 1930s focused on monoplane designs using Radial engines. Some were produced as single copies or in small quantity such as the Spirit of St. Louis flown across the Atlantic by Charles Lindbergh in 1927. William Stout designed the all-metal Ford Trimotors in 1926.
The Hall XFH naval fighter prototype flown in 1929 was the first aircraft with a riveted metal fuselage: an aluminium skin over steel tubing. Hall also pioneered flush rivets and butt joints between skin panels in the Hall PH flying boat, also flying in 1929. Based on the Italian Savoia-Marchetti S.56, the 1931 Budd BB-1 Pioneer experimental flying boat was constructed of corrosion-resistant stainless steel assembled with newly developed spot welding by U.S. railcar maker Budd Company.
The original Junkers corrugated duralumin-covered airframe philosophy culminated in the 1932-origin Junkers Ju 52 trimotor airliner, used throughout World War II by the Nazi German Luftwaffe for transport and paratroop needs. Andrei Tupolev's design bureau in Joseph Stalin's Soviet Union produced a series of all-metal aircraft of steadily increasing size, culminating in the largest aircraft of its era, the eight-engined Tupolev ANT-20 in 1934, and Donald Douglas's firm developed the iconic Douglas DC-3 twin-engined airliner in 1936. They were among the most successful designs to emerge from the era through the use of all-metal airframes.
In 1937, the Lockheed XC-35 was specifically constructed with cabin pressurization to undergo extensive high-altitude flight tests, paving the way for the Boeing 307 Stratoliner, which would be the first aircraft with a pressurized cabin to enter commercial service.
Second World War
During World War II, military needs again dominated airframe designs. Among the best known were the US C-47 Skytrain, B-17 Flying Fortress, B-25 Mitchell and P-38 Lightning, and British Vickers Wellington that used a geodesic construction method, and Avro Lancaster, all revamps of original designs from the 1930s. The first jets were produced during the war but not made in large quantity.
Due to wartime scarcity of aluminium, the de Havilland Mosquito fighter-bomber was built from wood—plywood facings bonded to a balsawood core and formed using molds to produce monocoque structures, leading to the development of metal-to-metal bonding used later for the de Havilland Comet and Fokker F27 and F28.
Postwar
Postwar commercial airframe design focused on airliners, on turboprop engines, and then on jet engines. The generally higher speeds and tensile stresses of turboprops and jets were major challenges. Newly developed aluminium alloys with copper, magnesium and zinc were critical to these designs.
Flown in 1952 and designed to cruise at Mach 2 where skin friction required its heat resistance, the Douglas X-3 Stiletto was the first titanium aircraft but it was underpowered and barely supersonic; the Mach 3.2 Lockheed A-12 and SR-71 were also mainly titanium, as was the cancelled Boeing 2707 Mach 2.7 supersonic transport.
Because heat-resistant titanium is hard to weld and difficult to work with, welded nickel steel was used for the Mach 2.8 Mikoyan-Gurevich MiG-25 fighter, first flown in 1964; and the Mach 3.1 North American XB-70 Valkyrie used brazed stainless steel honeycomb panels and titanium but was cancelled by the time it flew in 1964.
A computer-aided design system was developed in 1969 for the McDonnell Douglas F-15 Eagle, which first flew in 1974 alongside the Grumman F-14 Tomcat; both used boron fiber composites in the tails. Less expensive carbon-fiber-reinforced polymer was used for wing skins on the McDonnell Douglas AV-8B Harrier II, F/A-18 Hornet and Northrop Grumman B-2 Spirit.
Modern era
The vertical stabilizer of the Airbus A310-300, first flown in 1985, was the first carbon-fiber primary structure used in a commercial aircraft; composites are increasingly used since in Airbus airliners: the horizontal stabilizer of the A320 in 1987 and A330/A340 in 1994, and the center wing-box and aft fuselage of the A380 in 2005.
The Cirrus SR20, type certificated in 1998, was the first widely produced general aviation aircraft manufactured with all-composite construction, followed by several other light aircraft in the 2000s.
The Boeing 787, first flown in 2009, was the first commercial aircraft with 50% of its structure weight made of carbon-fiber composites, along with 20% aluminium and 15% titanium: the material allows for a lower-drag, higher wing aspect ratio and higher cabin pressurization; the competing Airbus A350, flown in 2013, is 53% carbon-fiber by structure weight. It has a one-piece carbon fiber fuselage, said to replace "1,200 sheets of aluminium and 40,000 rivets."
The 2013 Bombardier CSeries have a dry-fiber resin transfer infusion wing with a lightweight aluminium-lithium alloy fuselage for damage resistance and repairability, a combination which could be used for future narrow-body aircraft. In 2016, the Cirrus Vision SF50 became the first certified light jet made entirely from carbon-fiber composites.
In February 2017, Airbus installed a 3D printing machine for titanium aircraft structural parts using electron beam additive manufacturing from Sciaky, Inc.
Safety
Airframe production has become an exacting process. Manufacturers operate under strict quality control and government regulations. Departures from established standards become objects of major concern.
A landmark in aeronautical design, the world's first jet airliner, the de Havilland Comet, first flew in 1949. Early models suffered from catastrophic airframe metal fatigue, causing a series of widely publicised accidents. The Royal Aircraft Establishment investigation at Farnborough Airport founded the science of aircraft crash reconstruction. After 3000 pressurisation cycles in a specially constructed pressure chamber, airframe failure was found to be due to stress concentration, a consequence of the square shaped windows. The windows had been engineered to be glued and riveted, but had been punch riveted only. Unlike drill riveting, the imperfect nature of the hole created by punch riveting may cause the start of fatigue cracks around the rivet.
The Lockheed L-188 Electra turboprop, first flown in 1957 became a costly lesson in controlling oscillation and planning around metal fatigue. Its 1959 crash of Braniff Flight 542 showed the difficulties that the airframe industry and its airline customers can experience when adopting new technology.
The incident bears comparison with the Airbus A300 crash on takeoff of American Airlines Flight 587 in 2001: after its vertical stabilizer broke away from the fuselage, attention was called to operation, maintenance and design issues involving composite materials that are used in many recent airframes. The A300 had experienced other structural problems, but none of this magnitude.
Alloys for airframe components
As the twentieth century progressed, aluminum became an essential metal in aircraft. The cylinder block of the engine that powered the Wright brothers' plane at Kitty Hawk in 1903 was a one-piece casting in an aluminum alloy containing 8% copper; aluminum propeller blades appeared as early as 1907; and aluminum covers, seats, cowlings, cast brackets, and similar parts were common by the beginning of the First World War. In 1916, L. Breguet designed a reconnaissance bomber that marked the initial use of aluminum in the working structure of an airplane. By war's end, the Allies and Germany employed aluminum alloys for the structural framework of fuselage and wing assemblies.
The aircraft airframe has been the most demanding application for aluminum alloys; to chronicle the development of the high-strength alloys is also to record the development of airframes. Duralumin, the first high-strength, heat treatable aluminum alloy, was employed initially for the framework of rigid airships, by Germany and the Allies during World War I. Duralumin was an aluminum-copper-magnesium alloy; it was originated in Germany and developed in the United States as Alloy 17S-T (2017-T4). It was utilized primarily as sheet and plate.
Alloy 7075-T6 (70,000-psi yield strength), an Al-Zn-Mg-Cu alloy, was introduced in 1943. Since then, most aircraft structures have been specified in alloys of this type. The first aircraft designed in 7075-T6 was the Navy’s P2V patrol bomber. A higher-strength alloy in the same series, 7178-T6 (78,000-psi yield strength), was developed in 1951; it has not generally displaced 7075-T6, which has superior fracture toughness.
Alloy 7178-T6 is used primarily in structural members where performance is critical under compressive loading.
Alloy 7079-T6 was introduced in the United States in 1954. In forged sections over 3 in. thick, it provides higher strength and greater transverse ductility than 7075-T6. It now is available in sheet, plate, extrusions, and forgings.
Alloy X7080-T7, with higher resistance to stress corrosion than 7079-T6, is being developed for thick parts. Because it is relatively insensitive to quenching rate, good strengths with low quenching stresses can be produced in thick sections.
Cladding of aluminum alloys was developed initially to increase the corrosion resistance of 2017-T4 sheet and thus to reduce aluminum aircraft maintenance requirements. The coating on 2017 sheet - and later on 2024-T3 - consisted of commercial-purity aluminum metallurgically bonded to one or both surfaces of the sheet.
Electrolytic protection, present under wet or moist conditions, is based on the appreciably higher electrode potential of commercial-purity aluminum compared to alloy 2017 or 2024 in the T3 or T4 temper. When 7075-T6 and other Al-Zn-Mg-Cu alloys appeared, an aluminum-zinc cladding alloy 7072 was developed to provide a relative electrode potential sufficient to protect the new strong alloys.
However, the high-performance aircraft designed since 1945 have made extensive use of skin structures machined from thick plate and extrusions, precluding the use of alclad exterior skins. Maintenance requirements increased as a result, and these stimulated research and development programs seeking higher-strength alloys with improved resistance to corrosion without cladding.
Aluminum alloy castings traditionally have been used in nonstructural airplane hardware, such as pulley brackets, quadrants, doublers, clips and ducts. They also have been employed extensively in complex valve bodies of hydraulic control systems. The philosophy of some aircraft manufacturers still is to specify castings only in places where failure of the part cannot cause loss of the airplane. Redundancy in cable and hydraulic control systems permits the use of castings.
Casting technology has made great advances in the last decade. Time-honored alloys such as 355 and 356 have been modified to produce higher levels of strength and ductility. New alloys such as 354, A356, A357, 359 and Tens 50 were developed for premium-strength castings. The high strength is accompanied by enhanced structural integrity and performance reliability.
Electric resistance spot and seam welding are used to join secondary structures, such as fairings, engine cowls, and doublers, to bulkheads and skins. Difficulties in quality control have resulted in low utilization of electric resistance welding for primary structure.
Ultrasonic welding offers some economic and quality-control advantages for production joining, particularly for thin sheet. However, the method has not yet been developed extensively in the aerospace industry.
Adhesive bonding is a common method of joining in both primary and secondary structures. Its selection is dependent on the design philosophy of the aircraft manufacturer. It has proven satisfactory in attaching stiffeners, such as hat sections to sheet, and face sheets to honeycomb cores. Also, adhesive bonding has withstood adverse exposures such as sea-water immersion and atmospheres.
Fusion welded aluminum primary structures in airplanes are virtually nonexistent, because the high-strength alloys utilized have low weldability and low weld-joint efficiencies. Some of the alloys, such as 2024-T4, also have their corrosion resistance lowered in the heat-affected zone if left in the as-welded condition.
The improved welding processes and higher-strength weldable alloys developed during the past decade offer new possibilities for welded primary structures. For example, the weldability and strength of alloys 2219 and 7039, and the brazeability and strength of X7005, open new avenues for design and manufacture of aircraft structures.
Light aircraft
Light aircraft have airframes primarily of all-aluminum semi-monocoque construction; however, a few light planes have tubular truss load-carrying construction with fabric or aluminum skin, or both.
Aluminum skin is normally of the minimum practical thickness: 0.015 to 0.025 in. Although design strength requirements are relatively low, the skin needs moderately high yield strength and hardness to minimize ground damage from stones, debris, mechanics’ tools, and general handling. Other primary factors involved in selecting an alloy for this application are corrosion resistance, cost, and appearance. Alloys 6061-T6 and alclad 2024-T3 are the primary choices.
Skin sheet on light airplanes of recent design and construction generally is alclad 2024-T3. The internal structure comprises stringers, spars, bulkheads, chord members, and various attaching fittings made of aluminum extrusions, formed sheet, forgings, and castings.
The alloys most used for extruded members are 2024-T4 for sections less than 0.125 in. thick and for general application, and 2014-T6 for thicker, more highly stressed sections. Alloy 6061-T6 has considerable application for extrusions requiring thin sections and excellent corrosion resistance. Alloy 2014-T6 is the primary forging alloy, especially for landing gear and hydraulic cylinders. Alloy 6061-T6 and its forging counterpart 6151-T6 often are utilized in miscellaneous fittings for reasons of economy and increased corrosion performance, when the parts are not highly stressed.
Alloys 356-T6 and A356-T6 are the primary casting alloys employed for brackets, bellcranks, pulleys, and various fittings. Wheels are produced in these alloys as permanent mold or sand castings. Die castings in alloy A380 also are satisfactory for wheels for light aircraft.
For low-stressed structure in light aircraft, alloys 3003-H12, H14, and H16; 5052-O, H32, H34, and H36; and 6061-T4 and T6 are sometimes employed. These alloys are also primary selections for fuel, lubricating oil, and hydraulic oil tanks, piping, and instrument tubing and brackets, especially where welding is required. Alloys 3003, 6061, and 6951 are utilized extensively in brazed heat exchangers and hydraulic accessories. Recently developed alloys, such as 5086, 5454, 5456, 6070, and the new weldable aluminum-magnesium-zinc alloys, offer strength advantages over those previously mentioned.
Sheet assembly of light aircraft is accomplished predominantly with rivets of alloys 2017-T4, 2117-T4, or 2024-T4. Self-tapping sheet metal screws are available in aluminum alloys, but cadmium-plated steel screws are employed more commonly to obtain higher shear strength and driveability. Alloy 2024-T4 with an anodic coating is standard for aluminum screws, bolts, and nuts made to military specifications. Alloy 6262-T9, however, is superior for nuts, because of its virtual immunity to stress-corrosion cracking.
| Technology | Aircraft components | null |
463601 | https://en.wikipedia.org/wiki/Yttrium%20barium%20copper%20oxide | Yttrium barium copper oxide | Yttrium barium copper oxide (YBCO) is a family of crystalline chemical compounds that display high-temperature superconductivity; it includes the first material ever discovered to become superconducting above the boiling point of liquid nitrogen (77 K), at about 93 K.
Many YBCO compounds have the general formula YBa2Cu3O7−x (also known as Y123), although materials with other Y:Ba:Cu ratios exist, such as YBa2Cu4O8 (Y124) or Y2Ba4Cu7O15 (Y247). At present, there is no singularly recognised theory for high-temperature superconductivity.
It is part of the more general group of rare-earth barium copper oxides (ReBCO) in which, instead of yttrium, other rare earths are present.
History
In April 1986, Georg Bednorz and Karl Müller, working at IBM in Zurich, discovered that certain semiconducting oxides became superconducting at relatively high temperature, in particular, a lanthanum barium copper oxide becomes superconducting at 35 K. This oxide was an oxygen-deficient perovskite-related material that proved promising and stimulated the search for related compounds with higher superconducting transition temperatures. In 1987, Bednorz and Müller were jointly awarded the Nobel Prize in Physics for this work.
Following Bednorz and Müller's discovery, a team led by Maw-Kuen Wu at the University of Alabama in Huntsville and Paul Ching Wu Chu at the University of Houston discovered that YBCO has a superconducting transition critical temperature (Tc) of 93 K. The first samples were Y1.2Ba0.8CuO4, but this was an average composition for two phases, a black and a green one. Workers at Bell Laboratories identified the black phase as the superconductor, determined its composition to be YBa2Cu3O7−δ and synthesized it in single phase.
YBCO was the first material found to become superconducting above 77 K, the boiling point of liquid nitrogen, whereas the majority of other superconductors require more expensive cryogens. Nonetheless, YBCO and its many related materials have yet to displace superconductors requiring liquid helium for cooling.
Synthesis
Relatively pure YBCO was first synthesized by heating a mixture of the metal carbonates at temperatures between 1000 and 1300 K.
4 BaCO3 + Y2(CO3)3 + 6 CuCO3 + (½ − x) O2 → 2 YBa2Cu3O7−x + 13 CO2
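As a quick sanity check of the oxygen coefficient above, the atom balance can be verified by counting; the short script below is illustrative only and not part of the original text.

```python
# Count atoms on each side of
#   4 BaCO3 + Y2(CO3)3 + 6 CuCO3 + (1/2 - x) O2 -> 2 YBa2Cu3O7-x + 13 CO2
# for a few oxygen deficiencies x, to confirm the equation balances.
from fractions import Fraction

def counts(x):
    left = {
        "Ba": 4, "Y": 2, "Cu": 6, "C": 4 + 3 + 6,
        "O": 4*3 + 3*3 + 6*3 + 2*(Fraction(1, 2) - x),
    }
    right = {
        "Ba": 2*2, "Y": 2, "Cu": 2*3, "C": 13,
        "O": 2*(7 - x) + 13*2,
    }
    return left, right

for x in (Fraction(0), Fraction(7, 100), Fraction(65, 100)):
    left, right = counts(x)
    assert left == right, (x, left, right)
print("balanced for the tested values of x")
```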
Modern syntheses of YBCO use the corresponding oxides and nitrates.
The superconducting properties of YBa2Cu3O7−x are sensitive to the value of x, its oxygen content. Only those materials with 0 ≤ x ≤ 0.65 are superconducting below Tc, and the material superconducts at the highest temperature, and in the highest magnetic fields (for B perpendicular as well as parallel to the CuO2 planes), when x ≈ 0.07.
In addition to being sensitive to the stoichiometry of oxygen, the properties of YBCO are influenced by the crystallization methods used. Care must be taken to sinter YBCO. YBCO is a crystalline material, and the best superconductive properties are obtained when crystal grain boundaries are aligned by careful control of annealing and quenching temperature rates.
Numerous other methods to synthesize YBCO have developed since its discovery by Wu and his co-workers, such as chemical vapor deposition (CVD), sol-gel, and aerosol methods. These alternative methods, however, still require careful sintering to produce a quality product.
However, new possibilities have been opened since the discovery that trifluoroacetic acid (TFA), a source of fluorine, prevents the formation of the undesired barium carbonate (BaCO3). Routes such as CSD (chemical solution deposition) have opened a wide range of possibilities, particularly in the preparation of long YBCO tapes. This route lowers the temperature necessary to obtain the correct phase. This, and the lack of dependence on vacuum, makes this method a very promising way to get scalable YBCO tapes.
Structure
YBCO crystallizes in a defect perovskite structure consisting of layers. The boundary of each layer is defined by planes of square planar CuO4 units sharing 4 vertices. The planes can sometimes be slightly puckered. Perpendicular to these CuO4 planes are CuO2 ribbons sharing 2 vertices. The yttrium atoms are found between the CuO4 planes, while the barium atoms are found between the CuO2 ribbons and the CuO4 planes. This structural feature is illustrated in the figure to the right.
Although YBa2Cu3O7 is a well-defined chemical compound with a specific structure and stoichiometry, materials with fewer than seven oxygen atoms per formula unit are non-stoichiometric compounds. The structure of these materials depends on the oxygen content. This non-stoichiometry is denoted by the x in the chemical formula YBa2Cu3O7−x. When x = 1, the O(1) sites in the Cu(1) layer (as labelled in the unit cell) are vacant and the structure is tetragonal. The tetragonal form of YBCO is insulating and does not superconduct. Increasing the oxygen content slightly causes more of the O(1) sites to become occupied. For x < 0.65, Cu-O chains along the b axis of the crystal are formed. Elongation of the b axis changes the structure to orthorhombic, with lattice parameters of a = 3.82, b = 3.89, and c = 11.68 Å. Optimum superconducting properties occur when x ~ 0.07, i.e., almost all of the O(1) sites are occupied, with few vacancies.
In experiments where other elements are substituted on the Cu and Ba sites, evidence has shown that conduction occurs in the Cu(2)O planes while the Cu(1)O(1) chains act as charge reservoirs, which provide carriers to the CuO planes. However, this model fails to address superconductivity in the homologue Pr123 (praseodymium instead of yttrium). This (conduction in the copper planes) confines conductivity to the a-b planes and a large anisotropy in transport properties is observed. Along the c axis, normal conductivity is 10 times smaller than in the a-b plane. For other cuprates in the same general class, the anisotropy is even greater and inter-plane transport is highly restricted.
Furthermore, the superconducting length scales show similar anisotropy, in both penetration depth (λab ≈ 150 nm, λc ≈ 800 nm) and coherence length, (ξab ≈ 2 nm, ξc ≈ 0.4 nm). Although the coherence length in the a-b plane is 5 times greater than that along the c axis it is quite small compared to classic superconductors such as niobium (where ξ ≈ 40 nm). This modest coherence length means that the superconducting state is more susceptible to local disruptions from interfaces or defects on the order of a single unit cell, such as the boundary between twinned crystal domains. This sensitivity to small defects complicates fabricating devices with YBCO, and the material is also sensitive to degradation from humidity.
Proposed applications
Many possible applications of this and related high temperature superconducting materials have been discussed. For example, superconducting materials are finding use as magnets in magnetic resonance imaging, magnetic levitation, and Josephson junctions. (The most used material for power cables and magnets is BSCCO.)
YBCO has yet to be used in many applications involving superconductors for two primary reasons:
First, although single crystals of YBCO have a very high critical current density, polycrystals have a very low critical current density: only a small current can be passed while maintaining superconductivity. This problem is due to crystal grain boundaries in the material. When the grain boundary angle is greater than about 5°, the supercurrent cannot cross the boundary. The grain boundary problem can be controlled to some extent by preparing thin films via CVD or by texturing the material to align the grain boundaries.
A second problem limiting the use of this material in technological applications is associated with processing of the material. Oxide materials such as this are brittle, and forming them into superconducting wires by any conventional process does not produce a useful superconductor. (Unlike BSCCO, the powder-in-tube process does not give good results with YBCO.)
The most promising method developed to utilize this material involves deposition of YBCO on flexible metal tapes coated with buffering metal oxides; this is known as a coated conductor. Texture (crystal plane alignment) can be introduced into the metal tape (the RABiTS process), or a textured ceramic buffer layer can be deposited, with the aid of an ion beam, on an untextured alloy substrate (the IBAD process). Subsequent oxide layers prevent diffusion of the metal from the tape into the superconductor while transferring the template for texturing the superconducting layer. Novel variants on CVD, PVD, and solution deposition techniques are used to produce long lengths of the final YBCO layer at high rates. Companies pursuing these processes include American Superconductor, Superpower (a division of Furukawa Electric), Sumitomo, Fujikura, Nexans Superconductors, Commonwealth Fusion Systems, and European Advanced Superconductors. A much larger number of research institutes have also produced YBCO tape by these methods.
The superconducting tape is used for SPARC, a tokamak fusion reactor design that can achieve breakeven energy production.
Surface modification
Surface modification of materials has often led to new and improved properties. Corrosion inhibition, polymer adhesion and nucleation, preparation of organic superconductor/insulator/high-Tc superconductor trilayer structures, and the fabrication of metal/insulator/superconductor tunnel junctions have been developed using surface-modified YBCO.
These molecular layered materials are synthesized using cyclic voltammetry. Thus far, YBCO layered with alkylamines, arylamines, and thiols have been produced with varying stability of the molecular layer. It has been proposed that amines act as Lewis bases and bind to Lewis acidic Cu surface sites in YBa2Cu3O7 to form stable coordination bonds.
Mass production
In 1987, shortly after it was discovered, physicist and science author Paul Grant published in the U.K. Journal New Scientist a straightforward guide for synthesizing YBCO superconductors using widely-available equipment. Thanks in part to this article and similar publications at the time, YBCO has become a popular high-temperature superconductor for use by hobbyists and in education, as the magnetic levitation effect can be easily demonstrated using liquid nitrogen as coolant.
In 2021, SuperOx, a Russian and Japanese company, developed a new manufacturing process for making YBCO wire for fusion reactors. This new wire was shown to conduct between 700 and 2,000 amps per square millimeter. The company was able to produce 186 miles of wire in 9 months, between 2019 and 2021, dramatically improving production capacity. The company used a plasma laser deposition process on an electropolished substrate to make 12 mm-wide tape and then slit it into 3 mm tape.
| Physical sciences | Ceramic compounds | Chemistry |
463733 | https://en.wikipedia.org/wiki/Fiddler%20crab | Fiddler crab | The fiddler crab or calling crab can be any one of roughly one hundred species of semiterrestrial marine crabs in the family Ocypodidae. These crabs are well known for their extreme sexual dimorphism: the male crabs have a major claw significantly larger than their minor claw, whilst females' claws are both the same size. The name fiddler crab comes from the appearance of the small and large claw together, which resembles a fiddle.
A smaller number of ghost crab and mangrove crab species are also found in the family Ocypodidae. This entire group is composed of small crabs, the largest being Afruca tangeri, which is slightly over two inches (5 cm) across. Fiddler crabs are found along sea beaches and brackish intertidal mud flats, lagoons, swamps, and various other types of brackish or salt-water wetlands. Whilst fiddler crabs are currently split into the two subfamilies Gelasiminae and Ucinae, there is still phylogenetic and taxonomic debate as to whether the move from the single genus Uca to these subfamilies and 11 separate genera is justified.
Like all crabs, fiddler crabs shed their shells as they grow. If they have lost legs or claws during their present growth cycle, a new one will be present when they molt. If the major claw is lost, males will regenerate one on the same side after their next molt. Newly molted crabs are very vulnerable because of their soft shells. They are reclusive and hide until the new shell hardens.
In a controlled laboratory setting, fiddler crabs exhibit a constant circadian rhythm that mimics the ebb and flow of the tides: they turn dark during the day and light at night.
Ecology and life cycle
Fiddler crabs primarily live on mudflats and sandy or muddy beaches, as well as in salt marshes and mangroves. They are found in West Africa, the Western Atlantic, the Eastern Pacific, the Indo-Pacific, and the Algarve region of Portugal.
Whilst the fiddler crab is classified as an omnivore, it is an opportunist and will consume anything with nutritional value. The crab feeds by bringing a chunk of sediment to its mouth and sifting through it to extract organic material, filtering out algae, microbes, fungi and other detritus. Once it has consumed the organic matter, the crab deposits the remaining sediment as small sand balls near its burrow.
Fiddler crabs are thought to potentially act as ecosystem engineers within their habitat because of the way they rework the sediment while feeding. Whilst these crabs do rework the sediment around them, upturning the very top layer and depositing it nearby, debate remains as to whether this turnover of sediment makes a demonstrable difference to nutrient levels or to aeration of the sediment.
Fiddler crabs are a burrowing species and may possess several burrows within their territory. They build two types of burrow: breeding burrows and temporary burrows. Temporary burrows are constructed by both males and females during high tide periods, and also at night when the crabs are no longer feeding and are hiding from predators. Breeding burrows are constructed solely by males, within the area they have claimed as their territory, so that male and female crabs may copulate inside and the female may deposit and incubate her eggs there. Larger males, who can more easily defend their territory, will often maintain multiple suitable breeding burrows to enable them to mate with multiple females. Female crabs prefer to mate with males that have the widest burrows; however, carapace width and claw size correlate with burrow width, so this preference may reflect a bias towards larger males.
Within a given area, fiddler crabs of either sex may be wanderers or territory holders. Wandering crabs do not currently occupy a burrow; they roam in search of a territory containing a burrow, or of a mate. Wandering females look for a mate to copulate with, usually preferring a male that currently possesses a burrow. The female fiddler carries her eggs in a mass on the underside of her body. She remains in her burrow during a two-week gestation period, after which she ventures out to release her eggs into the receding tide. The larvae remain planktonic for a further two weeks.
The mating system of fiddler crabs is thought to be mainly polygynous: male crabs will mate with multiple females if they have the opportunity. However, female fiddler crabs of some species, such as Austruca lactea, are also known to mate with multiple males.
Like other crustaceans, fiddler crabs undergo ecdysis, the process of moulting. When crabs moult, they produce hormones which trigger the shedding of the exoskeleton and the regeneration of limbs. Moulting is an extremely stressful time for fiddler crabs, as the shell becomes very soft, leaving them vulnerable to predation; during the moulting cycle, crabs frequently hide within their burrows to avoid harm. Male crabs that moult while densely grouped with other males under constant light show impaired limb regeneration.
Whilst the crab's major claw functions as a tool for fighting and competition, it also plays a role in thermoregulation. Because the claw is so large and these crabs live in generally hot habitats, they need strategies to keep themselves cool, particularly wandering males without burrows. The major claw helps the male regulate his body temperature and decreases the chance of losing or gaining too much heat in a given period: the large claw draws excess body heat away from the crab's core and allows it to dissipate. Heat dissipates significantly faster when male crabs are waving at the same time.
Fiddler crabs come in many different colourations and patterns, and are known to be able to change their colour over time. Species such as Tubuca capricornis can change colour rapidly when placed under significant stress, and fiddler crabs show reduced colouration after each successive moult. Female fiddler crabs are typically more colourful than males. Conspicuous colouring is dangerous as it increases the risk of predation; sexual selection, however, favours brightly coloured crabs. Fiddler crabs have finely tuned visual systems that aid in detecting colours of importance, which helps in selecting colourful mates. When given the choice, females prefer brightly coloured males over dull ones.
Behaviour, competition, and courtship
Fiddler crabs live rather brief lives of no more than two years (up to three years in captivity). Male fiddler crabs use many signalling techniques and displays to win over a female to mate with. Females choose their mate based on claw size and on the quality of the waving display.
Male fiddler crabs are commonly seen fighting one another, primarily over females and territory. Whilst fights are usually male against male, males will also fight females when there is suitable territory with a burrow that the male wishes to obtain. When fighting, a male can have his major claw ripped off, or damaged to the point where he must autotomize it. The claw can regrow when the crab next moults, but its properties will not be the same as before: although the regrown claw is similar in size to the original, it is significantly weaker. Other crabs cannot tell that the claw is weaker, and so assume it is at full size and strength. This is a form of dishonest signalling, where the appearance of the claw displayed to other fiddler crabs does not represent its true mechanics.
In order to produce offspring, a male fiddler crab must first attract a mate and convince her to mate with him. To win over females, male crabs perform a waving display, which consists of raising the major claw upwards and then dropping it back down in what appears to be a 'come here' or beckoning motion. Males exhibit two forms of courtship waving. Broadcast waving is a general wave performed when no female is within the male's field of view; this wave is slower, so as not to use up energy reserves. Directed waving is performed when the male has spotted a female he wishes to mate with; he faces the female and increases the pace of the wave towards her.
When males are waving at females, this is usually done in synchrony with other male crabs in the neighbouring area. Synchronous waving does provide a general positive benefit for male crabs attempting to attract wandering females, as a form of cooperative behaviour. Synchrony however, does not provide an individual benefit, as females prefer to mate with the male that is leading the synchronous wave. Therefore, synchronous waving is thought to have evolved as an incidental byproduct of males competing to lead the wave.
Fiddler crabs are also known to build sedimentary structures around their burrows out of mud and sand. Forty-nine species in the family Ocypodidae construct such structures outside their burrows for courtship and for defense against other crabs. These structures can be built by either male or female crabs and take one of six known forms: a chimney, hood, pillar, semidome, mudball or rim. The type of structure correlates with sediment type, genus and sex. Females are more likely to be attracted to a male with a sedimentary structure outside his burrow than to one without, and when not actively being courted they are more likely to move to an empty burrow that has a pillar than to one without. Fiddler crabs with a hood or dome-shaped structure above their burrow are more likely to be shy crabs that take fewer risks.
Female crabs will choose their mate based upon the claw size of the male, as well as the quality of the waving display, if he was the leader of the synchronous waving, and if the male currently possesses territory with a burrow for them to copulate within. Females will also prefer to mate with males who have the widest and largest burrows.
Fiddler crabs such as Austruca mjoebergi have been shown to bluff about their fighting ability. Upon regrowing a lost claw, a crab will occasionally regrow a weaker claw that nevertheless intimidates crabs with smaller but stronger claws. This is an example of dishonest signalling.
The dual functionality of the major claw of fiddler crabs has presented an evolutionary conundrum in that the claw mechanics best suited for fighting do not match up with the mechanics best suited for a waving display.
Genera and species
More than 100 species of fiddler crabs make up 11 of the 13 genera in the crab family Ocypodidae. These were formerly members of the genus Uca. In 2016, most of the subgenera of Uca were elevated to genus rank, and the fiddler crabs now occupy 11 genera making up the subfamilies Gelasiminae and Ucinae.
Afruca
Afruca tangeri (Eydoux, 1835) (West African fiddler crab)
Austruca
Austruca albimana (Kossmann, 1877) (white-handed fiddler crab)
Austruca annulipes (H.Milne Edwards, 1837) (ring-legged fiddler crab)
Austruca bengali (Bengal fiddler crab)
Austruca citrus (citrus fiddler crab)
Austruca cryptica (Naderloo, Türkay & Chen, 2010) (cryptic fiddler crab)
Austruca iranica (Pretzmann, 1971) (Iranian fiddler crab)
Austruca lactea (De Haan, 1835) (milky fiddler crab)
Austruca mjoebergi (Rathbun, 1924) (banana fiddler crab)
Austruca occidentalis (Naderloo, Schubart & Shih, 2016) (East African fiddler crab)
Austruca perplexa (H.Milne Edwards, 1852) (perplexing fiddler crab)
Austruca sindensis (Alcock, 1900) (Indus fiddler crab)
Austruca triangularis (A.Milne-Edwards, 1873) (triangular fiddler crab)
Austruca variegata (Heller, 1862) (motley fiddler crab)
Cranuca
Cranuca inversa (Hoffmann, 1874)
Gelasimus
Gelasimus borealis (Crane, 1975) (northern calling fiddler crab)
Gelasimus dampieri (Crane, 1975) (Dampier's fiddler crab)
Gelasimus excisa (eastern calling fiddler crab)
Gelasimus hesperiae (Crane, 1975) (western calling fiddler crab)
Gelasimus jocelynae (Shih, Naruse & Ng, 2010) (Jocelyn's fiddler crab)
Gelasimus neocultrimanus (Bott, 1973)
Gelasimus palustris Stimpson, 1862
Gelasimus pugilator Stimpson, 1862
Gelasimus rubripes Hombron & Jacquinot, 1846
Gelasimus subcylindricus Stimpson, 1862
Gelasimus tetragonon (Herbst, 1790) (tetragonal fiddler crab)
Gelasimus vocans (Linnaeus, 1758) (calling fiddler crab)
Gelasimus vomeris (McNeill, 1920) (orange-clawed fiddler crab)
Leptuca
Leptuca batuenta (Crane, 1941) (beating fiddler crab)
Leptuca beebei (Crane, 1941) (Beebe's fiddler crab)
Leptuca coloradensis (Rathbun, 1893) (painted fiddler crab)
Leptuca crenulata (Lockington, 1877) (Mexican fiddler crab)
Leptuca cumulanta (Crane, 1943) (heaping fiddler crab)
Leptuca deichmanni (Rathbun, 1935) (Deichmann's fiddler crab)
Leptuca dorotheae (von Hagen, 1968) (Dorothy's fiddler crab)
Leptuca festae (Nobili, 1902) (Festa's fiddler crab)
Leptuca helleri (Rathbun, 1902) (Heller's fiddler crab)
Leptuca inaequalis (Rathbun, 1935) (uneven fiddler crab)
Leptuca latimanus (Rathbun, 1893) (lateral-handed fiddler crab)
Leptuca leptodactyla (Rathbun, 1898) (thin-fingered fiddler crab)
Leptuca limicola (Crane, 1941) (Pacific mud fiddler crab)
Leptuca musica (Rathbun, 1914) (musical fiddler crab)
Leptuca oerstedi (Rathbun, 1904) (aqua fiddler crab)
Leptuca panacea (Novak & Salmon, 1974) (gulf sand fiddler crab)
Leptuca pugilator (Bosc, 1802) (Atlantic sand fiddler crab)
Leptuca pygmaea (Crane, 1941) (pygmy fiddler crab)
Leptuca saltitanta (Crane, 1941) (energetic fiddler crab)
Leptuca speciosa (Ives, 1891) (brilliant fiddler crab)
Leptuca spinicarpa (Rathbun, 1900) (spiny-wristed fiddler crab)
Leptuca stenodactylus (Milne-Edwards & Lucas, 1843) (narrow-fingered fiddler crab)
Leptuca subcylindrica (Stimpson, 1859) (Laguna Madre fiddler crab)
Leptuca tallanica (von Hagen, 1968) (Peruvian fiddler crab)
Leptuca tenuipedis (Crane, 1941) (slender-legged fiddler crab)
Leptuca terpsichores (Crane, 1941) (dancing fiddler crab)
Leptuca thayeri M. J. Rathbun, 1900 (Atlantic mangrove fiddler crab)
Leptuca tomentosa (Crane, 1941) (matted fiddler crab)
Leptuca umbratila (Crane, 1941) (Pacific mangrove fiddler crab)
Leptuca uruguayensis (Nobili, 1901) (Uruguayan fiddler crab)
Minuca
Minuca argillicola (Crane, 1941) (clay fiddler crab)
Minuca brevifrons (Stimpson, 1860) (narrow-fronted fiddler crab)
Minuca burgersi (Holthuis, 1967) (Burger's fiddler crab)
Minuca ecuadoriensis (Maccagno, 1928) (Pacific hairback fiddler crab)
Minuca galapagensis (Galápagos fiddler crab)
Minuca herradurensis (Bott, 1954) (La Herradura fiddler crab)
Minuca longisignalis (Salmon & Atsaides, 1968) (longwave gulf fiddler)
Minuca marguerita (Thurman, 1981) (Olmec fiddler crab)
Minuca minax (Le Conte, 1855) (red-jointed fiddler crab)
Minuca mordax (Smith, 1870) (biting fiddler crab)
Minuca osa (Landstorfer & Schubart, 2010) (Osa fiddler crab)
Minuca pugnax (S. I. Smith, 1870) (Atlantic marsh fiddler crab)
Minuca rapax (Smith, 1870) (mudflat fiddler crab)
Minuca umbratila Crane, 1941 (Pacific mangrove fiddler crab)
Minuca victoriana (von Hagen, 1987) (Victorian fiddler crab)
Minuca virens (Salmon & Atsaides, 1968) (green-banded fiddler crab)
Minuca vocator (Herbst, 1804) (Atlantic hairback fiddler crab)
Minuca zacae (Crane, 1941) (lesser Mexican fiddler crab)
Paraleptuca
Paraleptuca boninensis (Shih, Komai & Liu, 2013) (Bonin Islands fiddler crab)
Paraleptuca chlorophthalmus (H.Milne Edwards, 1837) (green-eyed fiddler crab)
Paraleptuca crassipes (White, 1847) (thick-legged fiddler crab)
Paraleptuca splendida (Stimpson, 1858) (splendid fiddler crab)
Petruca
Petruca panamensis Ng, Shih & Christy, 2015
Tubuca
Tubuca acuta (Stimpson, 1858) (acute fiddler crab)
Tubuca alcocki Shih, Chan & Ng, 2018 (Alcock's fiddler crab)
Tubuca arcuata (De Haan, 1835) (bowed fiddler crab)
Tubuca australiae (Crane, 1975)
Tubuca bellator (White, 1847) (belligerent fiddler crab)
Tubuca capricornis (Crane, 1975) (Capricorn fiddler crab)
Tubuca coarctata (H.Milne Edwards, 1852) (compressed fiddler crab)
Tubuca demani (Ortmann, 1897) (demanding fiddler crab)
Tubuca dussumieri (H.Milne Edwards, 1852) (Dussumier's fiddler crab)
Tubuca elegans (George & Jones, 1982) (elegant fiddler crab)
Tubuca flammula (Crane, 1975) (flame-backed fiddler crab)
Tubuca forcipata (Adams & White, 1849) (forceps fiddler crab)
Tubuca hirsutimanus (George & Jones, 1982) (hairy-handed fiddler crab)
Tubuca longidigitum (Kingsley, 1880) (long-fingered fiddler crab)
Tubuca paradussumieri (Bott, 1973) (spined fiddler crab)
Tubuca polita (Crane, 1975) (polished fiddler crab)
Tubuca rhizophorae (Tweedie, 1950) (Asian mangrove fiddler crab)
Tubuca rosea (Tweedie, 1937) (rose fiddler crab)
Tubuca seismella (Crane, 1975) (shaking fiddler crab)
Tubuca signata (Hess, 1865) (signaling fiddler crab)
Tubuca typhoni (Crane, 1975) (typhoon fiddler crab)
Tubuca urvillei (H.Milne Edwards, 1852) (D'Urville's fiddler crab)
Uca
†Uca antiqua Brito, 1972
Uca heteropleura (Smith, 1870) (American Red fiddler crab)
†Uca inaciobritoi Martins-Neto, 2001
Uca insignis (H.Milne Edwards, 1852) (distinguished fiddler crab)
Uca intermedia von Prahl & Toro, 1985 (intermediate fiddler crab)
Uca major Herbst, 1782 (greater fiddler crab)
†Uca marinae Dominguez-Alonso, 2008
Uca maracoani Latreille 1803 (Brazilian fiddler crab)
Uca monilifera Rathbun, 1914 (necklaced fiddler crab)
†Uca nitida Desmarest, 1822
†Uca oldroydi Rathbun, 1926
Uca ornata (Smith, 1870) (ornate fiddler crab)
Uca princeps (Smith, 1870) (large Mexican fiddler crab)
Uca stylifera (H.Milne Edwards, 1852) (styled fiddler crab)
Uca subcylindrica Stimpson, 1862 (Laguna Madre fiddler)
Xeruca
Xeruca formosensis (Rathbun, 1921)
Captivity
Fiddler crabs are occasionally kept as pets. The fiddler crabs sold in pet stores generally come from brackish water lagoons. Because they live in lower salinity water, pet stores may call them fresh-water crabs, but they cannot survive indefinitely in fresh water. Fiddler crabs have been known to attack small fish in captivity, as opposed to their natural feeding habits.
| Biology and health sciences | Crabs and hermit crabs | Animals |
463734 | https://en.wikipedia.org/wiki/Public%20health | Public health | Public health is "the science and art of preventing disease, prolonging life and promoting health through the organized efforts and informed choices of society, organizations, public and private, communities and individuals". Analyzing the determinants of health of a population and the threats it faces is the basis for public health. The public can be as small as a handful of people or as large as a village or an entire city; in the case of a pandemic it may encompass several continents. The concept of health takes into account physical, psychological, and social well-being, among other factors.
Public health is an interdisciplinary field. For example, epidemiology, biostatistics, social sciences and management of health services are all relevant. Other important sub-fields include environmental health, community health, behavioral health, health economics, public policy, mental health, health education, health politics, occupational safety, disability, oral health, gender issues in health, and sexual and reproductive health. Public health, together with primary care, secondary care, and tertiary care, is part of a country's overall healthcare system. Public health is implemented through the surveillance of cases and health indicators, and through the promotion of healthy behaviors. Common public health initiatives include promotion of hand-washing and breastfeeding, delivery of vaccinations, promoting ventilation and improved air quality both indoors and outdoors, suicide prevention, smoking cessation, obesity education, increasing healthcare accessibility and distribution of condoms to control the spread of sexually transmitted diseases.
There is a significant disparity in access to health care and public health initiatives between developed countries and developing countries, as well as within developing countries. In developing countries, public health infrastructures are still forming. There may not be enough trained healthcare workers, monetary resources, or, in some cases, sufficient knowledge to provide even a basic level of medical care and disease prevention. A major public health concern in developing countries is poor maternal and child health, exacerbated by malnutrition and poverty coupled with governments' reluctance in implementing public health policies. Developed nations are at greater risk of certain public health crises, including childhood obesity, although overweight populations in low- and middle-income countries are catching up.
From the beginnings of human civilization, communities promoted health and fought disease at the population level. In complex, pre-industrialized societies, interventions designed to reduce health risks could be the initiative of different stakeholders, such as army generals, the clergy or rulers. Great Britain became a leader in the development of public health initiatives, beginning in the 19th century, due to the fact that it was the first modern urban nation worldwide. The public health initiatives that began to emerge initially focused on sanitation (for example, the Liverpool and London sewerage systems), control of infectious diseases (including vaccination and quarantine) and an evolving infrastructure of various sciences, e.g. statistics, microbiology, epidemiology, sciences of engineering.
Definition
Public health has been defined as "the science and art of preventing disease", prolonging life and improving quality of life through organized efforts and informed choices of society, organizations (public and private), communities and individuals. The public can be as small as a handful of people or as large as a village or an entire city. The concept of health takes into account physical, psychological, and social well-being. As such, according to the World Health Organization, "health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity".
Related terms
Public health is related to global health which is the health of populations in the worldwide context. It has been defined as "the area of study, research and practice that places a priority on improving health and achieving equity in "Health for all" people worldwide". International health is a field of health care, usually with a public health emphasis, dealing with health across regional or national boundaries. Public health is not the same as public healthcare (publicly funded health care).
The term preventive medicine is related to public health. The American Board of Preventive Medicine recognizes three categories of preventive medicine: aerospace medicine, occupational medicine, and public health and general preventive medicine. Jung and Lushniak argue that preventive medicine should be considered the medical specialty for public health, but note that the American College of Preventive Medicine and the American Board of Preventive Medicine do not prominently use the term "public health". Preventive medicine specialists are trained as clinicians and address the complex health needs of a population, for example by assessing the need for disease prevention programs, using the best methods to implement them, and assessing their effectiveness.
Since the 1990s many scholars in public health have been using the term population health. There are no medical specialties directly related to population health. Valles argues that consideration of health equity is a fundamental part of population health. Scholars such as Coggon and Pielke express concerns about bringing general issues of wealth distribution into population health; Pielke worries about "stealth issue advocacy" in population health. Jung and Lushniak consider population health to be a concept that is the goal of an activity called public health, practiced through the specialty of preventive medicine.
Lifestyle medicine uses individual lifestyle modification to prevent or revert disease and can be considered a component of preventive medicine and public health. It is implemented as part of primary care rather than a specialty in its own right. Valles argues that the term social medicine has a narrower and more biomedical focus than the term population health.
Purpose
The purpose of a public health intervention is to prevent and mitigate diseases, injuries, and other health conditions. The overall goal is to improve the health of individuals and populations, and to increase life expectancy.
Components
Public health is a complex term, composed of many elements and different practices. It is a multi-faceted, interdisciplinary field. For example, epidemiology, biostatistics, social sciences and management of health services are all relevant. Other important sub-fields include environmental health, community health, behavioral health, health economics, public policy, mental health, health education, health politics, occupational safety, disability, gender issues in health, and sexual and reproductive health.
Modern public health practice requires multidisciplinary teams of public health workers and professionals. Teams might include epidemiologists, biostatisticians, physician assistants, public health nurses, midwives, medical microbiologists, pharmacists, economists, sociologists, geneticists, data managers, environmental health officers (public health inspectors), bioethicists, gender experts, sexual and reproductive health specialists, physicians, and veterinarians.
The elements and priorities of public health have evolved over time, and are continuing to evolve. Common public health initiatives include promotion of hand-washing and breastfeeding, delivery of vaccinations, suicide prevention, smoking cessation, obesity education, increasing healthcare accessibility and distribution of condoms to control the spread of sexually transmitted diseases.
Methods
Public health aims are achieved through surveillance of cases and the promotion of healthy behaviors, communities and environments. Analyzing the determinants of health of a population and the threats it faces is the basis for public health.
Many diseases are preventable through simple, nonmedical methods. For example, research has shown that the simple act of handwashing with soap can prevent the spread of many contagious diseases. In other cases, treating a disease or controlling a pathogen can be vital to preventing its spread to others, either during an outbreak of infectious disease or through contamination of food or water supplies.
Public health, together with primary care, secondary care, and tertiary care, is part of a country's overall health care system. Many interventions of public health interest are delivered outside of health facilities, such as food safety surveillance, distribution of condoms and needle-exchange programs for the prevention of transmissible diseases.
Public health requires Geographic Information Systems (GIS) because risk, vulnerability and exposure involve geographic aspects.
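As a minimal illustration of the kind of geographic calculation GIS tools perform, the sketch below computes great-circle distances from reported case locations to a clinic using the haversine formula. The coordinates, place roles, and function name are invented for the example; real GIS workflows use dedicated libraries and projected spatial data.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres (haversine formula)."""
    earth_radius_km = 6371.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# Invented example: distance from each reported case to a hypothetical clinic.
clinic = (51.5074, -0.1278)
cases = [(51.5155, -0.0922), (51.4613, -0.1156)]
for lat, lon in cases:
    print(f"case at ({lat}, {lon}): {haversine_km(lat, lon, *clinic):.1f} km from clinic")
```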
Ethics
A dilemma in public health ethics is the conflict between individual rights and maximizing the right to health. Public health is justified by consequentialist utilitarian ideas, but is constrained and critiqued by liberal, deontological, principlist and libertarian philosophies. Stephen Holland argues that it can be easy to find a particular framework to justify any viewpoint on public health issues, but that the correct approach is to find the framework that best describes a situation and see what it implies about public health policy.
The definition of health is vague and there are many conceptualizations. Public health practitioners' definitions of health can differ markedly from those of members of the public or clinicians. This can mean that members of the public view the values behind public health interventions as alien, which can cause resentment towards certain interventions. Such vagueness can be a problem for health promotion. Critics have argued that public health tends to place more focus on individual factors associated with health at the expense of factors operating at the population level.
Historically, public health campaigns have been criticized as a form of "healthism", as moralistic in nature rather than being focused on health. The medical doctors Petr Skrabanek and James McCormick wrote a series of publications on this topic in the late 1980s and early 1990s criticizing the UK's Health of the Nation campaign. These publications exposed abuse of epidemiology and statistics by the public health movement to support lifestyle interventions and screening programs. A combination of inculcating a fear of ill-health and a strong notion of individual responsibility has been criticized as a form of "health fascism" by a number of scholars, objectifying the individual without consideration of emotional or social factors.
Priority areas
Original focal areas
When public health initiatives began to emerge in England in modern times (18th century onwards) there were three core strands of public health, all related to statecraft: supply of clean water and sanitation (for example, the London sewerage system); control of infectious diseases (including vaccination and quarantine); and an evolving infrastructure of various sciences, e.g. statistics, microbiology, epidemiology, and the engineering sciences. Great Britain was a leader in the development of public health during that period out of necessity: it was the first modern urban nation (by 1851 more than half of the population lived in settlements of more than 2,000 people). This rapid urbanization created new forms of distress, which in turn prompted public health initiatives; later, that particular concern faded away.
Changing focal areas and expanding scope
With the onset of the epidemiological transition and as the prevalence of infectious diseases decreased through the 20th century, public health began to put more focus on chronic diseases such as cancer and heart disease. Previous efforts in many developed countries had already led to dramatic reductions in the infant mortality rate using preventive methods. In Britain, the infant mortality rate fell from over 15% in 1870 to 7% by 1930.
A major public health concern in developing countries is poor maternal and child health, exacerbated by malnutrition and poverty. The WHO reports that a lack of exclusive breastfeeding during the first six months of life contributes to over a million avoidable child deaths each year.
Public health surveillance has led to the identification and prioritization of many public health issues facing the world today, including HIV/AIDS, diabetes, waterborne diseases, zoonotic diseases, and antibiotic resistance leading to the reemergence of infectious diseases such as tuberculosis. Antibiotic resistance, also known as drug resistance, was the theme of World Health Day 2011.
For example, the WHO reports that at least 220 million people worldwide have diabetes. Its incidence is increasing rapidly, and it is projected that the number of diabetes deaths will double by 2030. In a June 2010 editorial in the medical journal The Lancet, the authors opined that "The fact that type 2 diabetes, a largely preventable disorder, has reached epidemic proportion is a public health humiliation." The risk of type 2 diabetes is closely linked with the growing problem of obesity. The WHO's latest estimates highlighted that, globally, approximately 1.9 billion adults and 41 million children under the age of five were overweight in 2014. Once considered a problem in high-income countries, obesity is now on the rise in low-income countries, especially in urban settings.
Many public health programs are increasingly dedicating attention and resources to the issue of obesity, with objectives to address the underlying causes including healthy diet and physical exercise. The National Institute for Health and Care Research (NIHR) has published a review of research on what local authorities can do to tackle obesity. The review covers interventions in the food environment (what people buy and eat), the built and natural environments, schools, and the community, as well as those focussing on active travel, leisure services and public sports, weight management programmes, and system-wide approaches.
Health inequalities, driven by the social determinants of health, are also a growing area of concern in public health. A central challenge to securing health equity is that the same social structures that contribute to health inequities also operate and are reproduced by public health organizations. In other words, public health organizations have evolved to better meet the needs of some groups more than others. The result is often that those most in need of preventative interventions are least likely to receive them and interventions can actually aggravate inequities as they are often inadvertently tailored to the needs of the normative group. Identifying bias within public health research and practice is essential to ensuring public health efforts mitigate and don't aggravate health inequities.
Organizations
World Health Organization (WHO)
The World Health Organization (WHO) is a specialized agency of the United Nations responsible for international public health. The WHO Constitution, which establishes the agency's governing structure and principles, states its main objective as "the attainment by all peoples of the highest possible level of health". The WHO's broad mandate includes advocating for universal healthcare, monitoring public health risks, coordinating responses to health emergencies, and promoting human health and well-being. The WHO has played a leading role in several public health achievements, most notably the eradication of smallpox, the near-eradication of polio, and the development of an Ebola vaccine. Its current priorities include communicable diseases, particularly HIV/AIDS, Ebola, COVID-19, malaria and tuberculosis; non-communicable diseases such as heart disease and cancer; healthy diet, nutrition, and food security; occupational health; and substance abuse.
Others
Most countries have their own governmental public health agency, often called the ministry of health, with responsibility for domestic health issues.
For example, in the United States, state and local health departments are on the front line of public health initiatives. In addition to their national duties, the United States Public Health Service (PHS), led by the Surgeon General of the United States Public Health Service, and the Centers for Disease Control and Prevention, headquartered in Atlanta, are also involved with international health activities.
Public health programs
Most governments recognize the importance of public health programs in reducing the incidence of disease, disability, and the effects of aging and other physical and mental health conditions. However, public health generally receives significantly less government funding compared with medicine. Although the collaboration of local health and government agencies is considered best practice to improve public health, the evidence available to support this is limited. Public health programs providing vaccinations have made major progress in promoting health, including substantially reducing the occurrence of cholera and polio and eradicating smallpox, diseases that have plagued humanity for thousands of years.
The World Health Organization (WHO) identifies core functions of public health programs including:
providing leadership on matters critical to health and engaging in partnerships where joint action is needed;
shaping a research agenda and stimulating the generation, translation and dissemination of valuable knowledge;
setting norms and standards and promoting and monitoring their implementation;
articulating ethical and evidence-based policy options;
monitoring the health situation and assessing health trends.
In particular, public health surveillance programs can:
serve as an early warning system for impending public health emergencies;
document the impact of an intervention, or track progress towards specified goals;
monitor and clarify the epidemiology of health problems, allow priorities to be set, and inform health policy and strategies; and
diagnose, investigate, and monitor health problems and health hazards of the community.
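As a concrete illustration of the early-warning function listed above, the following is a minimal sketch of threshold-based outbreak detection: weekly case counts are flagged when they exceed a recent baseline mean by more than two standard deviations. The data, threshold, and function name are illustrative assumptions, not any agency's actual surveillance algorithm.

```python
from statistics import mean, stdev

def flag_unusual_weeks(case_counts, baseline_weeks=8, z_threshold=2.0):
    """Flag weeks whose case count exceeds the baseline mean by more than
    z_threshold standard deviations. A simplified illustration of
    threshold-based early-warning surveillance, not an official algorithm."""
    alerts = []
    for week in range(baseline_weeks, len(case_counts)):
        baseline = case_counts[week - baseline_weeks:week]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (case_counts[week] - mu) / sigma > z_threshold:
            alerts.append(week)
    return alerts

# Hypothetical weekly counts of a notifiable disease in one district.
weekly_cases = [4, 6, 5, 7, 5, 6, 4, 5, 6, 19, 23, 8]
print(flag_unusual_weeks(weekly_cases))   # -> [9, 10]: weeks 9 and 10 are flagged
```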
The "Truth" campaign, launched by the American Legacy Foundation in 2000. This campaign aimed to educate and discourage young people from smoking by exposing the tobacco industry's deceptive practices. Through a combination of powerful visuals, persuasive storytelling, and relatable messaging, the "Truth" campaign successfully reduced smoking rates among teenagers and young adults.
Behavior change
Many health problems are due to maladaptive personal behaviors. From an evolutionary psychology perspective, overconsumption of harmful novel substances is due to the activation of an evolved reward system by substances such as drugs, tobacco, alcohol, refined salt, fat, and carbohydrates. New technologies such as modern transportation also cause reduced physical activity. Research has found that behavior is more effectively changed by taking evolutionary motivations into consideration instead of only presenting information about health effects. The marketing industry has long known the importance of associating products with high status and attractiveness to others. Films are increasingly being recognized as a public health tool, with Harvard University's T.H. Chan School of Public Health categorizing such films as "impact filmmaking." In fact, film festivals and competitions have been established specifically to promote films about health. Conversely, it has been argued that emphasizing the harmful and undesirable effects of tobacco smoking on other people, and imposing smoking bans in public places, have been particularly effective in reducing tobacco smoking. Public libraries can also be beneficial tools for public health change: they provide access to healthcare information, link people to healthcare services, and can even provide direct care in certain situations.
Applications in health care
As well as seeking to improve population health through the implementation of specific population-level interventions, public health contributes to medical care by identifying and assessing population needs for health care services, including:
Assessing current services and evaluating whether they are meeting the objectives of the health care system
Ascertaining requirements as expressed by health professionals, the public and other stakeholders
Identifying the most appropriate interventions
Considering the effect on resources for proposed interventions and assessing their cost-effectiveness
Supporting decision making in health care and planning health services including any necessary changes.
Informing, educating, and empowering people about health issues
Conflicting aims
Some programs and policies associated with public health promotion and prevention can be controversial. One such example is programs focusing on the prevention of HIV transmission through safe sex campaigns and needle-exchange programs. Another is the control of tobacco smoking. Many nations have implemented major initiatives to cut smoking, such as increased taxation and bans on smoking in some or all public places. Supporters argue by presenting evidence that smoking is one of the major killers, and that therefore governments have a duty to reduce the death rate, both through limiting passive (second-hand) smoking and by providing fewer opportunities for people to smoke. Opponents say that this undermines individual freedom and personal responsibility, and worry that the state may be encouraged to remove more and more choice in the name of better population health overall.
Psychological research confirms this tension between concerns about public health and concerns about personal liberty: (i) the best predictor of complying with public health recommendations such as hand-washing, mask-wearing, and staying at home (except for essential activity) during the COVID-19 pandemic was people's perceived duties to prevent harm but (ii) the best predictor of flouting such public health recommendations was valuing liberty more than equality.
Simultaneously, while communicable diseases have historically ranged uppermost as a global health priority, non-communicable diseases and the underlying behavior-related risk factors have been at the bottom. This is changing, however, as illustrated by the United Nations hosting its first General Assembly Special Summit on the issue of non-communicable diseases in September 2011.
Global perspectives
Disparities in service and access
There is a significant disparity in access to health care and public health initiatives between developed countries and developing countries, as well as within developing countries. In developing countries, public health infrastructures are still forming. There may not be enough trained health workers, monetary resources or, in some cases, sufficient knowledge to provide even a basic level of medical care and disease prevention. As a result, a large majority of disease and mortality in developing countries results from and contributes to extreme poverty. For example, many African governments spend less than US$100 per person per year on health care, while, in the United States, the federal government spent approximately US$10,600 per capita in 2019. However, expenditures on health care should not be confused with spending on public health. Public health measures may not generally be considered "health care" in the strictest sense. For example, mandating the use of seat belts in cars can save countless lives and contribute to the health of a population, but typically money spent enforcing this rule would not count as money spent on health care.
Large parts of the world remain plagued by largely preventable or treatable infectious diseases. In addition, many developing countries are experiencing an epidemiological shift and polarization in which populations now experience more of the effects of chronic diseases as life expectancy increases, with poorer communities heavily affected by both chronic and infectious diseases. Another major public health concern in the developing world is poor maternal and child health, exacerbated by malnutrition and poverty. The WHO reports that a lack of exclusive breastfeeding during the first six months of life contributes to over a million avoidable child deaths each year. Intermittent preventive therapy aimed at treating and preventing malaria episodes among pregnant women and young children is one public health measure in endemic countries.
Since the 1980s, the growing field of population health has broadened the focus of public health from individual behaviors and risk factors to population-level issues such as inequality, poverty, and education. Modern public health is often concerned with addressing determinants of health across a population. There is a recognition that health is affected by many factors including class, race, income, educational status, region of residence, and social relationships; these are known as "social determinants of health". Upstream drivers such as environment, education, employment, income, food security, housing, social inclusion and many others affect the distribution of health between and within populations and are often shaped by policy. A social gradient in health runs through society. The poorest generally have the worst health, but even the middle classes will generally have worse health outcomes than those of a higher social level. The new public health advocates for population-based policies that improve health in an equitable manner.
The health sector is one of Europe's most labor-intensive industries. In late 2020, it accounted for more than 21 million jobs in the European Union when combined with social work. According to the WHO, several countries began the COVID-19 pandemic with insufficient health and care professionals, inappropriate skill mixes, and unequal geographical distributions. These issues were worsened by the pandemic, reiterating the importance of public health. In the United States, a history of underinvestment in public health undermined the public health workforce and support for population health long before the pandemic added to stress, mental distress, job dissatisfaction, and accelerated departures among public health workers.
Health aid in developing countries
Health aid to developing countries is an important source of public health funding for many developing countries. It has shown a significant increase since World War II, as concerns over the spread of disease as a result of globalization increased and the HIV/AIDS epidemic in sub-Saharan Africa surfaced. From 1990 to 2010, total health aid from developed countries increased from $5.5 billion to $26.87 billion, with wealthy countries continuously donating billions of dollars every year with the goal of improving population health. Some efforts, however, receive a significantly larger proportion of funds: HIV/AIDS, for example, received an increase in funding of over $6 billion between 2000 and 2010, more than twice the increase seen in any other sector during those years. Health aid has expanded through multiple channels, including private philanthropy, non-governmental organizations, private foundations such as the Rockefeller Foundation or the Bill & Melinda Gates Foundation, bilateral donors, and multilateral donors such as the World Bank or UNICEF. The result has been a sharp rise in uncoordinated and fragmented funding of an ever-increasing number of initiatives and projects. To promote better strategic cooperation and coordination between partners, particularly among bilateral development agencies and funding organizations, the Swedish International Development Cooperation Agency (Sida) spearheaded the establishment of ESSENCE, an initiative to facilitate dialogue between donors and funders, allowing them to identify synergies. ESSENCE brings together a wide range of funding agencies to coordinate funding efforts.
In 2009, health aid from the OECD amounted to $12.47 billion, or 11.4% of its total bilateral aid. In the same year, multilateral donors were found to spend 15.3% of their total aid on bettering public healthcare.
International health aid debates
Debates exist questioning the efficacy of international health aid. Supporters of aid claim that health aid from wealthy countries is necessary for developing countries to escape the poverty trap. Opponents claim that international health aid actually disrupts developing countries' course of development, causes dependence on aid, and in many cases fails to reach its intended recipients. For example, health aid has recently been funneled towards initiatives such as financing new technologies like antiretroviral medication, insecticide-treated mosquito nets, and new vaccines. The positive impacts of such initiatives can be seen in the eradication of smallpox and the near-eradication of polio; however, critics claim that misuse or misplacement of funds may cause many of these efforts never to be realized.
Economic modeling based on the Institute for Health Metrics and Evaluation and the World Health Organization has shown a link between international health aid in developing countries and a reduction in adult mortality rates. However, a 2014–2016 study suggests that a potential confounding variable for this outcome is the possibility that aid was directed at countries once they were already on track for improvement. That same study, however, also suggests that 1 billion dollars in health aid was associated with 364,000 fewer deaths occurring between ages 0 and 5 in 2011.
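Taken at face value, the figures cited in that study imply a simple back-of-the-envelope cost per averted under-five death, sketched below. This is only an arithmetic illustration of the reported association, not a causal or cost-effectiveness estimate, and it ignores the confounding discussed above.

```python
# Back-of-the-envelope arithmetic using the figures cited above:
# $1 billion in health aid associated with 364,000 fewer under-five deaths in 2011.
aid_usd = 1_000_000_000
deaths_averted = 364_000
print(f"approx. ${aid_usd / deaths_averted:,.0f} of aid per averted under-five death")
# -> approx. $2,747 of aid per averted under-five death
```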
Sustainable development goals for 2030
To address current and future challenges in addressing health issues in the world, the United Nations has developed the Sustainable Development Goals, to be completed by 2030. These goals in their entirety encompass the whole spectrum of development across nations; however, Goals 1–6 directly address health disparities, primarily in developing countries. These six goals address key issues in global public health: poverty, hunger and food security, health, education, gender equality and women's empowerment, and water and sanitation. Public health officials can use these goals to set their own agendas and plan smaller-scale initiatives for their organizations. These goals are designed to lessen the burden of disease and inequality faced by developing countries and lead to a healthier future. The links between the various sustainable development goals and public health are numerous and well established.
History
Until the 18th century
From the beginnings of human civilization, communities promoted health and fought disease at the population level. Definitions of health as well as methods to pursue it differed according to the medical, religious and natural-philosophical ideas groups held, the resources they had, and the changing circumstances in which they lived. Yet few early societies displayed the hygienic stagnation or even apathy often attributed to them. The latter reputation is mainly based on the absence of present-day bioindicators, especially immunological and statistical tools developed in light of the germ theory of disease transmission.
Public health was born neither in Europe nor as a response to the Industrial Revolution. Preventive health interventions are attested almost anywhere historical communities have left their mark. In Southeast Asia, for instance, Ayurvedic medicine and subsequently Buddhism fostered occupational, dietary and sexual regimens that promised balanced bodies, lives and communities, a notion strongly present in Traditional Chinese Medicine as well. Among the Mayans, Aztecs and other early civilizations in the Americas, population centers pursued hygienic programs, including by holding medicinal herbal markets. And among Aboriginal Australians, techniques for preserving and protecting water and food sources, micro-zoning to reduce pollution and fire risks, and screens to protect people against flies were common, even in temporary camps.
Western European, Byzantine and Islamicate civilizations, which generally adopted a Hippocratic, Galenic or humoral medical system, fostered preventive programs as well. These were developed on the basis of evaluating the quality of local climates, including topography, wind conditions and exposure to the sun, and the properties and availability of water and food, for both humans and nonhuman animals. Diverse authors of medical, architectural, engineering and military manuals explained how to apply such theories to groups of different origins and under different circumstances. This was crucial, since under Galenism bodily constitutions were thought to be heavily shaped by their material environments, so their balance required specific regimens as they traveled during different seasons and between climate zones.
In complex, pre-industrialized societies, interventions designed to reduce health risks could be the initiative of different stakeholders. For instance, in Greek and Roman antiquity, army generals learned to provide for soldiers' wellbeing, including off the battlefield, where most combatants died prior to the twentieth century. In Christian monasteries across the Eastern Mediterranean and western Europe since at least the fifth century CE, monks and nuns pursued strict but balanced regimens, including nutritious diets, developed explicitly to extend their lives. And royal, princely and papal courts, which were often mobile as well, likewise adapted their behavior to suit environmental conditions in the sites they occupied. They could also choose sites they considered salubrious for their members and sometimes had them modified.
In cities, residents and rulers developed measures to benefit the general population, which faced a broad array of recognized health risks. These provide some of the most sustained evidence for preventive measures in earlier civilizations. In numerous sites the upkeep of infrastructures, including roads, canals and marketplaces, as well as zoning policies, were introduced explicitly to preserve residents' health. Officials such as the muhtasib in the Middle East and the Road master in Italy, fought the combined threats of pollution through sin, ocular intromission and miasma. Craft guilds were important agents of waste disposal and promoted harm reduction through honesty and labor safety among their members. Medical practitioners, including public physicians, collaborated with urban governments in predicting and preparing for calamities and identifying and isolating people perceived as lepers, a disease with strong moral connotations. Neighborhoods were also active in safeguarding local people's health, by monitoring at-risk sites near them and taking appropriate social and legal action against artisanal polluters and neglectful owners of animals. Religious institutions, individuals and charitable organizations in both Islam and Christianity likewise promoted moral and physical wellbeing by endowing urban amenities such as wells, fountains, schools and bridges, also in the service of pilgrims. In western Europe and Byzantium, religious processions commonly took place, which purported to act as both preventive and curative measures for the entire community.
Urban residents and other groups also developed preventive measures in response to calamities such as war, famine, floods and widespread disease. During and after the Black Death (1346–53), for instance, inhabitants of the Eastern Mediterranean and Western Europe reacted to massive population decline in part on the basis of existing medical theories and protocols, for instance concerning meat consumption and burial, and in part by developing new ones. The latter included the establishment of quarantine facilities and health boards, some of which eventually became regular urban (and later national) offices. Subsequent measures for protecting cities and their regions included issuing health passports for travelers, deploying guards to create sanitary cordons for protecting local inhabitants, and gathering morbidity and mortality statistics. Such measures relied in turn on better transportation and communication networks, through which news on human and animal disease was efficiently spread.
After the 18th century
With the onset of the Industrial Revolution, living standards amongst the working population began to worsen, with cramped and unsanitary urban conditions. In the first four decades of the 19th century alone, London's population doubled and even greater growth rates were recorded in the new industrial towns, such as Leeds and Manchester. This rapid urbanization exacerbated the spread of disease in the large conurbations that built up around the workhouses and factories. These settlements were cramped and primitive with no organized sanitation. Disease was inevitable and its incubation in these areas was encouraged by the poor lifestyle of the inhabitants. A shortage of housing led to the rapid growth of slums and the per capita death rate began to rise alarmingly, almost doubling in Birmingham and Liverpool. Thomas Malthus warned of the dangers of overpopulation in 1798. His ideas, as well as those of Jeremy Bentham, became very influential in government circles in the early years of the 19th century. The latter part of the century brought the establishment of the basic pattern of improvements in public health over the next two centuries: a social evil was identified, private philanthropists brought attention to it, and changing public opinion led to government action. The 18th century saw rapid growth in voluntary hospitals in England.
The practice of vaccination became prevalent in the 1800s, following the pioneering work of Edward Jenner against smallpox. James Lind's discovery of the causes of scurvy amongst sailors and its mitigation via the introduction of fruit on lengthy voyages was published in 1754 and led to the adoption of this idea by the Royal Navy. Efforts were also made to promulgate health matters to the broader public; in 1752 the British physician Sir John Pringle published Observations on the Diseases of the Army in Camp and Garrison, in which he advocated for the importance of adequate ventilation in military barracks and the provision of latrines for the soldiers.
Public health legislation in England
The first attempts at sanitary reform and the establishment of public health institutions were made in the 1840s. Thomas Southwood Smith, physician at the London Fever Hospital, began to write papers on the importance of public health, and was one of the first physicians brought in to give evidence before the Poor Law Commission in the 1830s, along with Neil Arnott and James Phillips Kay. Smith advised the government on the importance of quarantine and sanitary improvement for limiting the spread of infectious diseases such as cholera and yellow fever.
The Poor Law Commission reported in 1838 that "the expenditures necessary to the adoption and maintenance of measures of prevention would ultimately amount to less than the cost of the disease now constantly engendered". It recommended the implementation of large scale government engineering projects to alleviate the conditions that allowed for the propagation of disease. The Health of Towns Association was formed at Exeter Hall London on 11 December 1844, and vigorously campaigned for the development of public health in the United Kingdom. Its formation followed the 1843 establishment of the Health of Towns Commission, chaired by Sir Edwin Chadwick, which produced a series of reports on poor and insanitary conditions in British cities.
These national and local movements led to the Public Health Act, finally passed in 1848. It aimed to improve the sanitary condition of towns and populous places in England and Wales by placing the supply of water, sewerage, drainage, cleansing and paving under a single local body with the General Board of Health as a central authority. The Act was passed by the Liberal government of Lord John Russell, in response to the urging of Edwin Chadwick. Chadwick's seminal report on The Sanitary Condition of the Labouring Population was published in 1842 and was followed up with a supplementary report a year later. During this time, James Newlands (appointed following the passing of the 1846 Liverpool Sanatory Act championed by the Borough of Liverpool Health of Towns Committee) designed the world's first integrated sewerage system, in Liverpool (1848–1869), with Joseph Bazalgette later creating London's sewerage system (1858–1875).
The Vaccination Act 1853 introduced compulsory smallpox vaccination in England and Wales. By 1871 legislation required a comprehensive system of registration run by appointed vaccination officers.
Further interventions were made by a series of subsequent Public Health Acts, notably the 1875 Act. Reforms included the building of sewers, the regular collection of garbage followed by incineration or disposal in a landfill, the provision of clean water and the draining of standing water to prevent the breeding of mosquitoes.
The Infectious Disease (Notification) Act 1889 (52 & 53 Vict. c. 72) mandated the reporting of infectious diseases to the local sanitary authority, which could then pursue measures such as the removal of the patient to hospital and the disinfection of homes and properties.
Public health legislation in other countries
In the United States, the first public health organization based on a state health department and local boards of health was founded in New York City in 1866.
During the Weimar Republic, Germany faced many public health catastrophes. The Nazi Party had a goal of modernizing health care with Volksgesundheit, German for "the people's health"; this modernization was based on the growing field of eugenics and on measures prioritizing group health over any care for the health of individuals. The end of World War II led to the Nuremberg Code, a set of research ethics principles concerning human experimentation.
Epidemiology
The science of epidemiology was founded by John Snow's identification of a polluted public water well as the source of an 1854 cholera outbreak in London. Snow believed in the germ theory of disease as opposed to the prevailing miasma theory. By talking to local residents (with the help of Reverend Henry Whitehead), he identified the source of the outbreak as the public water pump on Broad Street (now Broadwick Street). Although Snow's chemical and microscope examination of a water sample from the Broad Street pump did not conclusively prove its danger, his studies of the pattern of the disease were convincing enough to persuade the local council to close the well pump by removing its handle.
Snow later used a dot map to illustrate the cluster of cholera cases around the pump. He also used statistics to illustrate the connection between the quality of the water source and cholera cases. He showed that the Southwark and Vauxhall Waterworks Company was taking water from sewage-polluted sections of the Thames and delivering the water to homes, leading to an increased incidence of cholera. Snow's study was a major event in the history of public health and geography. It is regarded as the founding event of the science of epidemiology.
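Snow's approach can be framed as a simple rate comparison. The sketch below uses invented placeholder figures, not Snow's actual tabulations; it is meant only to show the method of comparing cholera death rates per household served by each water company.

```python
# Illustrative sketch only: the figures below are invented placeholders,
# not Snow's actual tabulations. The point is the method -- comparing
# cholera death rates per household served by each water company.
suppliers = {
    # supplier name: (houses_served, cholera_deaths)  -- hypothetical numbers
    "Southwark & Vauxhall (sewage-polluted intake)": (40_000, 1_200),
    "Lambeth (upstream intake)": (26_000, 100),
}

for name, (houses, deaths) in suppliers.items():
    rate = deaths / houses * 10_000  # deaths per 10,000 houses
    print(f"{name}: {rate:.0f} deaths per 10,000 houses")

# A large difference in rates between otherwise similar neighbourhoods
# points to the water supply, rather than "miasma", as the common cause.
```

A disparity of this kind, across households that differed mainly in their water supplier, is the statistical core of Snow's argument.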
Control of infectious diseases
With the pioneering work in bacteriology of French chemist Louis Pasteur and German scientist Robert Koch, methods for isolating the bacteria responsible for a given disease, and vaccines against them, were developed at the turn of the 20th century. British physician Ronald Ross identified the mosquito as the carrier of malaria and laid the foundations for combating the disease. Joseph Lister revolutionized surgery by introducing antiseptic surgery to eliminate infection. French epidemiologist Paul-Louis Simond proved that plague was carried by fleas on the backs of rats, and Cuban scientist Carlos J. Finlay and U.S. Americans Walter Reed and James Carroll demonstrated that mosquitoes carry the virus responsible for yellow fever. Brazilian scientist Carlos Chagas identified a tropical disease, later named Chagas disease, and its insect vector.
Society and culture
Education and training
Education and training of public health professionals is available throughout the world in Schools of Public Health, Medical Schools, Veterinary Schools, Schools of Nursing, and Schools of Public Affairs. The training typically requires a university degree with a focus on core disciplines of biostatistics, epidemiology, health services administration, health policy, health education, behavioral science, gender issues, sexual and reproductive health, public health nutrition, and occupational and environmental health.
In the global context, the field of public health education has evolved enormously in recent decades, supported by institutions such as the World Health Organization and the World Bank, among others. Operational structures are formulated by strategic principles, with educational and career pathways guided by competency frameworks, all requiring modulation according to local, national and global realities. Moreover, integrating technology or digital platforms to connect with low health literacy (LHL) groups could be a way to increase health literacy. It is critically important for the health of populations that nations assess their public health human resource needs and develop their ability to deliver this capacity, rather than depending on other countries to supply it.
Schools of public health: a US perspective
In the United States, the Welch-Rose Report of 1915 has been viewed as the basis for the critical movement in the history of the institutional schism between public health and medicine because it led to the establishment of schools of public health supported by the Rockefeller Foundation. The report was authored by William Welch, founding dean of the Johns Hopkins Bloomberg School of Public Health, and Wickliffe Rose of the Rockefeller Foundation. The report focused more on research than practical education. Some have blamed the Rockefeller Foundation's 1916 decision to support the establishment of schools of public health for creating the schism between public health and medicine and legitimizing the rift between medicine's laboratory investigation of the mechanisms of disease and public health's nonclinical concern with environmental and social influences on health and wellness.
Even though schools of public health had already been established in Canada, Europe and North Africa, the United States had still maintained the traditional system of housing faculties of public health within their medical institutions. A $25,000 donation from businessman Samuel Zemurray instituted the School of Public Health and Tropical Medicine at Tulane University in 1912; it conferred its first doctor of public health degree in 1914. The Yale School of Public Health was founded by Charles-Edward Amory Winslow in 1915. The Johns Hopkins School of Hygiene and Public Health was founded in 1916 and became an independent, degree-granting institution for research and training in public health, and the largest public health training facility in the United States. By 1922, schools of public health were established at Columbia and Harvard on the Hopkins model. By 1999 there were twenty-nine schools of public health in the US, enrolling around fifteen thousand students.
Over the years, the types of students and training provided have also changed. In the beginning, students who enrolled in public health schools typically had already obtained a medical degree; public health school training was largely a second degree for medical professionals. However, in 1978, 69% of American students enrolled in public health schools had only a bachelor's degree.
Degrees in public health
Schools of public health offer a variety of degrees that generally fall into two categories: professional or academic. The two major postgraduate degrees are the Master of Public Health (MPH) and the Master of Science in Public Health (MSPH). Doctoral studies in this field include the Doctor of Public Health (DrPH) and the Doctor of Philosophy (PhD) in a subspecialty of the broader public health disciplines. The DrPH is regarded as a professional degree and the PhD as more of an academic degree.
Professional degrees are oriented towards practice in public health settings. The Master of Public Health, Doctor of Public Health, Doctor of Health Science (DHSc/DHS) and the Master of Health Care Administration are examples of degrees which are geared towards people who want careers as practitioners of public health in health departments, managed care and community-based organizations, hospitals and consulting firms, among others. Master of Public Health degrees broadly fall into two categories: those that put more emphasis on an understanding of epidemiology and statistics as the scientific basis of public health practice, and those that include a wider range of methodologies. A Master of Science of Public Health is similar to an MPH but is considered an academic degree (as opposed to a professional degree) and places more emphasis on scientific methods and research. The same distinction can be made between the DrPH and the DHSc: the DrPH is considered a professional degree and the DHSc an academic degree.
Academic degrees are more oriented towards those with interests in the scientific basis of public health and preventive medicine who wish to pursue careers in research, university teaching in graduate programs, policy analysis and development, and other high-level public health positions. Examples of academic degrees are the Master of Science, Doctor of Philosophy, Doctor of Science (ScD), and Doctor of Health Science (DHSc). The doctoral programs are distinct from the MPH and other professional programs by the addition of advanced coursework and the nature and scope of a dissertation research project.
Notable people
John Graunt (1620–1674) was a British citizen scientist who laid the foundations for epidemiology.
Edward Jenner (1749–1823) created the smallpox vaccine, the first vaccine in the world. He is often known as "the father of immunology".
Benjamin Waterhouse (1753–1846) introduced the smallpox vaccine in the United States.
Lemuel Shattuck (1793–1859) has been described as an "architect" and "prophet" of American public health.
John Snow (1813–1858) was 'the father of modern epidemiology'.
Sir Joseph William Bazalgette (1819–1891) created a sewer network for central London in response to the Great Stink of 1858. This proved instrumental in relieving the city from cholera epidemics.
Louis Pasteur (1822–1895) conducted research that laid the foundation for our understanding of the causes and preventions of diseases.
Robert Koch (1843–1910) used his discoveries to establish that germs "could cause a specific disease" and directly provided proof of the germ theory of disease, thereby creating the scientific basis of public health and saving millions of lives.
Charles V. Chapin (1856–1941) was a public health advocate and researcher credited with planting "the roots of quality in public health" in the United States.
Sara Josephine Baker (1873–1945) was an "instrumental force in child and maternal health".
Nora Wattie (1900–1994) led the development of public health services and sanitation, and education in improving women and child health in the poorest slums of Glasgow, for which she received the OBE.
Jonas Salk (1914–1995) developed one of the first polio vaccines and campaigned vigorously for mandatory vaccinations.
Ruth Huenemann (1910–2005) was a pioneer in the study of childhood obesity, studying the diet and exercise habits of Berkeley teenagers in the 1960s.
Edmond Fernandes (1990–) demonstrated a proof of concept for ending the burden of malnutrition in India and around the world.
Dilip Mahalanabis is credited with developing and deploying oral rehydration solution (ORS), saving thousands of lives during the Bangladesh Liberation War.
Country examples
Canada
In Canada, the Public Health Agency of Canada is the national agency responsible for public health, emergency preparedness and response, and infectious and chronic disease control and prevention.
Cuba
Since the 1959 Cuban Revolution, the Cuban government has devoted extensive resources to the improvement of health conditions for its entire population via universal access to health care. Infant mortality has plummeted. Under its policy of medical internationalism, the Cuban government has sent doctors, as a form of aid and export, to countries in need in Latin America, especially Venezuela, as well as to countries in Oceania and Africa.
Colombia and Bolivia
Public health was important elsewhere in Latin America in consolidating state power and integrating marginalized populations into the nation-state. In Colombia, public health was a means for creating and implementing ideas of citizenship. In Bolivia, a similar push came after their 1952 revolution.
Ghana
Though curable and preventable, malaria remains a major public health issue and is the third leading cause of death in Ghana. In the absence of a vaccine, mosquito control, or access to anti-malaria medication, public health methods become the main strategy for reducing the prevalence and severity of malaria. These methods include reducing breeding sites, screening doors and windows, insecticide sprays, prompt treatment following infection, and usage of insecticide-treated mosquito nets. Distribution and sale of insecticide-treated mosquito nets is a common, cost-effective anti-malaria public health intervention; however, barriers to use exist, including cost, household and family organization, access to resources, and social and behavioral determinants, which have been shown to affect not only malaria prevalence rates but also mosquito net use.
France
Mexico
United States
The United States lacks a coherent system for the governmental funding of public health, relying on a variety of agencies and programs at the federal, state and local levels.
Between 1960 and 2001, public health spending in the United States tended to grow, based on increasing expenditures by state and local government, which made up 80–90% of total public health spending. Spending in support of public health in the United States peaked in 2002 and declined in the following decade. State cuts to public health funding during the Great Recession of 2007–2008 were not restored in subsequent years.
In 2012, a panel of the U.S. Institute of Medicine warned that the United States spends disproportionately more on clinical care than it does on public health, neglecting "population-based activities that offer efficient and effective approaches to improving the nation's health." About 3% of government health spending was directed to public health and prevention. This situation has been described as an "uneven patchwork" and "chronic underfunding".
The COVID-19 pandemic has been seen as drawing attention to problems in the public health system in the United States and to a lack of understanding of public health and its important role as a common good.
| Biology and health sciences | Fields of medicine | null |
463835 | https://en.wikipedia.org/wiki/Life%20on%20Mars | Life on Mars | The possibility of life on Mars is a subject of interest in astrobiology due to the planet's proximity and similarities to Earth. To date, no conclusive evidence of past or present life has been found on Mars. Cumulative evidence suggests that during the ancient Noachian time period, the surface environment of Mars had liquid water and may have been habitable for microorganisms, but habitable conditions do not necessarily indicate life.
Scientific searches for evidence of life began in the 19th century and continue today via telescopic investigations and deployed probes, searching for water, chemical biosignatures in the soil and rocks at the planet's surface, and biomarker gases in the atmosphere.
Mars is of particular interest for the study of the origins of life because of its similarity to the early Earth. This is especially true since Mars has a cold climate and lacks plate tectonics or continental drift, so it has remained almost unchanged since the end of the Hesperian period. At least two-thirds of Mars' surface is more than 3.5 billion years old, and it could have been habitable 4.48 billion years ago, 500 million years before the earliest known Earth lifeforms; Mars may thus hold the best record of the prebiotic conditions leading to life, even if life does not or has never existed there.
Following the confirmation of the past existence of surface liquid water, the Curiosity, Perseverance and Opportunity rovers started searching for evidence of past life, including a past biosphere based on autotrophic, chemotrophic, or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, fossils, and organic compounds on Mars is now a primary objective for space agencies.
The discovery of organic compounds inside sedimentary rocks and of boron on Mars are of interest as they are precursors for prebiotic chemistry. Such findings, along with previous discoveries that liquid water was clearly present on ancient Mars, further supports the possible early habitability of Gale Crater on Mars. Currently, the surface of Mars is bathed with ionizing radiation, and Martian soil is rich in perchlorates toxic to microorganisms. Therefore, the consensus is that if life exists—or existed—on Mars, it could be found or is best preserved in the subsurface, away from present-day harsh surface processes.
In June 2018, NASA announced the detection of seasonal variation of methane levels on Mars. Methane could be produced by microorganisms or by geological means. The European ExoMars Trace Gas Orbiter started mapping the atmospheric methane in April 2018, and the 2022 ExoMars rover Rosalind Franklin was planned to drill and analyze subsurface samples before the programme's indefinite suspension, while the NASA Mars 2020 rover Perseverance, having landed successfully, will cache dozens of drill samples for their potential transport to Earth laboratories in the late 2020s or 2030s. As of February 8, 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. In October 2024, NASA announced that it may be possible for photosynthesis to occur within dusty water ice exposed in the mid-latitude regions of Mars.
Early speculation
Mars's polar ice caps were discovered in the mid-17th century. In the late 18th century, William Herschel proved they grow and shrink alternately, in the summer and winter of each hemisphere. By the mid-19th century, astronomers knew that Mars had certain other similarities to Earth, for example that the length of a day on Mars was almost the same as a day on Earth. They also knew that its axial tilt was similar to Earth's, which meant it experienced seasons just as Earth does—but of nearly double the length owing to its much longer year. These observations led to increasing speculation that the darker albedo features were water and the brighter ones were land, whence followed speculation on whether Mars may be inhabited by some form of life.
In 1854, William Whewell, a fellow of Trinity College, Cambridge, theorized that Mars had seas, land and possibly life forms. Speculation about life on Mars exploded in the late 19th century, following telescopic observation by some observers of apparent Martian canals—which were later found to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilization. This idea led British writer H. G. Wells to write The War of the Worlds in 1897, telling of an invasion by aliens from Mars who were fleeing the planet's desiccation.
The 1907 book Is Mars Habitable? by British naturalist Alfred Russel Wallace was a reply to, and refutation of, Lowell's Mars and Its Canals. Wallace's book concluded that Mars "is not only uninhabited by intelligent beings such as Mr. Lowell postulates, but is absolutely uninhabitable." Historian Charles H. Smith refers to Wallace's book as one of the first works in the field of astrobiology.
Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. The influential observer Eugène Antoniadi used the 83-cm (32.6-inch) aperture telescope at Meudon Observatory at the 1909 opposition of Mars and saw no canals. The outstanding photos of Mars taken at the new Baillaud dome at the Pic du Midi observatory also brought formal discredit to the Martian canals theory in 1909, and the notion of canals began to fall out of favor.
Habitability
Chemical, physical, geological, and geographic attributes shape the environments on Mars. Isolated measurements of these factors may be insufficient to deem an environment habitable, but the sum of measurements can help predict locations with greater or lesser habitability potential. The two current ecological approaches for predicting the potential habitability of the Martian surface use 19 or 20 environmental factors, with an emphasis on water availability, temperature, the presence of nutrients, an energy source, and protection from solar ultraviolet and galactic cosmic radiation.
Scientists do not know the minimum number of parameters needed to determine habitability potential, but they are certain it is greater than one or two factors. Similarly, for each group of parameters, the habitability threshold is still to be determined. Laboratory simulations show that whenever multiple lethal factors are combined, survival rates plummet quickly. No full-Mars simulations that include all of the biocidal factors combined have yet been published. Furthermore, the possibility that Martian life could have a far different biochemistry and different habitability requirements than the terrestrial biosphere is an open question. A common hypothesis is methanogenic Martian life; while such organisms also exist on Earth, they are comparatively rare and cannot survive in the majority of terrestrial environments, which contain oxygen.
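A minimal sketch of why combining stressors is so punishing, assuming purely for illustration that each stressor acts independently; the per-stressor survival fractions below are hypothetical placeholders, not measured values.

```python
# Minimal sketch, assuming independent stressors: if each factor acting
# alone leaves a given fraction of cells viable, the combined survival
# fraction is roughly the product of the individual fractions. The
# per-stressor values are hypothetical placeholders, not measured data.
survival_alone = {
    "uv_flux_shielded_by_dust": 0.5,
    "desiccation": 0.3,
    "perchlorate_toxicity": 0.2,
    "freeze_thaw_cycles": 0.4,
}

combined = 1.0
for stressor, fraction in survival_alone.items():
    combined *= fraction
    print(f"after {stressor:<26} combined survival ~ {combined:.3f}")

# Four individually survivable stressors already leave only ~1% of cells
# viable, which is why combined "full-Mars" simulations matter.
```

Under these assumptions, four factors that are each survivable on their own already reduce the viable fraction to roughly one percent, which is why single-factor experiments can overstate habitability.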
Past
Recent models have shown that, even with a dense CO2 atmosphere, early Mars was colder than Earth has ever been. Transiently warm conditions related to impacts or volcanism could have produced conditions favoring the formation of the late Noachian valley networks, even though the mid-late Noachian global conditions were probably icy. Local warming of the environment by volcanism and impacts would have been sporadic, but there should have been many events of water flowing at the surface of Mars. Both the mineralogical and the morphological evidence indicates a degradation of habitability from the mid Hesperian onward. The exact causes are not well understood but may be related to a combination of processes including loss of the early atmosphere, impact erosion, or both. Billions of years ago, before this degradation, the surface of Mars was apparently fairly habitable, with liquid water and clement weather, though it is unknown whether life existed on Mars.
The loss of the Martian magnetic field strongly affected surface environments through atmospheric loss and increased radiation; this change significantly degraded surface habitability. When there was a magnetic field, the atmosphere would have been protected from erosion by the solar wind, which would ensure the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of Mars. The loss of the atmosphere was accompanied by decreasing temperatures. Part of the liquid water inventory sublimed and was transported to the poles, while the rest became trapped in permafrost, a subsurface ice layer.
Observations on Earth and numerical modeling have shown that a crater-forming impact can result in the creation of a long-lasting hydrothermal system when ice is present in the crust. For example, a crater 130 km across could sustain an active hydrothermal system for up to 2 million years, that is, long enough for microscopic life to emerge, but unlikely to have progressed any further down the evolutionary path.
Soil and rock samples studied in 2013 by NASA's Curiosity rover's onboard instruments brought about additional information on several habitability factors. The rover team identified some of the key chemical ingredients for life in this soil, including sulfur, nitrogen, hydrogen, oxygen, phosphorus and possibly carbon, as well as clay minerals, suggesting a long-ago aqueous environment—perhaps a lake or an ancient streambed—that had neutral acidity and low salinity. On December 9, 2013, NASA reported that, based on evidence from Curiosity studying Aeolis Palus, Gale Crater contained an ancient freshwater lake which could have been a hospitable environment for microbial life. The confirmation that liquid water once flowed on Mars, the existence of nutrients, and the previous discovery of a past magnetic field that protected the planet from cosmic and solar radiation, together strongly suggest that Mars could have had the environmental factors to support life. The assessment of past habitability is not in itself evidence that Martian life has ever actually existed. If it did, it was probably microbial, existing communally in fluids or on sediments, either free-living or as biofilms, respectively. The exploration of terrestrial analogues provides clues as to how and where best to look for signs of life on Mars.
Impactite, shown to preserve signs of life on Earth, was discovered on Mars and could contain signs of ancient life, if life ever existed on the planet.
On June 7, 2018, NASA announced that the Curiosity rover had discovered organic molecules in sedimentary rocks dating to three billion years old. The detection of organic molecules in rocks indicate that some of the building blocks for life were present.
Research into how the conditions for habitability ended is ongoing. On October 7, 2024, NASA announced that the results of the previous three years of sampling onboard Curiosity suggested that, based on high carbon-13 and oxygen-18 levels in the regolith, the early Martian atmosphere was less likely than previously thought to be stable enough to support surface water hospitable to life, with rapid wetting-drying cycles and very high-salinity cryogenic brines providing potential explanations.
Present
Conceivably, if life exists (or existed) on Mars, evidence of life could be found, or is best preserved, in the subsurface, away from present-day harsh surface conditions. Present-day life on Mars, or its biosignatures, could occur kilometers below the surface, or in subsurface geothermal hot spots, or it could occur a few meters below the surface. The permafrost layer on Mars is only a couple of centimeters below the surface, and salty brines can be liquid a few centimeters below that but not far down. Water is close to its boiling point even at the deepest points in the Hellas basin, and so cannot remain liquid for long on the surface of Mars in its present state, except after a sudden release of underground water.
So far, NASA has pursued a "follow the water" strategy on Mars and has not searched for biosignatures for life there directly since the Viking missions. The consensus by astrobiologists is that it may be necessary to access the Martian subsurface to find currently habitable environments.
Cosmic radiation
In 1965, the Mariner 4 probe discovered that Mars had no global magnetic field that would protect the planet from potentially life-threatening cosmic radiation and solar radiation; observations made in the late 1990s by the Mars Global Surveyor confirmed this discovery. Scientists speculate that the lack of magnetic shielding helped the solar wind blow away much of Mars's atmosphere over the course of several billion years. As a result, the planet has been vulnerable to radiation from space for about 4 billion years.
Recent in-situ data from Curiosity rover indicates that ionizing radiation from galactic cosmic rays (GCR) and solar particle events (SPE) may not be a limiting factor in habitability assessments for present-day surface life on Mars. The level of 76 mGy per year measured by Curiosity is similar to levels inside the ISS.
Cumulative effects
The Curiosity rover measured ionizing radiation levels of 76 mGy per year. This level of ionizing radiation is sterilizing for dormant life on the surface of Mars. Mars's habitability varies considerably over time, depending on its orbital eccentricity and the tilt of its axis. If surface life was reanimated as recently as 450,000 years ago, then rovers on Mars could find dormant but still viable life at a depth of one meter below the surface, according to an estimate. Even the hardiest cells known could not possibly survive the cosmic radiation near the surface of Mars since Mars lost its protective magnetosphere and atmosphere. After mapping cosmic radiation levels at various depths on Mars, researchers have concluded that over time, any life within the first several meters of the planet's surface would be killed by lethal doses of cosmic radiation. The team calculated that the cumulative damage to DNA and RNA by cosmic radiation would limit retrieving viable dormant cells on Mars to depths greater than 7.5 meters below the planet's surface.
Even the most radiation-tolerant terrestrial bacteria would survive in dormant spore state only 18,000 years at the surface; at 2 meters—the greatest depth at which the ExoMars rover will be capable of reaching—survival time would be 90,000 to half a million years, depending on the type of rock.
Data collected by the Radiation assessment detector (RAD) instrument on board the Curiosity rover revealed that the absorbed dose measured is 76 mGy/year at the surface, and that "ionizing radiation strongly influences chemical compositions and structures, especially for water, salts, and redox-sensitive components such as organic molecules." Regardless of the source of Martian organic compounds (meteoric, geological, or biological), its carbon bonds are susceptible to breaking and reconfiguring with surrounding elements by ionizing charged particle radiation. These improved subsurface radiation estimates give insight into the potential for the preservation of possible organic biosignatures as a function of depth as well as survival times of possible microbial or bacterial life forms left dormant beneath the surface. The report concludes that the in situ "surface measurements—and subsurface estimates—constrain the preservation window for Martian organic matter following exhumation and exposure to ionizing radiation in the top few meters of the Martian surface."
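The depth and survival-time figures above follow from a simple dose-accumulation argument. The following is a rough, hedged sketch: only the ~76 mGy/yr surface rate comes from the text, while the e-folding attenuation depth and the "sterilizing" cumulative dose are illustrative assumptions rather than the values used in the published models.

```python
import math

# Back-of-the-envelope sketch, not the published model. Only the ~76 mGy/yr
# surface dose rate comes from the text; the e-folding attenuation depth and
# the "sterilizing" cumulative dose below are illustrative assumptions.
surface_dose_rate_gy_per_yr = 0.076   # 76 mGy/yr measured by Curiosity's RAD
attenuation_length_m = 1.0            # assumed e-folding depth in regolith
sterilizing_dose_gy = 1_500.0         # hypothetical dose a hardy dormant microbe tolerates

for depth_m in (0.0, 2.0, 5.0, 7.5):
    rate = surface_dose_rate_gy_per_yr * math.exp(-depth_m / attenuation_length_m)
    years = sterilizing_dose_gy / rate
    print(f"depth {depth_m:>4.1f} m: dose rate ~ {rate:.2e} Gy/yr, "
          f"sterilizing dose accumulated in ~ {years:.1e} yr")

# Dose rate falls off roughly exponentially with burial depth, so each extra
# meter of regolith multiplies the potential dormancy time -- the qualitative
# reasoning behind the preservation-depth estimates quoted above.
```

Because the dose rate decreases roughly exponentially with depth, the time needed to accumulate a lethal cumulative dose grows by a large factor for every additional meter of overburden, which is the qualitative basis for the survival-time and preservation-depth estimates quoted in this section.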
In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled and were associated with an aurora 25 times brighter than any observed earlier, due to a major, and unexpected, solar storm in the middle of the month.
UV radiation
On UV radiation, a 2014 report concludes that "[T]he Martian UV radiation environment is rapidly lethal to unshielded microbes but can be attenuated by global dust storms and shielded completely by < 1 mm of regolith or by other organisms." In addition, laboratory research published in July 2017 demonstrated that UV irradiated perchlorates cause a 10.8-fold increase in cell death when compared to cells exposed to UV radiation after 60 seconds of exposure. The penetration depth of UV radiation into soils is in the sub-millimeter to millimeter range and depends on the properties of the soil. A recent study found that photosynthesis could occur within dusty ice exposed in the Martian mid-latitudes because the overlying dusty ice blocks the harmful ultraviolet radiation at Mars’ surface.
Perchlorates
The Martian regolith is known to contain up to 0.5% (w/v) perchlorate (ClO4−), which is toxic to most living organisms. However, perchlorates drastically lower the freezing point of water, and a few extremophiles can use them as an energy source (see Perchlorates – Biology) and grow at concentrations of up to 30% (w/v) sodium perchlorate by physiologically adapting to increasing perchlorate concentrations. This has prompted speculation about their influence on habitability.
Research published in July 2017 shows that when irradiated with a simulated Martian UV flux, perchlorates become even more lethal to bacteria (bactericidal). Even dormant spores lost viability within minutes. In addition, two other compounds of the Martian surface, iron oxides and hydrogen peroxide, act in synergy with irradiated perchlorates to cause a 10.8-fold increase in cell death when compared to cells exposed to UV radiation after 60 seconds of exposure. It was also found that abraded silicates (quartz and basalt) lead to the formation of toxic reactive oxygen species. The researchers concluded that "the surface of Mars is lethal to vegetative cells and renders much of the surface and near-surface regions uninhabitable." This research demonstrates that the present-day surface is more uninhabitable than previously thought, and reinforces the need to probe at least a few meters into the ground, where radiation levels are comparatively low.
However, researcher Kennda Lynch discovered the first-known instance of a habitat containing perchlorates and perchlorates-reducing bacteria in an analog environment: a paleolake in Pilot Valley, Great Salt Lake Desert, Utah, United States. She has been studying the biosignatures of these microbes, and is hoping that the Mars Perseverance rover will find matching biosignatures at its Jezero Crater site.
Recurrent slope lineae
Recurrent slope lineae (RSL) features form on Sun-facing slopes at times of the year when the local temperatures reach above the melting point for ice. The streaks grow in spring, widen in late summer and then fade away in autumn. This is hard to model in any other way except as involving liquid water in some form, though the streaks themselves are thought to be a secondary effect and not a direct indication of the dampness of the regolith. Although these features are now confirmed to involve liquid water in some form, the water could be either too cold or too salty for life. At present they are treated as potentially habitable, as "Uncertain Regions, to be treated as Special Regions", and were initially suspected of involving flowing brines.
The thermodynamic availability of water (water activity) strictly limits microbial propagation on Earth, particularly in hypersaline environments, and there are indications that the brine ionic strength is a barrier to the habitability of Mars. Experiments show that high ionic strength, driven to extremes on Mars by the ubiquitous occurrence of divalent ions, "renders these environments uninhabitable despite the presence of biologically available water."
Nitrogen fixation
After carbon, nitrogen is arguably the most important element needed for life. Thus, measurements of nitrate over the range of 0.1% to 5% are required to address the question of its occurrence and distribution. There is nitrogen (as N2) in the atmosphere at low levels, but this is not adequate to support nitrogen fixation for biological incorporation. Nitrogen in the form of nitrate could be a resource for human exploration both as a nutrient for plant growth and for use in chemical processes. On Earth, nitrates correlate with perchlorates in desert environments, and this may also be true on Mars. Nitrate is expected to be stable on Mars and to have formed by thermal shock from impact or volcanic plume lightning on ancient Mars.
On March 24, 2015, NASA reported that the SAM instrument on the Curiosity rover detected nitrates by heating surface sediments. The nitrogen in nitrate is in a "fixed" state, meaning that it is in an oxidized form that can be used by living organisms. The discovery supports the notion that ancient Mars may have been hospitable for life. It is suspected that all nitrate on Mars is a relic, with no modern contribution. Nitrate abundance ranges from non-detection to 681 ± 304 mg/kg in the samples examined until late 2017. Modeling indicates that the transient condensed water films on the surface should be transported to lower depths (≈10 m) potentially transporting nitrates, where subsurface microorganisms could thrive.
In contrast, phosphate, one of the chemical nutrients thought to be essential for life, is readily available on Mars.
Low pressure
Further complicating estimates of the habitability of the Martian surface is the fact that very little is known about the growth of microorganisms at pressures close to those on the surface of Mars. Some teams determined that some bacteria may be capable of cellular replication down to 25 mbar, but that is still above the atmospheric pressures found on Mars (range 1–14 mbar). In another study, twenty-six strains of bacteria were chosen based on their recovery from spacecraft assembly facilities, and only Serratia liquefaciens strain ATCC 27592 exhibited growth at 7 mbar, 0 °C, and CO2-enriched anoxic atmospheres.
Liquid water
Liquid water is a necessary but not sufficient condition for life as humans know it, as habitability is a function of a multitude of environmental parameters. Liquid water cannot exist on the surface of Mars except at the lowest elevations for minutes or hours. Liquid water does not appear at the surface itself, but it could form in minuscule amounts around dust particles in snow heated by the Sun. Also, the ancient equatorial ice sheets beneath the ground may slowly sublimate or melt, accessible from the surface via caves.
Water on Mars exists almost exclusively as water ice, located in the Martian polar ice caps and under the shallow Martian surface even at more temperate latitudes. A small amount of water vapor is present in the atmosphere. There are no bodies of liquid water on the Martian surface because the water vapor pressure is less than 1 Pa, the atmospheric pressure at the surface averages only about 0.6% of Earth's mean sea level pressure, and the temperature is far too low, leading to immediate freezing. Despite this, about 3.8 billion years ago, there was a denser atmosphere, higher temperature, and vast amounts of liquid water flowed on the surface, including large oceans.
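A quick numerical check of the pressure argument above, as a sketch using rounded textbook constants: the 0.6% figure is taken from this paragraph, and the actual Martian surface pressure varies with season and elevation.

```python
# Quick check of the pressure argument above, using rounded textbook values.
# The 0.6% figure comes from the text; actual Martian surface pressure varies
# with season and elevation.
earth_sea_level_pa = 101_325
mars_mean_surface_pa = 0.006 * earth_sea_level_pa   # ~0.6% of Earth's
water_triple_point_pa = 611.657                      # pressure at water's triple point

print(f"Mars mean surface pressure ~ {mars_mean_surface_pa:.0f} Pa")
print(f"Water triple-point pressure = {water_triple_point_pa:.0f} Pa")

# Because the mean surface pressure sits near (and over much of the planet
# below) the triple-point pressure, exposed water ice tends to sublimate
# rather than melt, except briefly at the lowest, highest-pressure elevations.
```

With the mean surface pressure so close to the triple-point pressure of water, exposed ice generally passes directly from solid to vapor, which is why stable liquid water is confined, at best, to the lowest elevations and the briefest intervals.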
It has been estimated that the primordial oceans on Mars would have covered between 36% and 75% of the planet. On November 22, 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region of Mars. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior.
Analysis of Martian sandstones, using data obtained from orbital spectrometry, suggests that the waters that previously existed on the surface of Mars would have had too high a salinity to support most Earth-like life. Tosca et al. found that the Martian water in the locations they studied all had a water activity of aw ≤ 0.78 to 0.86—a level fatal to most terrestrial life. Haloarchaea, however, are able to live in hypersaline solutions, up to the saturation point.
In June 2000, possible evidence for current liquid water flowing at the surface of Mars was discovered in the form of flood-like gullies. Additional similar images were published in 2006, taken by the Mars Global Surveyor, that suggested that water occasionally flows on the surface of Mars. The images showed changes in steep crater walls and sediment deposits, providing the strongest evidence yet that water coursed through them as recently as several years ago.
There is disagreement in the scientific community as to whether or not the recent gully streaks were formed by liquid water. Some suggest the flows were merely dry sand flows. Others suggest it may be liquid brine near the surface, but the exact source of the water and the mechanism behind its motion are not understood.
In July 2018, scientists reported the discovery of a subglacial lake on Mars, below the southern polar ice cap, the first known stable body of water on the planet. The lake was discovered using the MARSIS radar on board the Mars Express orbiter, and the profiles were collected between May 2012 and December 2015. The lake is centered at 193°E, 81°S, a flat area that does not exhibit any peculiar topographic characteristics but is surrounded by higher ground, except on its eastern side, where there is a depression. However, subsequent studies disagree on whether any liquid can be present at this depth without anomalous heating from the interior of the planet. Instead, some studies propose that other factors may have led to radar signals resembling those containing liquid water, such as clays, or interference between layers of ice and dust.
Silica
In May 2007, the Spirit rover disturbed a patch of ground with its inoperative wheel, uncovering an area rich in silica (90%). The feature is reminiscent of the effect of hot spring water or steam coming into contact with volcanic rocks. Scientists consider this as evidence of a past environment that may have been favorable for microbial life, and theorize that one possible origin for the silica may have been the interaction of soil with acid vapors produced by volcanic activity in the presence of water.
Based on Earth analogs, hydrothermal systems on Mars would be highly attractive for their potential for preserving organic and inorganic biosignatures. For this reason, hydrothermal deposits are regarded as important targets in the exploration for fossil evidence of ancient Martian life.
Possible biosignatures
In May 2017, evidence of the earliest known life on land on Earth may have been found in 3.48-billion-year-old geyserite and other related mineral deposits (often found around hot springs and geysers) uncovered in the Pilbara Craton of Western Australia. These findings may be helpful in deciding where best to search for early signs of life on the planet Mars.
Methane
Methane (CH4) is chemically unstable in the current oxidizing atmosphere of Mars. It would quickly break down due to ultraviolet radiation from the Sun and chemical reactions with other gases. Therefore, a persistent presence of methane in the atmosphere may imply the existence of a source to continually replenish the gas.
Trace amounts of methane, at the level of several parts per billion (ppb), were first reported in Mars's atmosphere by a team at the NASA Goddard Space Flight Center in 2003. Large differences in the abundances were measured between observations taken in 2003 and 2006, which suggested that the methane was locally concentrated and probably seasonal. On June 7, 2018, NASA announced it has detected a seasonal variation of methane levels on Mars.
The ExoMars Trace Gas Orbiter (TGO), launched in March 2016, began on April 21, 2018, to map the concentration and sources of methane in the atmosphere, as well as its decomposition products such as formaldehyde and methanol. As of May 2019, the Trace Gas Orbiter showed that the concentration of methane is under detectable level (< 0.05 ppbv).
The principal candidates for the origin of Mars's methane include non-biological processes such as water-rock reactions, radiolysis of water, and pyrite formation, all of which produce H2 that could then generate methane and other hydrocarbons via Fischer–Tropsch synthesis with CO and CO2. It has also been shown that methane could be produced by a process involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars. Although geologic sources of methane such as serpentinization are possible, the lack of current volcanism, hydrothermal activity or hotspots is not favorable for geologic methane.
Living microorganisms, such as methanogens, are another possible source, but no evidence for the presence of such organisms has been found on Mars; in June 2019, however, methane was again detected by the Curiosity rover. Methanogens do not require oxygen or organic nutrients, are non-photosynthetic, use hydrogen as their energy source and carbon dioxide (CO2) as their carbon source, so they could exist in subsurface environments on Mars. If microscopic Martian life is producing the methane, it probably resides far below the surface, where it is still warm enough for liquid water to exist.
Since the 2003 discovery of methane in the atmosphere, some scientists have been designing models and in vitro experiments testing the growth of methanogenic bacteria on simulated Martian soil, where all four methanogen strains tested produced substantial levels of methane, even in the presence of 1.0wt% perchlorate salt.
A team led by Levin suggested that both phenomena—methane production and degradation—could be accounted for by an ecology of methane-producing and methane-consuming microorganisms.
Research at the University of Arkansas presented in June 2015 suggested that some methanogens could survive in Mars's low pressure. Rebecca Mickol found that in her laboratory, four species of methanogens survived low-pressure conditions that were similar to a subsurface liquid aquifer on Mars. The four species that she tested were Methanothermobacter wolfeii, Methanosarcina barkeri, Methanobacterium formicicum, and Methanococcus maripaludis. In June 2012, scientists reported that measuring the ratio of hydrogen and methane levels on Mars may help determine the likelihood of life on Mars. According to the scientists, "low H2/CH4 ratios (less than approximately 40)" would "indicate that life is likely present and active". The observed ratios in the lower Martian atmosphere were "approximately 10 times" higher "suggesting that biological processes may not be responsible for the observed CH4". The scientists suggested measuring the H2 and CH4 flux at the Martian surface for a more accurate assessment. Other scientists have recently reported methods of detecting hydrogen and methane in extraterrestrial atmospheres.
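The quoted ratio argument can be restated as a one-line check. The sketch below simply reuses the rough figures given in this paragraph; it introduces no new measurements.

```python
# One-line restatement of the ratio argument in this paragraph; the numbers
# are the rough figures quoted in the text, not new measurements.
biotic_threshold = 40                     # H2/CH4 below ~40 would suggest active life
observed_ratio = 10 * biotic_threshold    # observed ratios were "approximately 10 times" higher

if observed_ratio < biotic_threshold:
    print("H2/CH4 ratio consistent with active methanogenic life")
else:
    print(f"Observed H2/CH4 ~ {observed_ratio}: biological processes "
          "may not be responsible for the observed CH4")
```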
Even if rover missions determine that microscopic Martian life is the seasonal source of the methane, the life forms probably reside far below the surface, outside of the rover's reach.
Formaldehyde
In February 2005, it was announced that the Planetary Fourier Spectrometer (PFS) on the European Space Agency's Mars Express Orbiter had detected traces of formaldehyde in the atmosphere of Mars. Vittorio Formisano, the director of the PFS, has speculated that the formaldehyde could be the byproduct of the oxidation of methane and, according to him, would provide evidence that Mars is either extremely geologically active or harboring colonies of microbial life. NASA scientists consider the preliminary findings well worth a follow-up but have also rejected the claims of life.
Viking lander biological experiments
The 1970s Viking program placed two identical landers on the surface of Mars tasked to look for biosignatures of microbial life on the surface. The 'Labeled Release' (LR) experiment gave a positive result for metabolism, while the gas chromatograph–mass spectrometer did not detect organic compounds. The LR was a specific experiment designed to test only a narrowly defined critical aspect of the theory concerning the possibility of life on Mars; therefore, the overall results were declared inconclusive. No Mars lander mission has found meaningful traces of biomolecules or biosignatures. The claim of extant microbial life on Mars is based on old data collected by the Viking landers, currently reinterpreted as sufficient evidence of life, mainly by Gilbert Levin, Joseph D. Miller, Navarro, Giorgio Bianciardi and Patricia Ann Straat.
Assessments published in December 2010 by Rafael Navarro-Gonzáles indicate that organic compounds "could have been present" in the soil analyzed by both Viking 1 and 2. The study determined that perchlorate—discovered in 2008 by Phoenix lander—can destroy organic compounds when heated, and produce chloromethane and dichloromethane as a byproduct, the identical chlorine compounds discovered by both Viking landers when they performed the same tests on Mars. Because perchlorate would have broken down any Martian organics, the question of whether or not Viking found organic compounds is still wide open.
The Labeled Release evidence was not generally accepted initially, and, to this day lacks the consensus of the scientific community.
Meteorites
As of 2018, there are 224 known Martian meteorites (some of which were found in several fragments). These are valuable because they are the only physical samples of Mars available to Earth-bound laboratories. Some researchers have argued that microscopic morphological features found in ALH84001 are biomorphs, however this interpretation has been highly controversial and is not supported by the majority of researchers in the field.
Seven criteria have been established for the recognition of past life within terrestrial geologic samples. Those criteria are:
Is the geologic context of the sample compatible with past life?
Is the age of the sample and its stratigraphic location compatible with possible life?
Does the sample contain evidence of cellular morphology and colonies?
Is there any evidence of biominerals showing chemical or mineral disequilibria?
Is there any evidence of stable isotope patterns unique to biology?
Are there any organic biomarkers present?
Are the features indigenous to the sample?
For general acceptance of past life in a geologic sample, essentially most or all of these criteria must be met. No Martian sample has yet met all seven criteria.
ALH84001
In 1996, the Martian meteorite ALH84001, a specimen that is much older than the majority of Martian meteorites that have been recovered so far, received considerable attention when a group of NASA scientists led by David S. McKay reported microscopic features and geochemical anomalies that they considered to be best explained by the rock having hosted Martian bacteria in the distant past. Some of these features resembled terrestrial bacteria, aside from their being much smaller than any known form of life. Much controversy arose over this claim, and ultimately all of the evidence McKay's team cited as evidence of life was found to be explainable by non-biological processes. Although the scientific community has largely rejected the claim ALH 84001 contains evidence of ancient Martian life, the controversy associated with it is now seen as a historically significant moment in the development of exobiology.
Nakhla
The Nakhla meteorite fell to Earth on June 28, 1911, in the locality of Nakhla, Alexandria, Egypt.
In 1998, a team from NASA's Johnson Space Center obtained a small sample for analysis. Researchers found preterrestrial aqueous alteration phases and objects of the size and shape consistent with Earthly fossilized nanobacteria.
Analysis with gas chromatography and mass spectrometry (GC-MS) studied its high molecular weight polycyclic aromatic hydrocarbons in 2000, and NASA scientists concluded that as much as 75% of the organic compounds in Nakhla "may not be recent terrestrial contamination".
This caused additional interest in this meteorite, so in 2006, NASA managed to obtain an additional and larger sample from the London Natural History Museum. On this second sample, a large dendritic carbon content was observed. When the results and evidence were published in 2006, some independent researchers claimed that the carbon deposits are of biologic origin. It was remarked that since carbon is the fourth most abundant element in the Universe, finding it in curious patterns is not indicative or suggestive of biological origin.
Shergotty
The Shergotty meteorite, a Martian meteorite, fell to Earth at Shergotty, India, on August 25, 1865, and was retrieved by witnesses almost immediately. It is composed mostly of pyroxene and is thought to have undergone preterrestrial aqueous alteration for several centuries. Certain features in its interior suggest remnants of a biofilm and its associated microbial communities.
Yamato 000593
Yamato 000593 is the second largest meteorite from Mars found on Earth. Studies suggest the Martian meteorite was formed about 1.3 billion years ago from a lava flow on Mars. An impact occurred on Mars about 12 million years ago and ejected the meteorite from the Martian surface into space. The meteorite landed on Earth in Antarctica about 50,000 years ago. It has been found to contain evidence of past water movement. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity, according to NASA scientists.
Ichnofossil-like structures
Organism–substrate interactions and their products are important biosignatures on Earth as they represent direct evidence of biological behaviour. It was the recovery of fossilized products of life-substrate interactions (ichnofossils) that has revealed biological activities in the early history of life on the Earth, e.g., Proterozoic burrows, Archean microborings and stromatolites. Two major ichnofossil-like structures have been reported from Mars, i.e. the stick-like structures from Vera Rubin Ridge and the microtunnels from Martian Meteorites.
Observations at Vera Rubin Ridge by the Mars Space Laboratory rover Curiosity show millimetric, elongate structures preserved in sedimentary rocks deposited in fluvio-lacustrine environments within Gale Crater. Morphometric and topologic data are unique to the stick-like structures among Martian geological features and show that ichnofossils are among the closest morphological analogues of these unique features. Nevertheless, available data cannot fully disprove two major abiotic hypotheses, that are sedimentary cracking and evaporitic crystal growth as genetic processes for the structures.
Microtunnels have been described from Martian meteorites. They consist of straight to curved microtunnels that may contain areas of enhanced carbon abundance. The morphology of the curved microtunnels is consistent with biogenic traces on Earth, including microbioerosion traces observed in basaltic glasses. Further studies are needed to confirm biogenicity.
Geysers
The seasonal frosting and defrosting of the southern ice cap results in the formation of spider-like radial channels carved by sunlight into ice about 1 meter thick. Sublimed CO2 – and probably water – then increase the pressure in the channels' interior, producing geyser-like eruptions of cold fluids often mixed with dark basaltic sand or mud. This process is rapid, observed happening over a few days, weeks or months, a growth rate rather unusual in geology – especially for Mars.
A team of Hungarian scientists proposes that the geysers' most visible features, dark dune spots and spider channels, may be colonies of photosynthetic Martian microorganisms which over-winter beneath the ice cap. As sunlight returns to the pole in early spring, light penetrates the ice, the microorganisms photosynthesize, and they heat their immediate surroundings. A pocket of liquid water, which would normally evaporate instantly in the thin Martian atmosphere, is trapped around them by the overlying ice. As this ice layer thins, the microorganisms show through as grey; once the layer has completely melted, they rapidly desiccate and turn black, surrounded by a grey aureole. The Hungarian scientists believe that even a complex sublimation process is insufficient to explain the formation and evolution of the dark dune spots in space and time. Since their discovery, science fiction writer Arthur C. Clarke promoted these formations as deserving of study from an astrobiological perspective.
A multinational European team suggests that if liquid water is present in the spiders' channels during their annual defrost cycle, they might provide a niche where certain microscopic life forms could have retreated and adapted while sheltered from solar radiation. A British team also considers the possibility that organic matter, microbes, or even simple plants might co-exist with these inorganic formations, especially if the mechanism includes liquid water and a geothermal energy source. They also remark that the majority of geological structures may be accounted for without invoking any organic "life on Mars" hypothesis. It has been proposed to develop the Mars Geyser Hopper lander to study the geysers up close.
Forward contamination
Planetary protection of Mars aims to prevent biological contamination of the planet. A major goal is to preserve the planetary record of natural processes by preventing human-caused microbial introductions, also called forward contamination. There is abundant evidence as to what can happen when organisms from regions on Earth that have been isolated from one another for significant periods of time are introduced into each other's environment. Species that are constrained in one environment can thrive – often out of control – in another environment much to the detriment of the original species that were present. In some ways, this problem could be compounded if life forms from one planet were introduced into the totally alien ecology of another world.
The prime concern regarding hardware contaminating Mars derives from incomplete spacecraft sterilization: some hardy terrestrial bacteria (extremophiles) survive despite best efforts. Hardware includes landers, crashed probes, end-of-mission disposal of hardware, and the hard landing of entry, descent, and landing systems. This has prompted research on the survival rates of radiation-resistant microorganisms, including the species Deinococcus radiodurans and the genera Brevundimonas, Rhodococcus, and Pseudomonas, under simulated Martian conditions. Results from one of these irradiation experiments, combined with previous radiation modeling, indicate that Brevundimonas sp. MV.7 emplaced only 30 cm deep in Martian dust could survive the cosmic radiation for up to 100,000 years before suffering a millionfold (10⁶) population reduction. The diurnal Mars-like cycles in temperature and relative humidity affected the viability of Deinococcus radiodurans cells quite severely. In other simulations, Deinococcus radiodurans also failed to grow under low atmospheric pressure, below 0 °C, or in the absence of oxygen.
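As an illustrative consistency check (a back-of-the-envelope calculation using only the figures quoted above, not the cited study's methodology): assuming first-order (exponential) inactivation, a millionfold reduction over 100,000 years corresponds to a decimal reduction time of

\[ D = \frac{t}{\log_{10}(N_0/N)} = \frac{100{,}000\ \text{yr}}{6} \approx 16{,}700\ \text{yr}, \]

that is, under these assumptions the shielded population would fall by a factor of ten roughly every 17,000 years.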
Survival under simulated Martian conditions
Since the 1950s, researchers have used containers that simulate environmental conditions on Mars to determine the viability of a variety of lifeforms on Mars. Such devices, called "Mars jars" or "Mars simulation chambers", were first described and used in U.S. Air Force research in the 1950s by Hubertus Strughold, and popularized in civilian research by Joshua Lederberg and Carl Sagan.
On April 26, 2012, scientists reported that an extremophile lichen survived a 34-day simulation under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR), showing remarkable adaptation of its photosynthetic activity. The ability to survive in an environment is not the same as the ability to thrive, reproduce, and evolve in that same environment, necessitating further study.
Although numerous studies point to resistance to some Martian conditions, they do so separately, and none has considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith (including perchlorates), and others, all at the same time and in combination. Laboratory simulations show that whenever multiple lethal factors are combined, survival rates plummet quickly.
Water salinity and temperature
Astrobiologists funded by NASA are researching the limits of microbial life in solutions with high salt concentrations at low temperature. Any body of liquid water under the polar ice caps or underground is likely to exist under high hydrostatic pressure and have a significant salt concentration. The landing site of the Phoenix lander was found to be regolith cemented with water ice and salts, and its soil samples likely contained magnesium sulfate, magnesium perchlorate, sodium perchlorate, potassium perchlorate, sodium chloride and calcium carbonate. Earth bacteria capable of growth and reproduction in the presence of highly salted solutions, called halophiles or "salt-lovers", were tested for survival using salts commonly found on Mars and at decreasing temperatures; the species tested include Halomonas, Marinococcus, Nesterenkonia, and Virgibacillus. Although survival rates plummet when multiple Martian environmental factors are combined, halophilic bacteria have been grown in the laboratory in water solutions containing more than 25% of salts common on Mars, and experiments beginning in 2019 incorporated exposure to low temperature, salts, and high pressure.
Mars-like regions on Earth
On 21 February 2023, scientists reported the findings of a "dark microbiome" of unfamiliar microorganisms in the Atacama Desert in Chile, a Mars-like region of Earth.
Missions
Mars-2
Mars-1, launched in 1962, was the first spacecraft sent toward Mars, but communication was lost en route. With Mars-2 and Mars-3 in 1971–1972, information was obtained on the nature of the surface rocks and on altitude profiles of the surface density of the soil, its thermal conductivity, and thermal anomalies detected on the surface of Mars. The program found that its northern polar cap has a temperature below and that the water vapor content in the atmosphere of Mars is five thousand times less than on Earth. No signs of life were found.
From orbit, the program's automatic interplanetary stations (AMS) likewise detected no signs of life. The Mars-2 descent vehicle crashed on landing; the Mars-3 descent vehicle began operating about 1.5 minutes after landing in the Ptolemaeus crater, but worked for only 14.5 seconds.
Mariner 4
The Mariner 4 probe performed the first successful flyby of Mars, returning the first pictures of the Martian surface in 1965. The photographs showed an arid Mars without rivers, oceans, or any signs of life. They further revealed that the surface (at least the parts photographed) was covered in craters, indicating an absence of plate tectonics and of weathering of any kind for the last 4 billion years. The probe also found that Mars has no global magnetic field that would protect the planet from potentially life-threatening cosmic rays. It measured the atmospheric pressure on the planet at about 0.6 kPa (compared to Earth's 101.3 kPa), meaning that liquid water could not exist on the planet's surface. After Mariner 4, the search for life on Mars shifted to a search for bacteria-like living organisms rather than multicellular organisms, as the environment was clearly too harsh for the latter.
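To put the pressure measurement in context (an illustrative comparison using the triple point of water, a standard physical constant not quoted above): the Martian surface pressure is only

\[ \frac{0.6\ \text{kPa}}{101.3\ \text{kPa}} \approx 0.6\% \]

of Earth's sea-level pressure, and it lies at or below the triple-point pressure of water (about 0.61 kPa), beneath which water cannot persist as a liquid at any temperature but instead passes directly between ice and vapor.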
Viking orbiters
Liquid water is necessary for known life and metabolism, so whether water was ever present on Mars is a determining factor in its chances of having supported life. The Viking orbiters found evidence of possible river valleys in many areas, of erosion and, in the southern hemisphere, of branched streams.
Viking biological experiments
The primary mission of the Viking probes of the mid-1970s was to carry out experiments designed to detect microorganisms in Martian soil, because the favorable conditions for the evolution of multicellular organisms had ceased some four billion years earlier on Mars. The tests were formulated to look for microbial life similar to that found on Earth. Of the four experiments, only the Labeled Release (LR) experiment returned a positive result, showing increased 14CO2 production on first exposure of soil to water and nutrients. All scientists agree on two points from the Viking missions: that radiolabeled 14CO2 was evolved in the Labeled Release experiment, and that the GCMS detected no organic molecules. There are vastly different interpretations of what those results imply. A 2011 astrobiology textbook notes that the GCMS result was decisive: "For most of the Viking scientists, the final conclusion was that the Viking missions failed to detect life in the Martian soil."
Norman Horowitz was the head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered the conditions found on Mars to be incompatible with carbon-based life.
One of the designers of the Labeled Release experiment, Gilbert Levin, believes his results are a definitive diagnostic for life on Mars, but his interpretation is disputed by many scientists. A 2006 astrobiology textbook noted that "With unsterilized Terrestrial samples, though, the addition of more nutrients after the initial incubation would then produce still more radioactive gas as the dormant bacteria sprang into action to consume the new dose of food. This was not true of the Martian soil; on Mars, the second and third nutrient injections did not produce any further release of labeled gas." Other scientists argue that superoxides in the soil could have produced this effect without life being present. A near-general consensus discarded the Labeled Release data as evidence of life, because the gas chromatograph and mass spectrometer, designed to identify natural organic matter, did not detect organic molecules. More recently, high levels of organic chemicals, particularly chlorobenzene, were detected in powder drilled from one of the rocks, named "Cumberland", analyzed by the Curiosity rover. The results of the Viking mission concerning life are considered by the general expert community to be inconclusive.
In 2007, during a seminar of the Geophysical Laboratory of the Carnegie Institution (Washington, D.C., US), Gilbert Levin's investigation was assessed once more. Levin still maintains that his original data were correct, as the positive and negative control experiments were in order. Moreover, on April 12, 2012, Levin's team reported a statistical speculation based on the old Labeled Release data, reinterpreted mathematically through cluster analysis, that may suggest evidence of "extant microbial life on Mars". Critics counter that the method has not yet been proven effective for differentiating between biological and non-biological processes on Earth, so it is premature to draw any conclusions.
A research team from the National Autonomous University of Mexico, headed by Rafael Navarro-González, concluded that the GCMS equipment (TV-GC-MS) used by the Viking program to search for organic molecules may not be sensitive enough to detect low levels of organics. Klaus Biemann, the principal investigator of the GCMS experiment on Viking, wrote a rebuttal. Because of the simplicity of sample handling, TV–GC–MS is still considered the standard method for organic detection on future Mars missions, so Navarro-González suggests that the design of future organic instruments for Mars should include other methods of detection.
After the discovery of perchlorates on Mars by the Phoenix lander, practically the same team of Navarro-González published a paper arguing that the Viking GCMS results were compromised by the presence of perchlorates. A 2011 astrobiology textbook notes that "while perchlorate is too poor an oxidizer to reproduce the LR results (under the conditions of that experiment perchlorate does not oxidize organics), it does oxidize, and thus destroy, organics at the higher temperatures used in the Viking GCMS experiment." Biemann wrote a commentary critical of this Navarro-González paper as well, to which Navarro-González's team replied; the exchange was published in December 2011.
Phoenix lander, 2008
The Phoenix mission landed a robotic spacecraft in the polar region of Mars on May 25, 2008, and it operated until November 10, 2008. One of the mission's two primary objectives was to search for a "habitable zone" in the Martian regolith where microbial life could exist; the other was to study the geological history of water on Mars. The lander had a 2.5-meter robotic arm capable of digging shallow trenches in the regolith, and carried an electrochemistry experiment which analysed the ions in the regolith and the amount and type of antioxidants on Mars. The Viking program data indicate that oxidants on Mars may vary with latitude, noting that Viking 2 saw fewer oxidants than Viking 1 at its more northerly position. Phoenix landed further north still.
Phoenix's preliminary data revealed that Mars soil contains perchlorate, and thus may not be as life-friendly as thought earlier. The pH and salinity level were viewed as benign from the standpoint of biology. The analysers also indicated the presence of bound water and CO2. A recent analysis of Martian meteorite EETA79001 found 0.6 ppm ClO4−, 1.4 ppm ClO3−, and 16 ppm NO3−, most likely of Martian origin. The ClO3− suggests the presence of other highly oxidizing oxychlorines, such as ClO2− or ClO, produced both by UV oxidation of Cl and by X-ray radiolysis of ClO4−. Thus only highly refractory and/or well-protected (sub-surface) organics are likely to survive. In addition, a recent analysis of the Phoenix WCL showed that the Ca(ClO4)2 in the Phoenix soil has not interacted with liquid water of any form, perhaps for as long as 600 Myr. If it had, the highly soluble Ca(ClO4)2 in contact with liquid water would have formed only CaSO4. This suggests a severely arid environment, with minimal or no liquid water interaction.
Mars Science Laboratory (Curiosity rover)
The Mars Science Laboratory mission is a NASA project that launched the Curiosity rover, a nuclear-powered robotic vehicle bearing instruments designed to assess past and present habitability conditions on Mars, on November 26, 2011. Curiosity landed on Aeolis Palus in Gale Crater, near Aeolis Mons (a.k.a. Mount Sharp), on August 6, 2012.
On December 16, 2014, NASA reported that the Curiosity rover had detected a "tenfold spike", likely localized, in the amount of methane in the Martian atmosphere. Sample measurements taken "a dozen times over 20 months" showed increases in late 2013 and early 2014, averaging "7 parts of methane per billion in the atmosphere". Before and after that, readings averaged around one-tenth that level. In addition, low levels of chlorobenzene were detected in powder drilled from one of the rocks, named "Cumberland", analyzed by the Curiosity rover.
Mars 2020 (Perseverance rover)
The NASA Mars 2020 mission includes the Perseverance rover. Launched on July 30, 2020, it is intended to investigate an astrobiologically relevant ancient environment on Mars, including its surface geological processes and history, and to assess its past habitability and the potential for preservation of biosignatures within accessible geological materials.
The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, no definitive final determination on a biological or abiotic origin of this rock can be made with the data currently available.
Future astrobiology missions
ExoMars is a European-led multi-spacecraft programme developed by the European Space Agency (ESA) and Roscosmos, with launches planned for 2016 and 2020. Its primary scientific mission is to search for possible biosignatures on Mars, past or present. A rover with a core drill is designed to sample various depths beneath the surface, where liquid water may be found and where microorganisms or organic biosignatures might survive cosmic radiation. The rover mission was suspended in 2022 and is unlikely to launch before 2028.
Mars sample-return mission – The best life-detection experiment proposed is the examination, on Earth, of a soil sample from Mars. However, the difficulty of providing and maintaining life support over the months of transit from Mars to Earth remains to be solved. Providing for still-unknown environmental and nutritional requirements is daunting, so it was concluded that "investigating carbon-based organic compounds would be one of the more fruitful approaches for seeking potential signs of life in returned samples as opposed to culture-based approaches."
Human colonization of Mars
Some of the main reasons for colonizing Mars include economic interests, long-term scientific research best carried out by humans as opposed to robotic probes, and sheer curiosity. Surface conditions and the presence of water on Mars make it arguably the most hospitable planet in the Solar System other than Earth. Human colonization of Mars would require in situ resource utilization (ISRU). A NASA report states that "applicable frontier technologies include robotics, machine intelligence, nanotechnology, synthetic biology, 3-D printing/additive manufacturing, and autonomy. These technologies combined with the vast natural resources should enable, pre- and post-human arrival ISRU to greatly increase reliability and safety and reduce cost for human colonization of Mars."
| Physical sciences | Solar System | Astronomy |
464073 | https://en.wikipedia.org/wiki/Hair%20follicle | Hair follicle | The hair follicle is an organ found in mammalian skin. It resides in the dermal layer of the skin and is made up of 20 different cell types, each with distinct functions. The hair follicle regulates hair growth via a complex interaction between hormones, neuropeptides, and immune cells. This complex interaction induces the hair follicle to produce different types of hair as seen on different parts of the body. For example, terminal hairs grow on the scalp and lanugo hairs are seen covering the bodies of fetuses in the uterus and in some newborn babies. The process of hair growth occurs in distinct sequential stages: anagen is the active growth phase, catagen is the regression phase of the hair follicle, telogen is the resting stage, exogen is the active hair-shedding phase and kenogen is the phase between the empty hair follicle and the growth of new hair.
The function of hair in humans has long been a subject of interest and continues to be an important topic in society, developmental biology and medicine. Of all mammals, humans have the longest growth phase of scalp hair, far longer than that of hair on other parts of the body. For centuries, humans have ascribed esthetic value to the styling and dressing of scalp hair, which is often used to communicate social or cultural norms in societies. In addition to its role in defining human appearance, scalp hair also provides protection from UV sun rays and is an insulator against extremes of hot and cold temperatures. Differences in the shape of the scalp hair follicle determine the observed ethnic differences in scalp hair appearance, length and texture.
There are many human diseases in which abnormalities in hair appearance, texture or growth are early signs of local disease of the hair follicle or systemic illness. Well known diseases of the hair follicle include alopecia or hair loss, hirsutism or excess hair growth and lupus erythematosus.
Structure
The position and distribution of hair follicles varies over the body. For example, the skin of the palms and soles does not have hair follicles whereas skin of the scalp, forearms, legs and genitalia has abundant hair follicles. There are many structures that make up the hair follicle. Anatomically, the triad of hair follicle, sebaceous gland and arrector pili muscle make up the pilosebaceous unit.
A hair follicle consists of:
The papilla is a large structure at the base of the hair follicle. The papilla is made up mainly of connective tissue and a capillary loop. Cell division in the papilla is either rare or non-existent.
Around the papilla is the hair matrix.
A root sheath composed of an external and internal root sheath. The external root sheath appears empty with cuboid cells when stained with H&E stain. The internal root sheath is composed of three layers, Henle's layer, Huxley's layer, and an internal cuticle that is continuous with the outermost layer of the hair fiber.
The bulge is located in the outer root sheath at the insertion point of the arrector pili muscle. It houses several types of stem cells, which supply the entire hair follicle with new cells, and take part in healing the epidermis after a wound. Stem cells express the marker LGR5+ in vivo.
Other structures associated with the hair follicle include the cup in which the follicle grows known as the infundibulum, the arrector pili muscles, the sebaceous glands, and the apocrine sweat glands. Hair follicle receptors sense the position of the hair.
Attached to the follicle is a tiny bundle of muscle fiber called the arrector pili. This muscle is responsible for causing the follicle to become more perpendicular to the surface of the skin, causing the hair to protrude slightly above the surrounding skin (piloerection) and forming a pore encased with skin oil. This process results in goose bumps (or goose flesh).
Also attached to the follicle is a sebaceous gland, which produces the oily or waxy substance sebum. The higher the density of the hair, the more sebaceous glands that are found.
Variation
There are ethnic differences in several different hair characteristics. The differences in appearance and texture of hair are due to many factors: the position of the hair bulb relative to the hair follicle, size and shape of the dermal papilla, and the curvature of the hair follicle. The scalp hair follicle in people of European descent is elliptical in shape and, therefore, produces straight or wavy hair, whereas the scalp hair follicle of people of African descent is more curved, resulting in the growth of tightly curled hair.
Development
In utero, the epithelium and underlying mesenchyme interact to form hair follicles.
Aging
A key aspect of hair loss with age is the aging of the hair follicle. Ordinarily, hair follicle renewal is maintained by the stem cells associated with each follicle. Aging of the hair follicle appears to be primed by a sustained cellular response to the DNA damage that accumulates in renewing stem cells during aging. This damage response involves the proteolysis of type XVII collagen by neutrophil elastase in response to the DNA damage in the hair follicle stem cells. Proteolysis of collagen leads to elimination of the damaged cells and then to terminal hair follicle miniaturization.
Hair growth
Hair grows in cycles of various phases: anagen is the growth phase; catagen is the involuting or regressing phase; and telogen is the resting or quiescent phase (the names derive from the Greek prefixes ana-, kata-, and telos-, meaning up, down, and end, respectively). Each phase has several morphologically and histologically distinguishable sub-phases. Prior to the start of cycling is a phase of follicular morphogenesis (formation of the follicle). There is also a shedding phase, or exogen, that is independent of anagen and telogen, in which one or several hairs that might arise from a single follicle exit. Normally up to 85% of the hair follicles are in the anagen phase, while 10–14% are in telogen and 1–2% in catagen. The cycle's length varies on different parts of the body. For eyebrows, the cycle is completed in around 4 months, while the scalp takes 3–4 years to finish; this is the reason eyebrow hairs have a much shorter length limit than hair on the head. Growth cycles are controlled by chemical signals such as epidermal growth factor. DLX3 is a crucial regulator of hair follicle differentiation and cycling.
Anagen phase
Anagen is the active growth phase of hair follicles during which the root of the hair is dividing rapidly, adding to the hair shaft. During this phase the hair grows about 1 cm every 28 days. A hair pulled out in this phase will typically have the root sheath attached to it which appears as a clear gel coating the first few mm of the hair from its base; this may be misidentified as the follicle, the root or the sebaceous gland by non-health care professionals. Scalp hair stays in this active phase of growth for 2–7 years; this period is genetically determined. At the end of the anagen phase an unknown signal causes the follicle to go into the catagen phase.
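Taken together, the quoted growth rate and anagen duration bound the maximum uncut length of scalp hair (a rough back-of-the-envelope estimate from the figures above, ignoring individual variation):

\[ L_{\max} \approx \frac{1\ \text{cm}}{28\ \text{days}} \times (2\text{–}7\ \text{yr}) \approx 13\ \text{cm/yr} \times (2\text{–}7\ \text{yr}) \approx 26\text{–}91\ \text{cm}. \]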
Catagen phase
The catagen phase is a short transition stage that occurs at the end of the anagen phase. It signals the end of the active growth of a hair. This phase lasts for about 2–3 weeks while the hair converts to a club hair, which is formed during the catagen phase when the part of the hair follicle in contact with the lower portion of the hair becomes attached to the hair shaft. A bulb of keratin attaches to the bottom tip of the hair and keeps it in place while a new hair begins to grow below it. A hair pulled out in this phase will have the bulb of keratin attached to it which appears as a small white ball on the end of the hair. This process cuts the hair off from its blood supply and from the cells that produce new hair. When a club hair is completely formed, about a 2-week process, the hair follicle enters the telogen phase.
Telogen phase
The telogen phase is the resting phase of the hair follicle, lasting about three months. When the body is subjected to extreme stress, as much as 70 percent of hair can prematurely enter the telogen phase and begin to fall, causing a noticeable loss of hair. This condition is called telogen effluvium. The club hair is the final product of a hair follicle in the telogen stage, and is a dead, fully keratinized hair. Fifty to one hundred club hairs are shed daily from a normal scalp.
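The quoted shedding rate is roughly consistent with the phase proportions given earlier. Assuming a scalp of about 100,000 follicles (a commonly cited approximation, not stated in this article), with roughly 10% in a telogen phase of about 90 days:

\[ \frac{100{,}000 \times 0.10}{90\ \text{days}} \approx 110\ \text{hairs shed per day}, \]

of the same order as the 50–100 club hairs quoted above.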
Timeline
Scalp: The time these phases last varies from person to person. Different hair color and follicle shape affects the timings of these phases.
Anagen phase, 2–8 years (occasionally much longer)
Catagen phase, 2–3 weeks
Telogen phase, around 3 months
Eyebrows:
Anagen phase, 4–7 months
Catagen phase, 3–4 weeks
Telogen phase, about 9 months
Clinical significance
Disease
There are many human diseases in which abnormalities in hair appearance, texture or growth are early signs of local disease of the hair follicle or systemic illness. Well known diseases of the hair follicle include alopecia or hair loss, hirsutism or excess hair growth, and lupus erythematosus. Therefore, understanding the function of the normal hair follicle is fundamental to diagnosing and treating many dermatologic and systemic diseases with hair abnormalities. A 2020 study by Witka et al. showed the role of the microbiome in the biology, immunology and diseases of the scalp hair follicle. Studies have further shown that changes in the hair follicle microbiome can result in scalp diseases such as seborrheic dermatitis of the scalp and dandruff, folliculitis decalvans, androgenetic alopecia, scalp psoriasis and alopecia areata.
Hair restoration
Hair follicles form the basis of the two primary methods of hair transplantation in hair restoration, Follicular Unit Transplantation (FUT) and follicular unit extraction (FUE). In each of these methods, naturally occurring groupings of one to four hairs, called follicular units, are extracted from the hair restoration patient and then surgically implanted in the balding area of the patient's scalp, known as the recipient area. These follicles are extracted from donor areas of the scalp, or other parts of the body, which are typically resistant to the miniaturization effects of the hormone DHT. It is this miniaturization of the hair shaft that is the primary predictive indicator of androgenetic alopecia, commonly referred to as male pattern baldness or male hair loss. When these DHT-resistant follicles are transplanted to the recipient area, they continue to grow hair in the normal hair cycle, thus providing the hair restoration patient with permanent, naturally-growing hair.
While hair transplantation dates back to the 1950s, and plucked human hair follicle cell culture in vitro to the early 1980s, it was not until 1995 that hair transplantation using individual follicular units was introduced into the medical literature.
| Biology and health sciences | Integumentary system | Biology |
464075 | https://en.wikipedia.org/wiki/Sebaceous%20gland | Sebaceous gland | A sebaceous gland or oil gland is a microscopic exocrine gland in the skin that opens into a hair follicle to secrete an oily or waxy matter, called sebum, which lubricates the hair and skin of mammals. In humans, sebaceous glands occur in the greatest number on the face and scalp, but also on all parts of the skin except the palms of the hands and soles of the feet. In the eyelids, meibomian glands, also called tarsal glands, are a type of sebaceous gland that secrete a special type of sebum into tears. Surrounding the female nipples, areolar glands are specialized sebaceous glands for lubricating the nipples. Fordyce spots are benign, visible, sebaceous glands found usually on the lips, gums and inner cheeks, and genitals.
Structure
Location
In humans, sebaceous glands are found throughout all areas of the skin, except the palms of the hands and soles of the feet. There are two types of sebaceous glands: those connected to hair follicles and those that exist independently.
Sebaceous glands are found in hair-covered areas, where they are connected to hair follicles. One or more glands may surround each hair follicle, and the glands themselves are surrounded by arrector pili muscles, forming a pilosebaceous unit. The glands have an acinar structure (like a many-lobed berry), in which multiple glands branch off a central duct. The glands deposit sebum on the hairs and bring it to the skin surface along the hair shaft. The structure, consisting of hair, hair follicles, arrector pili muscles, and sebaceous glands, is an epidermal invagination known as a pilosebaceous unit.
Sebaceous glands are also found in hairless areas (glabrous skin) of the eyelids, nose, penis, labia minora, the inner mucosal membrane of the cheek, and nipples. Some sebaceous glands have unique names. Sebaceous glands on the lip and mucosa of the cheek, and on the genitalia, are known as Fordyce spots, and glands on the eyelids are known as meibomian glands. Sebaceous glands of the breast are also known as Montgomery's glands.
Development
Sebaceous glands are first visible from the 13th to the 16th week of fetal development, as bulges on hair follicles. Sebaceous glands develop from the same tissue that gives rise to the epidermis of the skin. Overexpression of any of the signalling factors Wnt, Myc and SHH increases the likelihood of sebaceous gland presence.
The sebaceous glands of a human fetus secrete a substance called vernix caseosa, a waxy, translucent white substance coating the skin of newborns. After birth, activity of the glands decreases until there is almost no activity during ages two–six years, and then increases to a peak of activity during puberty, due to heightened levels of androgens.
Function
Relative to keratinocytes that make up the hair follicle, sebaceous glands are composed of huge cells with many large vesicles that contain the sebum. These cells express Na+ and Cl− ion channels, ENaC and CFTR (see Fig. 6 and Fig. 7 in reference).
Sebaceous glands secrete the oily, waxy substance called sebum () that is made of triglycerides, wax esters, squalene, and metabolites of fat-producing cells. Sebum lubricates the skin and hair of mammals. Sebaceous secretions in conjunction with apocrine glands also play an important thermoregulatory role. In hot conditions, the secretions emulsify the sweat produced by the eccrine sweat glands and this produces a sheet of sweat that is not readily lost in drops of sweat. This is of importance in delaying dehydration. In colder conditions, the nature of sebum becomes more lipid, and in coating the hair and skin, rain is effectively repelled.
Sebum is produced in a holocrine process, in which cells within the sebaceous gland rupture and disintegrate as they release the sebum and the cell remnants are secreted together with the sebum. The cells are constantly replaced by mitosis at the base of the duct.
Sebum
Sebum is secreted by the sebaceous gland in humans. It is primarily composed of triglycerides (≈41%), wax esters (≈26%), squalene (≈12%), and free fatty acids (≈16%). The composition of sebum varies across species. Wax esters and squalene are unique to sebum and not produced as final products anywhere else in the body. Sapienic acid is a sebum fatty acid that is unique to humans, and is implicated in the development of acne. Sebum is odorless, but its breakdown by bacteria can produce strong odors.
Sex hormones are known to affect the rate of sebum secretion; androgens such as testosterone have been shown to stimulate secretion, and estrogens have been shown to inhibit secretion. Dihydrotestosterone acts as the primary androgen in the prostate and in hair follicles.
Immune function and nutrition
Sebaceous glands are part of the body's integumentary system and serve to protect the body against microorganisms. Sebaceous glands secrete acids that form the acid mantle. This is a thin, slightly acidic film on the surface of the skin that acts as a barrier to microbes that might penetrate the skin. The pH of the skin is between 4.5 and 6.2, an acidity that helps to neutralize the alkaline nature of contaminants. Sebaceous lipids help maintain the integrity of the skin barrier and supply vitamin E to the skin.
Unique sebaceous glands
During the last three months of fetal development, the sebaceous glands of the fetus produce vernix caseosa, a waxy white substance that coats the skin to protect it from amniotic fluid.
The areolar glands are in the areola that surrounds the nipple in the female breast. These glands secrete an oily fluid that lubricates the nipple, and also secrete volatile compounds that are thought to serve as an olfactory stimulus for the newborn. During pregnancy and lactation these glands, also called Montgomery's glands, become enlarged.
Meibomian glands, in the eyelids, secrete a form of sebum called meibum onto the eye, that slows the evaporation of tears. They also serve to create an airtight seal when the eyes are closed, and their lipid quality also prevents the eyelids from sticking together. They attach directly to the follicles of the eyelashes, which are arranged vertically within the tarsal plates of the eyelids.
Fordyce spots, or Fordyce granules, are ectopic sebaceous glands found on the genitals and oral mucosa. They show themselves as yellowish-white milia (milk spots).
Earwax is partly composed of sebum produced by glands in the ear canal. These secretions are viscous and have a high lipid content, which provides good lubrication.
Clinical significance
Sebaceous glands are involved in skin problems such as acne and keratosis pilaris. In the skin pores, sebum and keratin can create a hyperkeratotic plug called a comedo.
Acne
Acne is a common occurrence, particularly during puberty in teenagers, and is thought to relate to an increased production of sebum due to hormonal factors. The increased production of sebum can lead to a blockage of the sebaceous gland duct. This can cause a comedo (commonly called a blackhead or a whitehead), which can lead to infection, particularly by the bacteria Cutibacterium acnes. This can inflame the comedones, which then change into the characteristic acne lesions. Comedones generally occur on the areas with more sebaceous glands, particularly the face, shoulders, upper chest and back. Comedones may be "black" or "white" depending on whether the entire pilosebaceous unit, or just the sebaceous duct, is blocked. Sebaceous filaments—innocuous build-ups of sebum—are often mistaken for whiteheads.
There are many treatments available for acne from reducing sugars in the diet, to medications that include antibiotics, benzoyl peroxide, retinoids, and hormonal treatments. Retinoids reduce the amount of sebum produced by the sebaceous glands. Should the usual treatments fail, the presence of the Demodex mite could be looked for as the possible cause.
Other
Other conditions that involve the sebaceous glands include:
Seborrhoea refers to overactive sebaceous glands, a cause of oily skin or hair.
Sebaceous hyperplasia, referring to excessive proliferation of the cells within the glands, and visible macroscopically as small papules on the skin, particularly on the forehead, nose and cheeks.
Seborrhoeic dermatitis, a chronic, usually mild form of dermatitis caused by changes in the sebaceous glands. In newborn infants, seborrhoeic dermatitis can occur as cradle cap.
Seborrheic-like psoriasis (also known as "Sebopsoriasis", and "Seborrhiasis") is a skin condition characterized by psoriasis with an overlapping seborrheic dermatitis.
Sebaceous adenoma, a benign slow-growing tumour—which may, however, in rare cases be a precursor to a cancer syndrome known as Muir–Torre syndrome.
Sebaceous carcinoma, an uncommon and aggressive cutaneous tumour.
Sebaceous cyst is a term used to refer to both an epidermoid cyst and a pilar cyst, though neither of these contains sebum (only keratin) nor originates in the sebaceous gland, so neither is a true sebaceous cyst. A true sebaceous cyst is relatively rare and is known as a steatocystoma.
Nevus sebaceous, a hairless region or plaque on the scalp or skin, caused by an overgrowth of sebaceous glands. The condition is congenital and the plaque becomes thicker into adulthood.
Phymatous rosacea is a cutaneous condition characterized by an overgrowth of sebaceous glands.
History
The word sebaceous, meaning 'consisting of sebum', was first recorded in 1728 and comes from the Latin for 'tallow'. Sebaceous glands have been documented since at least 1746, when Jean Astruc defined them as "...the glands which separate the fat." He described them in the oral cavity and on the head, eyelids, and ears as "universally" acknowledged. Astruc described them being blocked by "small animals" that are "implanted" in the excretory ducts, and attributed their presence in the oral cavity to aphthous ulcers, noting that "these glands naturally [secrete] a viscous humour, which puts on various colours and consistencies... in its natural state is very mild, balsamic, and intended to wet and lubricate the mouth". In The Principles of Physiology (1834), Andrew Combe noted that the glands were not present in the palms of the hands or soles of the feet.
Other animals
The preputial glands of mice and rats are large modified sebaceous glands that produce pheromones used for territorial marking. These, and the scent glands in the flanks of hamsters, have a similar composition to human sebaceous glands, are androgen-responsive, and have been used as a basis for study. Some species of bat, including the Mexican free-tailed bat, have a specialized sebaceous gland on the throat called a "gular gland". This gland is present more frequently in males than females, and it is hypothesized that its secretions are used for scent-marking.
Sebaceous adenitis is an autoimmune disease that affects sebaceous glands. It is mainly known to occur in dogs, particularly poodles and akitas, where it is thought to be generally autosomal recessively inherited. It has also been described in cats, and one report describes this condition in a rabbit. In these animals, it causes hair loss, though the nature and distribution of the hair loss differs greatly.
| Biology and health sciences | Exocrine system | Biology |
464150 | https://en.wikipedia.org/wiki/Pluvial%20lake | Pluvial lake | A pluvial lake is a body of water that accumulated in a basin because of a greater moisture availability resulting from changes in temperature and/or precipitation. These intervals of greater moisture availability are not always contemporaneous with glacial periods. Pluvial lakes are typically closed lakes that occupied endorheic basins. Pluvial lakes that have since evaporated and dried out may also be referred to as paleolakes.
Etymology
The word comes from the Latin pluvia, which means "rain".
Geology
Pluvial lakes represent changes in the hydrological cycle: wet cycles generate large lakes, and dry cycles cause the lakes to recede. Accumulated sediments record the variation in water level. During glacial periods, when the lake level is fairly high, mud sediments settle out and are deposited. During interglacial periods, salt deposits may form because of the arid climate and the evaporation of lake water.
Several pluvial lakes formed in what is now the southwestern United States during the glaciation of the late Pleistocene. One of these was Lake Bonneville in western Utah, which covered roughly . When Lake Bonneville was at its maximum water level, it was higher than the Great Salt Lake.
Freshwater mollusks have been found in mud deposits from Searles Lake in California and suggest that the water temperature was about 7 degrees Fahrenheit (about 4 degrees Celsius) cooler than current temperatures. Radiocarbon dating of the youngest mud beds yields dates from 24,000 to 12,000 years ago.
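The two figures express the same temperature interval in different scales; converting an interval (as opposed to an absolute temperature) uses only the scale factor:

\[ \Delta T_{\mathrm{C}} = \frac{5}{9}\,\Delta T_{\mathrm{F}} = \frac{5}{9} \times 7 \approx 3.9 \approx 4\ ^\circ\mathrm{C}. \]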
Formation
When warm air from arid regions meets chilled air from glaciers, cloudy, cool, rainy weather is created beyond the terminus of the glacier. That humid climate was present during the last glacial period in North America and produced more precipitation than evaporation. The increased rainfall filled the drainage basins and formed lakes.
During interglacial periods, the climate becomes arid once more and causes the lakes to evaporate and dry up.
| Physical sciences | Hydrology | Earth science |
464301 | https://en.wikipedia.org/wiki/Pinus%20strobus | Pinus strobus | Pinus strobus, commonly called the eastern white pine, northern white pine, white pine, Weymouth pine (British), and soft pine, is a large pine native to eastern North America. It occurs from Newfoundland, Canada, west through the Great Lakes region to southeastern Manitoba and Minnesota, United States, and south along the Appalachian Mountains and upper Piedmont to northernmost Georgia, and it is very rare at some of the higher elevations in northeastern Alabama. It is considered rare in Indiana.
The Haudenosaunee maintain the tree as the central symbol of their multinational confederation, calling it the "Tree of Peace", where the Seneca use the name o’sóä’ and the Kanienʼkehá:ka call it onerahtase'ko:wa. Within the Wabanaki Confederacy, the Mi'kmaq use the term guow to name the tree, both the Wolastoqewiyik and Peskotomuhkatiyik call it kuw or kuwes, and the Abenaki use the term kowa.
It is known as the "Weymouth pine" in the United Kingdom, after Captain George Weymouth of the British Royal Navy, who brought its seeds to England from Maine in 1605.
Distribution
P. strobus is found in the nearctic temperate broadleaf and mixed forests biome of eastern North America. It prefers well-drained or sandy soils and humid climates, but can also grow in boggy areas and rocky highlands. In mixed forests, this dominant tree towers over many others, including some of the large broadleaf hardwoods. It provides food and shelter for numerous forest birds, such as the red crossbill, and small mammals such as squirrels.
Fossilized white pine leaves and pollen have been discovered by Brian Axsmith, a paleobotanist at the University of South Alabama, in the Gulf Coastal Plain, where the tree no longer occurs.
Eastern white pine forests originally covered much of north-central and northeastern North America. Only 1% of the old-growth forests remain after the extensive logging operations from the 18th century to early 20th century.
Old-growth forests, or virgin stands, are protected in Great Smoky Mountains National Park. Other protected areas with known virgin forests, as confirmed by the Eastern Native Tree Society, include Algonquin Provincial Park, Quetico Provincial Park, Algoma Highlands in Ontario, and Sainte-Marguerite River Old Forest in Quebec, Canada; Estivant Pines, Huron Mountains, Porcupine Mountains State Park, and Sylvania Wilderness Area in the Upper Peninsula of Michigan, United States; Hartwick Pines State Park in the Lower Peninsula of Michigan; Menominee Indian Reservation in Wisconsin; Lost 40 Scientific and Natural Area (SNA) and Boundary Waters Canoe Area Wilderness in Minnesota; White Pines State Park, Illinois; Cook Forest State Park, Hearts Content Scenic Area, and Anders Run Natural Area in Pennsylvania; and the Linville Gorge Wilderness in North Carolina, United States.
Small groves or individual specimens of old-growth eastern white pines are found across the range of the species in the USA, including in Ordway Grove, Maine; Ice Glen, Massachusetts; and Adirondack Park, New York. Many sites with conspicuously large specimens represent advanced old-field ecological succession. The tall stands in Mohawk Trail State Forest and William Cullen Bryant Homestead in Massachusetts are examples.
As an introduced species, P. strobus is now naturalizing in the Outer Western Carpathians subdivision of the Carpathian Mountains in the Czech Republic and southern Poland. It has spread from specimens planted as ornamental trees.
Description
Like most members of the white pine group, Pinus subgenus Strobus, the leaves ("needles") occur in fascicles (bundles) of five, or rarely three or four, with a deciduous sheath. The leaves are flexible, bluish-green, finely serrated, and long.
The seed cones are slender, long (rarely longer than that) and broad when open, and have scales with a rounded apex and slightly reflexed tip, often resinous. The seeds are long, with a slender wing, and are dispersed by wind. Cone production peaks every 3 to 5 years.
The branches are spaced about every 18 inches on the trunk with five or six branches appearing like spokes on a wagon wheel. Eastern white pine is self-fertile, but seeds produced this way tend to result in weak, stunted, and malformed seedlings. Mature trees are often 200–250 years old, and some live over 400 years. A tree growing near Syracuse, New York, was dated to 458 years old in the late 1980s and trees in Michigan and Wisconsin were dated to roughly 500 years old.
Dimensions
The eastern white pine has been described as the tallest tree in eastern North America, perhaps sharing the prize with the deciduous tulip tree whose range overlaps with eastern white pine in a few areas. In natural precolonial stands, the pine is reported to have grown as tall as . No means exist for accurately documenting the height of trees from these times, but eastern white pine may have reached this height on rare occasions. Even greater heights have been reported in popular, but unverifiable, accounts such as Robert Pike's Tall Trees, Tough Men.
Total trunk volumes of the largest specimens are around , with some past giants possibly reaching . Photographic analysis of giants suggests volumes closer to .
Height
P. strobus grows about annually between the ages of 15 and 45 years, with slower height increments before and after that age range. The tallest presently living specimens are tall, as determined by the Native Tree Society (NTS). Prior to their exploitation, it was common for white pines in northern Wisconsin to reach heights of over . Three locations in the Southeastern United States and one site in the Northeastern United States have trees that are tall. A common height is 80 feet or more.
The southern Appalachian Mountains have the most locations and the tallest trees in the present range of P. strobus. One survivor is a specimen known as the "Boogerman Pine" in the Cataloochee Valley of Great Smoky Mountains National Park. At tall, it is the tallest accurately measured tree in North America east of the Rocky Mountains, though this conflicts with citations for Liriodendron tulipifera. It has been climbed and measured by tape drop by the NTS. Before Hurricane Opal broke its top in October 1995, Boogerman Pine was tall, as determined by Will Blozan and Robert Leverett using ground-based measurements.
The tallest specimens in Hartwick Pines State Park in Michigan are tall.
In the northeastern USA, eight sites in four states currently have trees over tall, as confirmed by the NTS. The Cook Forest State Park of Pennsylvania has the most numerous collection of eastern white pines in the Northeast, with 110 trees measuring that height or more. The park's "Longfellow Pine" is the tallest presently living eastern white pine in the Northeast, at tall, as determined by tape drop. The Mohawk Trail State Forest of Massachusetts has 83 trees measuring or more tall, of which six exceed . The "Jake Swamp Tree" located there is tall. The NTS maintains precise measurements of it. A private property in Claremont, New Hampshire, has approximately 60 specimens that are at least , with the tallest being .
Diameter
Diameters of the larger pines range from , which translates to a circumference (girth) range of . However, single-trunked white pines in both the Northeast and Southeast with diameters over are exceedingly rare. Notable big pine sites of or less often have no more than two or three trees in the 1.2- to 1.4-m-diameter class. A common diameter is 2–3 feet.
Unconfirmed reports from the colonial era gave diameters of virgin white pines of up to .
Mortality and disease
Because the eastern white pine tree is somewhat resistant to fire, mature survivors are able to reseed burned areas. In pure stands, mature trees usually have no branches on the lower half of their trunks. The white pine weevil (Pissodes strobi) and white pine blister rust (Cronartium ribicola), an introduced fungus, can damage or kill these trees.
Blister rust
Mortality from white pine blister rust in mature pine groves was often 50–80% during the early 20th century. The fungus must spend part of its lifecycle on alternate hosts of the genus Ribes, the native gooseberry or wild currant. Foresters proposed that if all the alternate host plants were removed, white pine blister rust might be eliminated. A very determined campaign was mounted, and all land owners in commercial pine-growing regions were encouraged to uproot and kill all native gooseberry and wild currant plants. The ramifications for wildlife and habitat ecology were of less concern at the time than timber-industry protection.
Today, native wild currants are relatively rare plants in New England, and planting wild currants or wild gooseberries is strongly discouraged, or even illegal in some jurisdictions. As an alternative, new strains of commercial currants have been developed that are highly resistant to white pine blister rust. Mortality in white pines from rust is only about 3% today.
Conservation status in the United States
Old white pines are treasured in the United States. An American National Natural Landmark, Cook Forest State Park in Pennsylvania, contains the tallest known tree in the Northeastern United States, a white pine named the Longfellow Pine. Some white pines in Wisconsin are over 200 years old. Although widely planted as a landscape tree in the Midwestern states, the native white pine is listed as "rare or uncommon" in Indiana.
Historical uses
Lumber
In the 19th century, the harvesting of Midwestern white pine forests played a major role in America's westward expansion through the Great Plains. A quarter-million white pines were harvested and sent to lumber yards in Chicago in a single year.
The white pine had aesthetic appeal to contemporary naturalists such as Henry David Thoreau ("There is no finer tree."). Beyond that, it had commercial applications: a 19th-century source considered it "the most sought and most widely utilized of the various forest growths of the northwest."
The species was imported in 1620 to England by Captain George Weymouth, who planted it for a timber crop, but had little success because of white pine blister rust disease.
Old-growth pine in the Americas, of various Pinus species, was a highly desired wood since huge, knot-free boards were the rule rather than the exception. Pine was common and easy to cut, thus many colonial homes used pine for paneling, floors, and furniture. Pine was also a favorite tree of loggers, since pine logs can still be processed in a lumber mill a year or more after being cut down. In contrast, most hardwood trees such as cherry, maple, oak, and ash must be cut into 1" thick boards immediately after felling, or else large cracks will develop in the trunk which can render the wood worthless.
Although eastern white pine was frequently used for flooring in buildings constructed before the U.S. Civil War, the wood is soft and tends to cup over time with wear. George Washington opted for the much harder southern yellow pine at Mount Vernon, instead.
Mast pines
During the 17th and 18th centuries, tall white pines in the Thirteen Colonies became known as "mast pines". Marked by agents of the Crown with the broad arrow, a mast pine was reserved for the British Royal Navy. Special barge-like vessels were built to ship tall white pines to England. The wood was often squared to better fit in the holds of these ships. A mast was about at the butt and at the top, while a mast was by on its ends.
By 1719, Portsmouth, New Hampshire, had become the hub of pine logging and shipping. Portsmouth shipped 199 masts to England that year. In all, about 4500 masts were sent to England.
The eastern white pine played a significant role in the events leading to the American Revolution. The marking of large white pines by the Crown had become controversial in the colonies by the first third of the 18th century. In 1734, the King's men were assaulted and beaten in Exeter, New Hampshire, in what came to be called the Mast Tree Riot. Colonel David Dunbar had been in the town investigating a stockpile of white pine in a pond and the ownership of the local timber mill before caning two townspeople. In 1772, the sheriff of Hillsborough County, New Hampshire, was sent to the town of Weare to arrest mill owners for the illegal possession of large white pines. That night, as the sheriff slept at the Pine Tree Tavern, he was attacked and nearly killed by an angry mob of colonists. This act of rebellion, later to become known as the Pine Tree Riot, may have fueled the Boston Tea Party in 1773.
After the Revolutionary War, the fledgling United States used large white pines to build out its own navy. The masts of the USS Constitution were originally made of eastern white pine. The original masts were single trees, but were later replaced by laminated spars to better withstand cannonballs.
In colonial times, an unusually large, lone, white pine was found in coastal South Carolina along the Black River, far east of its southernmost normal range. The king's mark was carved into it, giving rise to the town of Kingstree.
Eastern white pine is now widely grown in plantation forestry within its native area.
Contemporary uses
Lumber
Timber framing
Eastern white pine has often been used for timber frames and is available in large sizes. Because eastern white pine timbers are not particularly strong, larger cross-sections are used to carry the applied loads. The species accepts stains better than most, but it has little rot resistance, so it should be used only in dry conditions.
Characteristics
Freshly cut eastern white pine is yellowish white or a pale straw color, but pine wood which has aged many years tends to darken to a deep, rich, golden tan. Occasionally, one can find light brown pine boards with unusual yellowish-golden or reddish-brown hues. This is the famous "pumpkin pine". Slow growing pines in old-growth forests are thought to accumulate colored products in the heartwood, but genetic factors and soil conditions may also play a role in rich color development.
This wood is also favored by patternmakers for its easy working.
Ecology
Cottontail rabbits, snowshoe hares, and porcupines eat the bark. Red squirrels eat the seeds, which they extract from the cones. The seeds are also eaten by crossbills, pine siskins, and white-tailed deer.
Foods and medicines
Eastern white pine needles contain more vitamin C than lemons and oranges and make an excellent herbal tea. The cambium is edible and is also a source of resveratrol. Linnaeus noted in the 18th century that cattle and pigs fed pine bark bread grew well, but he personally did not like the taste.
Pine tar is produced by slowly burning pine roots, branches, or small trunks in a partially smothered flame. Pine tar mixed with beer can be used to remove tapeworms (flatworms) or nematodes (roundworms). Pine tar mixed with sulfur is useful for treating dandruff and is marketed in present-day products. Pine tar can also be processed to make turpentine.
Native American traditional uses
The name "Adirondack", an Iroquois word that means tree-eater, referred to their neighbors (more commonly known as the Algonquians) who collected the inner bark of P. strobus, Picea rubens, and others during times of winter starvation. The white, soft inner bark (cambial layer) was carefully separated from the hard, dark brown bark and dried. When pounded, this product can be used as flour or added to stretch other starchy products.
The young staminate cones were stewed by the Ojibwe Indians with meat, and were said to be sweet and not pitchy. In addition, the seeds are sweet and nutritious, but not as tasty as those of some of the western nut pines.
Pine resin (sap) has been used by various tribes to waterproof baskets, pails, and boats. The Ojibwe also used pine resin to successfully treat infections and even gangrenous wounds, because pine resin apparently has a number of quite efficient antimicrobials. Generally, a wet pulp from the inner bark, or pine tar mixed with beeswax or butter was applied to wounds and used as a salve to prevent infection.
Cultivation
P. strobus is cultivated by plant nurseries as an ornamental tree, for planting in gardens and parks. The species is low-maintenance and rapid-growing as a specimen tree. With regular shearing, it can also be trained as a hedge. Some cultivars are used in bonsai.
Cultivars
Cultivars have been selected for small to dwarf mature forms, and foliage color characteristics. They include:
P. strobus Nana group – tall by wide. MBG: Pinus strobus (Nana Group)
P. strobus 'Macopin' – tall & wide. MBG: Pinus strobus 'Macopin'
P. strobus 'Paul Waxman' – tall & wide. MBG: Pinus strobus 'Paul Waxman'
Christmas trees
Smaller specimens are popular as live Christmas trees. Eastern white pines are noted for holding their needles well, even long after being harvested. They also are well suited for people with allergies, as they give little to no aroma. A standard tree takes around 6 to 8 years to grow in ideal conditions. Sheared varieties are usually desired because of their stereotypical Christmas tree conical shape, as naturally grown ones can be sparse, or grow bushy in texture. The branches of the eastern white pine are also widely used in making holiday wreaths and garlands because of their soft, feathery needles.
Water filtration
White pine xylem has been used as a filter to clean certain bacteria from contaminated water. Hemacytometer tests revealed that at least 99.9% of bacteria tested were rejected after being passed through white pine xylem.
Symbolism
The eastern white pine is the provincial tree of Ontario, Canada.
In the United States, it is the state tree of Maine (as of 1945) and Michigan (as of 1955). Its "pine cone and tassel" is also the state flower of Maine and is prominently featured on the state's license plates. Sprigs of eastern white pine were worn as badges as a symbol of Vermont identity during the Vermont Republic; the tree is depicted in a stained-glass window in the Vermont State House, on the Flag of Vermont, and on the naval ensigns of the Commonwealth of Massachusetts and the state of Maine. The 1901 Maine flag prominently featured the tree during its brief tenure as Maine's state flag. The Maine State Guard also uses the tree in its uniform badges.
The indigenous Haudenosaunee (Iroquois Confederation) named it the "Tree of Peace". Since 2017, it has appeared on the flag and seal of the city of Montreal to represent the indigenous peoples of the area.
| Biology and health sciences | Pinaceae | Plants |
464400 | https://en.wikipedia.org/wiki/Aquificota | Aquificota | The Aquificota phylum is a diverse collection of bacteria that live in harsh environmental settings. The name Aquificota was given to this phylum based on an early genus identified within this group, Aquifex (“water maker”), which is able to produce water by oxidizing hydrogen. They have been found in springs, pools, and oceans. They are autotrophs, and are the primary carbon fixers in their environments. These bacteria are Gram-negative, non-spore-forming rods. They are true bacteria (domain Bacteria) as opposed to the other inhabitants of extreme environments, the Archaea.
Taxonomy
The Aquificota currently contain 15 genera and 42 validly published species. The phylum comprises three classes, each with its respective order. Aquificales consists of the families Aquificaceae and Hydrogenothermaceae, while Desulfurobacteriaceae is the only family within the Desulfurobacteriales. Thermosulfidibacter takaii is not assigned to a family within the phylum because of its phylogenetic distinctness from both orders. It is currently classified as a member of Aquificales, but it has shown more physiological similarity to the Desulfurobacteriaceae.
Molecular signatures and phylogenetic position
Comparative genomic studies have identified several conserved signature indels (CSIs) that are specific for all species belonging to the phylum Aquificota and provide potential molecular markers. The order Aquificales can be distinguished from the Desulfurobacteriales by several CSIs across different proteins that are specific for each group. Additional CSIs have been found at the family level, and can be used to demarcate the Aquificaceae and Hydrogenothermaceae from all other bacteria. In parallel with the observed CSI distribution, the orders within the Aquificota are also physiologically distinct from one another. Members of the Desulfurobacteriales are strict anaerobes that exclusively oxidize hydrogen for energy, whereas those belonging to the Aquificales are microaerophilic and capable of oxidizing other compounds (such as sulfur or thiosulfate) in addition to hydrogen.
Several CSIs have also been identified that are specific for the species from the Aquificota and provide potential molecular markers for this phylum. Additionally, a 51-amino-acid insertion has been identified in the SecA preprotein translocase which is shared by all members of the Aquificota, as well as all members of the order Thermotogales. Phylogenetic studies demonstrated that the presence of the same CSI within these two unrelated groups of bacteria is not due to lateral gene transfer; rather, the CSI likely developed independently in these two groups of thermophiles due to selective pressure. The 51-amino-acid insertion is located on the surface of SecA near the binding site of ADP/ATP. Molecular dynamics simulations revealed a network of water molecules forming an intermediate interaction between residues of the 51 aa CSI and ADP molecules, which serves to stabilize the hydrogen bonds formed between ADP/ATP and the protein. It is suggested that the network of hydrogen bonds formed between the water molecules, CSI residues and ADP/ATP helps to maintain ATP/ADP binding to the SecA protein at high temperatures, which contributes to the bacteria’s overall thermostability.
In 16S rRNA gene trees, the Aquificota species branch in the proximity of the phylum Thermotogota (another phylum comprising hyperthermophilic organisms), close to the archaeal–bacterial branch point. However, a close relationship of the Aquificota to the Thermotogota, and the deep branching of the Aquificota, is not supported by some phylogenetic studies based upon other gene/protein sequences, nor by CSIs in several highly conserved universal proteins. The rRNA genes of the Aquificota (organized in 16S-23S-5S operons) have a very high G+C content (more than 62%), which is required for the stability of their secondary structures at high growth temperatures and may bias rRNA-based placements. The inference that the Aquificota do not constitute a deep-branching lineage is also independently and strongly supported by CSIs in a number of important proteins (viz. Hsp70, Hsp60, RpoB, RpoC and AlaRS), which support its placement in the proximity of the phylum Proteobacteria, particularly the Campylobacterota. A specific relationship of the Aquificota to the Proteobacteria is supported by a two-amino-acid CSI in the protein inorganic pyrophosphatase, which is uniquely found in species from these two phyla. Cavalier-Smith has also suggested that the Aquificota are closely related to the Proteobacteria. In contrast to the above-cited analyses, which are based on a few indels or on single genes, analyses of informational genes, which appear to have been transferred to the Aquifex lineage less often than noninformational genes, most often place the Aquificales close to the Thermotogales. These authors explain the frequently observed grouping of Aquificota with Campylobacterota as the result of frequent horizontal gene transfer due to shared ecological niches.
Along with the Thermotogota, the Aquificota are thermophilic eubacteria.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
| Biology and health sciences | Gram-negative bacteria | Plants |
464545 | https://en.wikipedia.org/wiki/Strepsirrhini | Strepsirrhini | Strepsirrhini or Strepsirhini (; ) is a suborder of primates that includes the lemuriform primates, which consist of the lemurs of Madagascar, galagos ("bushbabies") and pottos from Africa, and the lorises from India and southeast Asia. Collectively they are referred to as strepsirrhines. Also belonging to the suborder are the extinct adapiform primates which thrived during the Eocene in Europe, North America, and Asia, but disappeared from most of the Northern Hemisphere as the climate cooled. Adapiforms are sometimes referred to as being "lemur-like", although the diversity of both lemurs and adapiforms does not support this comparison.
Strepsirrhines are defined by their "wet" (moist) rhinarium (the tip of the snout) – hence the colloquial but inaccurate term "wet-nosed" – similar to the rhinaria of canines and felines. They also have a smaller brain than comparably sized simians, large olfactory lobes for smell, a vomeronasal organ to detect pheromones, and a bicornuate uterus with an epitheliochorial placenta. Their eyes contain a reflective layer to improve their night vision, and their eye sockets include a ring of bone around the eye, but they lack a wall of thin bone behind it. Strepsirrhine primates produce their own vitamin C, whereas haplorhine primates must obtain it from their diets. Lemuriform primates are characterized by a toothcomb, a specialized set of teeth in the front, lower part of the mouth mostly used for combing fur during grooming.
Many of today's living strepsirrhines are endangered due to habitat destruction, hunting for bushmeat, and live capture for the exotic pet trade. Both living and extinct strepsirrhines are behaviorally diverse, although all are primarily arboreal (tree-dwelling). Most living lemuriforms are nocturnal, while most adapiforms were diurnal. Both living and extinct groups primarily fed on fruit, leaves, and insects.
Etymology
The taxonomic name Strepsirrhini derives from the Greek strepsis "a turning round" and rhis "nose, snout, (in pl.) nostrils" (GEN rhinos), which refers to the appearance of the sinuous (comma-shaped) nostrils on the rhinarium or wet nose. The name was first used by French naturalist Étienne Geoffroy Saint-Hilaire in 1812 as a subordinal rank comparable to Platyrrhini (New World monkeys) and Catarrhini (Old World monkeys). In his description, he mentioned "Les narines terminales et sinueuses" ("Nostrils terminal and winding").
When British zoologist Reginald Innes Pocock revived Strepsirrhini and defined Haplorhini in 1918, he omitted the second "r" from both ("Strepsirhini" and "Haplorhini" instead of "Strepsirrhini" and "Haplorrhini"), although he did not remove the second "r" from Platyrrhini or Catarrhini, both of which were also named by É. Geoffroy in 1812. Following Pocock, many researchers continued to spell Strepsirrhini with a single "r" until primatologists Paulina Jenkins and Prue Napier pointed out the error in 1987.
Evolutionary history
Strepsirrhines include the extinct adapiforms and the lemuriform primates, which include lemurs and lorisoids (lorises, pottos, and galagos). Strepsirrhines diverged from the haplorhine primates near the beginning of the primate radiation between 55 and 90 mya. Older divergence dates are based on genetic analysis estimates, while younger dates are based on the scarce fossil record. Lemuriform primates may have evolved from either cercamoniines or sivaladapids, both of which were adapiforms that may have originated in Asia. They were once thought to have evolved from adapids, a more specialized and younger branch of adapiform primarily from Europe.
Lemurs rafted from Africa to Madagascar between 47 and 54 mya, whereas the lorises split from the African galagos around 40 mya and later colonized Asia. The lemuriforms, and particularly the lemurs of Madagascar, are often portrayed inappropriately as "living fossils" or as examples of "basal", or "inferior" primates. These views have historically hindered the understanding of mammalian evolution and the evolution of strepsirrhine traits, such as their reliance on smell (olfaction), characteristics of their skeletal anatomy, and their brain size, which is relatively small. In the case of lemurs, natural selection has driven this isolated population of primates to diversify significantly and fill a rich variety of ecological niches, despite their smaller and less complex brains compared to simians.
Unclear origin
The divergence between strepsirrhines, simians, and tarsiers likely followed almost immediately after primates first evolved. Although few fossils of living primate groups – lemuriforms, tarsiers, and simians – are known from the Early to Middle Eocene, evidence from genetics and recent fossil finds both suggest they may have been present during the early adaptive radiation.
The origin of the earliest primates that the simians and tarsiers both evolved from is a mystery. Both their place of origin and the group from which they emerged are uncertain. Although the fossil record demonstrating their initial radiation across the Northern Hemisphere is very detailed, the fossil record from the tropics (where primates most likely first developed) is very sparse, particularly around the time that primates and other major clades of eutherian mammals first appeared.
Lacking detailed tropical fossils, geneticists and primatologists have used genetic analyses to determine the relatedness between primate lineages and the amount of time since they diverged. Using this molecular clock, divergence dates for the major primate lineages have suggested that primates evolved more than 80–90 mya, nearly 40 million years before the first examples appear in the fossil record.
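To illustrate the molecular-clock arithmetic described above, the following sketch converts a pairwise genetic distance and an assumed per-lineage substitution rate into a divergence date. The rate, distance, and resulting date are placeholder values chosen only to show the calculation, not figures from the primate studies discussed here.

```python
def divergence_time_mya(genetic_distance, substitution_rate_per_my):
    """Estimate the time since two lineages split.

    genetic_distance: substitutions per site separating the two sequences.
    substitution_rate_per_my: substitutions per site per million years
        along a single lineage. Two lineages accumulate differences
        independently, so their distance grows at twice this rate.
    """
    return genetic_distance / (2.0 * substitution_rate_per_my)

# Hypothetical numbers: 0.16 substitutions/site and a rate of
# 0.001 substitutions/site/My per lineage imply a split ~80 mya.
print(divergence_time_mya(0.16, 0.001))  # 80.0
```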
The early primates include both nocturnal and diurnal small-bodied species, and all were arboreal, with hands and feet specially adapted for maneuvering on small branches. Plesiadapiforms from the early Paleocene are sometimes considered "archaic primates", because their teeth resembled those of early primates and because they possessed adaptations to living in trees, such as a divergent big toe (hallux). Although plesiadapiforms were closely related to primates, they may represent a paraphyletic group from which primates may or may not have directly evolved, and some genera may have been more closely related to colugos, which are thought to be more closely related to primates.
The first true primates (euprimates) do not appear in the fossil record until the early Eocene (~55 mya), at which point they radiated across the Northern Hemisphere during a brief period of rapid global warming known as the Paleocene–Eocene Thermal Maximum. These first primates included Cantius, Donrussellia, Altanius, and Teilhardina on the northern continents, as well as the more questionable (and fragmentary) fossil Altiatlasius from Paleocene Africa. These earliest fossil primates are often divided into two groups, adapiforms and omomyiforms. Both appeared suddenly in the fossil record without transitional forms to indicate ancestry, and both groups were rich in diversity and were widespread throughout the Eocene.
The last branch to develop was the adapiforms, a diverse and widespread group that thrived during the Eocene (56 to 34 million years ago [mya]) in Europe, North America, and Asia. They disappeared from most of the Northern Hemisphere as the climate cooled; the last of the adapiforms died out at the end of the Miocene (~7 mya).
Adapiform evolution
Adapiform primates are extinct strepsirrhines that shared many anatomical similarities with lemurs. They are sometimes referred to as lemur-like primates, although the diversity of both lemurs and adapiforms does not support this analogy.
Like the living strepsirrhines, adapiforms were extremely diverse, with at least 30 genera and 80 species known from the fossil record as of the early 2000s. They diversified across Laurasia during the Eocene, some reaching North America via a land bridge. They were among the most common mammals found in the fossil beds from that time. A few rare species have also been found in northern Africa. The most basal of the adapiforms include the genera Cantius from North America and Europe and Donrussellia from Europe. The latter bears the most ancestral traits, so it is often considered a sister group or stem group of the other adapiforms.
Adapiforms are often divided into three major groups:
Adapids were most commonly found in Europe, although the oldest specimens (Adapoides from middle Eocene China) indicate that they most likely evolved in Asia and immigrated. They died out in Europe during the Grande Coupure, part of a significant extinction event at the end of the Eocene.
Notharctids, which most closely resembled some of Madagascar's lemurs, come from Europe and North America. The European branch is often referred to as the cercamoniines. The North American branch thrived during the Eocene but did not survive into the Oligocene. Like the adapids, the European branch had also died out by the end of the Eocene.
Sivaladapids of southern and eastern Asia are best known from the Miocene, and the only adapiforms to survive past the Eocene/Oligocene boundary (~34 mya). Their relationship to the other adapiforms remains unclear. They had vanished before the end of the Miocene (~7 mya).
The relationship between adapiform and lemuriform primates has not been clearly demonstrated, so the position of adapiforms as a paraphyletic stem group is questionable. Both molecular clock data and new fossil finds suggest that the lemuriform divergence from the other primates and the subsequent lemur-lorisoid split both predate the appearance of adapiforms in the early Eocene. New calibration methods may reconcile the discrepancies between the molecular clock and the fossil record, favoring more recent divergence dates. The fossil record suggests that the strepsirrhine adapiforms and the haplorhine omomyiforms had been evolving independently before the early Eocene, although their most basal members share enough dental similarities to suggest that they diverged during the Paleocene (66–55 mya).
Lemuriform evolution
Lemuriform origins are unclear and debated. American paleontologist Philip Gingerich proposed that lemuriform primates evolved from one of several genera of European adapids based on similarities between the front lower teeth of adapids and the toothcomb of extant lemuriforms; however, this view is not strongly supported due to a lack of clear transitional fossils. Instead, lemuriforms may be descended from a very early branch of Asian cercamoniines or sivaladapids that migrated to northern Africa.
Until discoveries of three 40 million-year-old fossil lorisoids (Karanisia, Saharagalago, and Wadilemur) in the El Fayum deposits of Egypt between 1997 and 2005, the oldest known lemuriforms had come from the early Miocene (~20 mya) of Kenya and Uganda. These newer finds demonstrate that lemuriform primates were present during the middle Eocene in Afro-Arabia and that the lemuriform lineage and all other strepsirrhine taxa had diverged before then. Djebelemur from Tunisia dates to the late early or early middle Eocene (52 to 46 mya) and has been considered a cercamoniine, but also may have been a stem lemuriform. Azibiids from Algeria date to roughly the same time and may be a sister group of the djebelemurids. Together with Plesiopithecus from the late Eocene Egypt, the three may qualify as the stem lemuriforms from Africa.
Molecular clock estimates indicate that lemurs and the lorisoids diverged in Africa during the Paleocene, approximately 62 mya. Between 47 and 54 mya, lemurs dispersed to Madagascar by rafting. In isolation, the lemurs diversified and filled the niches often filled by monkeys and apes today. In Africa, the lorises and galagos diverged during the Eocene, approximately 40 mya. Unlike the lemurs in Madagascar, they have had to compete with monkeys and apes, as well as other mammals.
History of classification
The taxonomy of strepsirrhines is controversial and has a complicated history. Confused taxonomic terminology and oversimplified anatomical comparisons have created misconceptions about primate and strepsirrhine phylogeny, illustrated by the media attention surrounding the single "Ida" fossil in 2009.
Strepsirrhine primates were first grouped under the genus Lemur by Swedish taxonomist Carl Linnaeus in the 10th edition of Systema Naturae published in 1758. At the time, only three species were recognized, one of which (the colugo) is no longer recognized as a primate. In 1785, Dutch naturalist Pieter Boddaert divided the genus Lemur into two genera: Prosimia for the lemurs, colugos, and tarsiers and Tardigradus for the lorises. Ten years later, É. Geoffroy and Georges Cuvier grouped the tarsiers and galagos due to similarities in their hindlimb morphology, a view supported by German zoologist Johann Karl Wilhelm Illiger, who placed them in the family Macrotarsi while placing the lemurs in the family Prosimia (Prosimii) in 1811. The use of the tarsier-galago classification continued for many years until 1898, when Dutch zoologist Ambrosius Hubrecht demonstrated two different types of placentation (formation of a placenta) in the two groups.
English comparative anatomist William Henry Flower created the suborder Lemuroidea in 1883 to distinguish these primates from the simians, which were grouped under English biologist St. George Jackson Mivart's suborder Anthropoidea (=Simiiformes). According to Flower, the suborder Lemuroidea contained the families Lemuridae (lemurs, lorises, and galagos), Chiromyidae (aye-aye), and Tarsiidae (tarsiers). Lemuroidea was later replaced by Illiger's suborder Prosimii. Many years earlier, in 1812, É. Geoffroy first named the suborder Strepsirrhini, in which he included the tarsiers. This taxonomy went unnoticed until 1918, when Pocock compared the structure of the nose and reinstated the use of the suborder Strepsirrhini, while also moving the tarsiers and the simians into a new suborder, Haplorhini. It was not until 1953, when British anatomist William Charles Osman Hill wrote an entire volume on strepsirrhine anatomy, that Pocock's taxonomic suggestion became noticed and more widely used. Since then, primate taxonomy has shifted between Strepsirrhini-Haplorhini and Prosimii-Anthropoidea multiple times.
Most of the academic literature provides a basic framework for primate taxonomy, usually including several potential taxonomic schemes. Although most experts agree upon phylogeny, many disagree about nearly every level of primate classification.
Controversies
The most commonly recurring debate in primatology during the 1970s, 1980s, and early 2000s concerned the phylogenetic position of tarsiers compared to both simians and the other prosimians. Tarsiers are most often placed in either the suborder Haplorhini with the simians or in the suborder Prosimii with the strepsirrhines. Prosimii is one of the two traditional primate suborders and is based on evolutionary grades (groups united by anatomical traits) rather than phylogenetic clades, while the Strepsirrhini-Haplorrhini taxonomy was based on evolutionary relationships. Yet both systems persist because the Prosimii-Anthropoidea taxonomy is familiar and frequently seen in the research literature and textbooks.
Strepsirrhines are traditionally characterized by several symplesiomorphic (ancestral) traits not shared with the simians, particularly the rhinarium. Other symplesiomorphies include long snouts, convoluted maxilloturbinals, relatively large olfactory bulbs, and smaller brains. The toothcomb is a synapomorphy (shared, derived trait) seen among lemuriforms, although it is frequently and incorrectly used to define the strepsirrhine clade. Strepsirrhine primates are also united in possessing an epitheliochorial placenta. Unlike the tarsiers and simians, strepsirrhines are capable of producing their own vitamin C and do not need it supplied in their diet. Further genetic evidence for the relationship between tarsiers and simians as a haplorhine clade is the shared possession of three SINE markers.
Because of their historically mixed assemblages which included tarsiers and close relatives of primates, both Prosimii and Strepsirrhini have been considered wastebasket taxa for "lower primates". Regardless, the strepsirrhine and haplorrhine clades are generally accepted and viewed as the preferred taxonomic division. Yet tarsiers still closely resemble both strepsirrhines and simians in different ways, and since the early split between strepsirrhines, tarsiers and simians is ancient and hard to resolve, a third taxonomic arrangement with three suborders is sometimes used: Prosimii, Tarsiiformes, and Anthropoidea. More often, the term "prosimian" is no longer used in official taxonomy, but is still used to illustrate the behavioral ecology of tarsiers relative to the other primates.
In addition to the controversy over tarsiers, the debate over the origins of simians once called the strepsirrhine clade into question. Arguments for an evolutionary link between adapiforms and simians made by paleontologists Gingerich, Elwyn L. Simons, Tab Rasmussen, and others could have potentially excluded adapiforms from Strepsirrhini. In 1975, Gingerich proposed a new suborder, Simiolemuriformes, to suggest that strepsirrhines are more closely related to simians than tarsiers. However, no clear relationship between the two had been demonstrated by the early 2000s. The idea reemerged briefly in 2009 during the media attention surrounding Darwinius masillae (dubbed "Ida"), a cercamoniine from Germany that was touted as a "missing link between humans and earlier primates" (simians and adapiforms). However, the cladistic analysis was flawed and the phylogenetic inferences and terminology were vague. Although the authors noted that Darwinius was not a "fossil lemur", they did emphasize the absence of a toothcomb, which adapiforms did not possess.
Infraordinal classification and clade terminology
Within Strepsirrhini, two common classifications include either two infraorders (Adapiformes and Lemuriformes) or three infraorders (Adapiformes, Lemuriformes, Lorisiformes). A less common taxonomy places the aye-aye (Daubentoniidae) in its own infraorder, Chiromyiformes. In some cases, plesiadapiforms are included within the order Primates, in which case Euprimates is sometimes treated as a suborder, with Strepsirrhini becoming an infraorder, and the Lemuriformes and others become parvorders. Regardless of the infraordinal taxonomy, Strepsirrhini is composed of three ranked superfamilies and 14 families, seven of which are extinct. Three of these extinct families included the recently extinct giant lemurs of Madagascar, many of which died out within the last 1,000 years following human arrival on the island.
When Strepsirrhini is divided into two infraorders, the clade containing all toothcombed primates can be called "lemuriforms". When it is divided into three infraorders, the term "lemuriforms" refers only to Madagascar's lemurs, and the toothcombed primates are referred to as either "crown strepsirrhines" or "extant strepsirrhines". Confusion of this specific terminology with the general term "strepsirrhine", along with oversimplified anatomical comparisons and vague phylogenetic inferences, can lead to misconceptions about primate phylogeny and misunderstandings about primates from the Eocene, as seen with the media coverage of Darwinius. Because the skeletons of adapiforms share strong similarities with those of lemurs and lorises, researchers have often referred to them as "primitive" strepsirrhines, lemur ancestors, or a sister group to the living strepsirrhines. They are included in Strepsirrhini, and are considered basal members of the clade. Although their status as true primates is not questioned, the questionable relationship between adapiforms and other living and fossil primates leads to multiple classifications within Strepsirrhini. Often, adapiforms are placed in their own infraorder due to anatomical differences with lemuriforms and their unclear relationship. When shared traits with lemuriforms (which may or may not be synapomorphic) are emphasized, they are sometimes reduced to families within the infraorder Lemuriformes (or superfamily Lemuroidea).
The first fossil primate described was the adapiform Adapis parisiensis by French naturalist Georges Cuvier in 1821, who compared it to a hyrax ("le Daman"), then considered a member of a now obsolete group called pachyderms. It was not recognized as a primate until it was reevaluated in the early 1870s. Originally, adapiforms were all included under the family Adapidae, which was divided into two or three subfamilies: Adapinae, Notharctinae, and sometimes Sivaladapinae. All North American adapiforms were lumped under Notharctinae, while the Old World forms were usually assigned to Adapinae. Around the 1990s, two distinct groups of European "adapids" began to emerge, based on differences in the postcranial skeleton and the teeth. One of these two European forms was identified as cercamoniines, which were allied with the notharctids found mostly in North America, while the other group falls into the traditional adapid classification. The three major adapiform divisions are now typically regarded as three families within Adapiformes (Notharctidae, Adapidae and Sivaladapidae), but other divisions ranging from one to five families are used as well.
Anatomy and physiology
Grooming apparatus
All lemuriforms possess a specialized dental structure called a "toothcomb", with the exception of the aye-aye, in which the structure has been modified into two continually growing (hypselodont) incisors (or canine teeth), similar to those of rodents. Often, the toothcomb is incorrectly used to characterize all strepsirrhines. Instead, it is unique to lemuriforms and is not seen among adapiforms.
Lemuriforms groom orally, and also possess a grooming claw on the second toe of each foot for scratching in areas that are inaccessible to the mouth and tongue. Adapiforms may have had a grooming claw, but there is little evidence of this. The toothcomb consists of either two or four procumbent lower incisors and procumbent lower canine teeth followed by a canine-shaped premolar. It is used to comb the fur during oral grooming. Shed hairs that accumulate between the teeth of the toothcomb are removed by the sublingua or "under-tongue". Adapiforms did not possess a toothcomb. Instead, their lower incisors varied in orientation – from somewhat procumbent to somewhat vertical – and the lower canines were projected upwards and were often prominent.
Eyes
As in all primates, strepsirrhine orbits (eye sockets) have a postorbital bar, a protective ring of bone created by a connection between the frontal and zygomatic bones. Both living and extinct strepsirrhines lack a thin wall of bone behind the eye, referred to as postorbital closure, which is only seen in haplorhine primates. Although the eyes of strepsirrhines point forward, giving stereoscopic vision, the orbits do not face fully forward. Among living strepsirrhines, most or all species are thought to possess a reflective layer behind the retina of the eye, called a tapetum lucidum (consisting of riboflavin crystals), which improves vision in low light, but they lack a fovea, which improves day vision. This differs from tarsiers, which lack a tapetum lucidum but possess a fovea. The relative size of the cornea in strepsirrhines is similar to that of nocturnal and cathemeral nonprimate mammals, whereas haplorhines have relatively smaller corneas than most nonprimate mammals.
Skull
Strepsirrhine primates have brains comparable in size to, or slightly larger than, those of most mammals. Compared to simians, however, they have a relatively small brain-to-body size ratio. Strepsirrhines are also traditionally noted for their unfused mandibular symphysis (the joint between the two halves of the lower jaw); however, fusion of the mandibular symphysis was common in adapiforms, notably Notharctus. Also, several extinct giant lemurs exhibited a fused mandibular symphysis.
Ears
Many nocturnal species have large, independently movable ears, although there are significant differences in sizes and shapes of the ear between species. The structure of the middle and inner ear of strepsirrhines differs between the lemurs and lorisoids. In lemurs, the tympanic cavity, which surrounds the middle ear, is expanded. This leaves the ectotympanic ring, which supports the eardrum, free within the auditory bulla. This trait is also seen in adapiforms. In lorisoids, however, the tympanic cavity is smaller and the ectotympanic ring becomes attached to the edge of the auditory bulla. The tympanic cavity in lorisoids also has two accessory air spaces, which are not present in lemurs.
Neck arteries
Both lorisoids and cheirogaleid lemurs have replaced the internal carotid artery with an enlarged ascending pharyngeal artery.
Ankle bones
Strepsirrhines also possess distinctive features in their tarsus (ankle bones) that differentiate them from haplorhines, such as a sloping talo-fibular facet (the face where the talus bone and fibula meet) and a difference in the location of the position of the flexor fibularis tendon on the talus. These differences give strepsirrhines the ability to make more complex rotations of the ankle and indicate that their feet are habitually inverted, or turned inward, an adaptation for grasping vertical supports.
Sex characteristics
Sexual dichromatism (different coloration patterns between males and females) can be seen in most brown lemur species, but otherwise lemurs show very little if any difference in body size or weight between sexes. This lack of sexual dimorphism is not characteristic of all strepsirrhines. Some adapiforms were sexually dimorphic, with males bearing a larger sagittal crest (a ridge of bone on the top of the skull to which jaw muscles attach) and canine teeth. Lorisoids exhibit some sexual dimorphism, but males are typically no more than 20 percent larger than females.
Rhinarium and olfaction
Strepsirrhines have a long snout that ends in a moist and touch-sensitive rhinarium, similar to that of dogs and many other mammals. The rhinarium is surrounded by vibrissae that are also sensitive to touch. Convoluted maxilloturbinals on the inside of their nose filter, warm, and moisten the incoming air, while olfactory receptors of the main olfactory system lining the ethmoturbinals detect airborne smells. The olfactory bulbs of lemurs are comparable in size to those of other arboreal mammals.
The surface of the rhinarium does not have any olfactory receptors, so it is not used for smell in terms of detecting volatile substances. Instead, it has sensitive touch receptors (Merkel cells). The rhinarium, upper lip, and gums are tightly connected by a fold of mucous membrane called the philtrum, which runs from the tip of the nose to the mouth. The upper lip is constrained by this connection and has fewer nerves to control movement, which leaves it less mobile than the upper lips of simians. The philtrum creates a gap (diastema) between the roots of the first two upper incisors.
The strepsirrhine rhinarium can collect relatively non-volatile, fluid-based chemicals (traditionally categorized as pheromones) and transmit them to the vomeronasal organ (VNO), which is located below and in front of the nasal cavity, above the mouth. The VNO is an encased duct-like structure made of cartilage and is isolated from the air passing through the nasal cavity. The VNO is connected to the mouth through nasopalatine ducts (which communicate via the incisive foramen), which pass through the hard palate at the top, front of the mouth. Fluids traveling from the rhinarium to the mouth and then up the nasopalatine ducts to the VNO are detected, and information is relayed to the accessory olfactory bulb, which is relatively large in strepsirrhines. From the accessory olfactory bulb, information is sent to the amygdala, which handles emotions, and then to the hypothalamus, which handles basic body functions and metabolic processes. This neural pathway differs from that used by the main olfactory system.
All lemuriforms have a VNO, as do tarsiers and some New World monkeys. Adapiforms exhibit the gap between the upper incisors, which indicates the presence of a VNO, but there is some disagreement over whether or not they possessed a rhinarium.
Reproductive physiology
Extant strepsirrhines have an epitheliochorial placenta, where the maternal blood does not come in direct contact with the fetal chorion like it does in the hemochorial placenta of haplorhines. The strepsirrhine uterus has two distinct chambers (bicornuate). Despite having similar gestation periods to comparably sized haplorhines, fetal growth rates are generally slower in strepsirrhines, which results in newborn offspring that are as little as one-third the size of haplorhine newborns. Extant strepsirrhines also have a lower basal metabolic rate, which elevates in females during gestation, putting greater demands on the mother.
Most primates have two mammary glands, but the number and positions vary between species within strepsirrhines. Lorises have two pairs, while others, like the ring-tailed lemur, have one pair on the chest (pectoral). The aye-aye also has two mammary glands, but they are located near the groin (inguinal). In females, the clitoris is sometimes enlarged and pendulous, resembling the male penis, which can make sex identification difficult for human observers. The clitoris may also have a bony structure in it, similar to the baculum (penis bone) in males. Most male primates have a baculum, but it is typically larger in strepsirrhines and usually forked at the tip.
Behavior
Approximately three-quarters of all extant strepsirrhine species are nocturnal, sleeping in nests made from dead leaves or tree hollows during the day. All of the lorisoids from continental Africa and Asia are nocturnal, a circumstance that minimizes their competition with the simian primates of the region, which are diurnal. The lemurs of Madagascar, living in the absence of simians, are more variable in their activity cycles. The aye-aye, mouse lemurs, woolly lemurs, and sportive lemurs are nocturnal, while ring-tailed lemurs and most of their kin, sifakas, and indri are diurnal. Yet some or all of the brown lemurs (Eulemur) are cathemeral, which means that they may be active during the day or night, depending on factors such as temperature and predation. Many extant strepsirrhines are well adapted for nocturnal activity due to their relatively large eyes; large, movable ears; sensitive tactile hairs; strong sense of smell; and the tapetum lucidum behind the retina. Among the adapiforms, most are considered diurnal, with the exception of Pronycticebus and Godinotia from Middle Eocene Europe, both of which had large orbits that suggest nocturnality.
Reproduction in most strepsirrhine species tends to be seasonal, particularly in lemurs. Key factors that affect seasonal reproduction include the length of the wet season, subsequent food availability, and the maturation time of the species. Like other primates, strepsirrhines are relatively slow breeders compared to other mammals. Their gestation period and interbirth intervals are usually long, and the young develop slowly, just like in haplorhine primates. Unlike simians, some strepsirrhines produce two or three offspring, although some produce only a single offspring. Those that produce multiple offspring tend to build nests for their young. These two traits are thought to be plesiomorphic (ancestral) for primates. The young are precocial (relatively mature and mobile) at birth, but not as coordinated as ungulates (hoofed mammals). Infant care by the mother is relatively prolonged compared to many other mammals, and in some cases, the infants cling to the mother's fur with their hands and feet.
Despite their relatively smaller brains compared to other primates, lemurs have demonstrated levels of technical intelligence in problem solving that are comparable to those seen in simians. However, their social intelligence differs, often emphasizing within-group competition over cooperation, which may be due to adaptations for their unpredictable environment. Although lemurs have not been observed using objects as tools in the wild, they can be trained to use objects as tools in captivity and demonstrate a basic understanding about the functional properties of the objects they are using.
Social systems and communication
The nocturnal strepsirrhines have been traditionally described as "solitary", although this term is no longer favored by the researchers who study them. Many are considered "solitary foragers", but many exhibit complex and diverse social organization, often overlapping home ranges, initiating social contact at night, and sharing sleeping sites during the day. Even the mating systems are variable, as seen in woolly lemurs, which live in monogamous breeding pairs. Because of this social diversity among these solitary but social primates, whose level of social interaction is comparable to that of diurnal simians, alternative classifications have been proposed to emphasize their gregarious, dispersed, or solitary nature.
Among extant strepsirrhines, only the diurnal and cathemeral lemurs have evolved to live in multi-male/multi-female groups, comparable to most living simians. This social trait, seen in two extant lemur families (Indriidae and Lemuridae), is thought to have evolved independently. Group sizes are smaller in social lemurs than in simians, and despite the similarities, the community structures differ. Female dominance, which is rare in simians, is fairly common in lemurs. Strepsirrhines spend a considerable amount of time grooming each other (allogrooming). When lemuriform primates groom, they lick the fur and then comb it with their toothcomb. They also use their grooming claw to scratch places they cannot reach with their mouth.
Like New World monkeys, strepsirrhines rely on scent marking for much of their communication. This involves smearing secretions from epidermal scent glands on tree branches, along with urine and feces. In some cases, strepsirrhines may anoint themselves with urine (urine washing). Body postures and gestures may be used, although the long snout, non-mobile lips, and reduced facial innervation restrict the use of facial expressions in strepsirrhines. Short-range calls, long-range calls, and alarm calls are also used. Nocturnal species are more constrained by the lack of light, so their communication systems differ from those of diurnal species, often using long-range calls to claim their territory.
Locomotion
Living strepsirrhines are predominantly arboreal, with only the ring-tailed lemur spending considerable time on the ground. Most species move around quadrupedally (on four legs) in the trees, including five genera of smaller, nocturnal lemurs. Galagos, indriids, sportive lemurs, and bamboo lemurs leap from vertical surfaces, and the indriids are highly specialized for vertical clinging and leaping. Lorises are slow-moving, deliberate climbers.
Analyses of extinct adapiform postcranial skeletons suggest a variety of locomotor behavior. The European adapids Adapis, Palaeolemur, and Leptadapis shared adaptations for slow climbing like the lorises, although they may have been quadrupedal runners like small New World monkeys. Both Notharctus and Smilodectes from North America and Europolemur from Europe exhibit limb proportions and joint surfaces comparable to those of vertical clinging and leaping lemurs, but were not as specialized as indriids for vertical clinging, suggesting that they ran along branches and did not leap as much. The notharctids Cantius and Pronycticebus appear to have been agile arboreal quadrupeds, with adaptations comparable to those of the brown lemurs.
Diet
Primates primarily feed on fruits (including seeds), leaves (including flowers), and animal prey (arthropods, small vertebrates, and eggs). Diets vary markedly between strepsirrhine species. Like other leaf-eating (folivorous) primates, some strepsirrhines can digest cellulose and hemicellulose. Some strepsirrhines, such as the galagos, slender lorises, and angwantibos, are primarily insectivorous. Other species, such as fork-marked lemurs and needle-clawed bushbabies, specialize on tree gum, while indriids, sportive lemurs, and bamboo lemurs are folivores. Many strepsirrhines are frugivores (fruit eaters), and others, like the ring-tailed lemur and mouse lemurs, are omnivores, eating a mix of fruit, leaves, and animal matter.
Among the adapiforms, frugivory seems to have been the most common diet, particularly for medium-sized to large species, such as Cantius, Pelycodus and Cercamonius. Folivory was also common among the medium and large-sized adapiforms, including Smilodectes, Notharctus, Adapis and Leptadapis. Sharp cusps on the teeth of some of the smaller adapiforms, such as Anchomomys and Donrussellia, indicate that they were either partly or primarily insectivorous.
Distribution and habitat
The now-extinct adapiform primates were primarily found across North America, Asia, and Europe, with a few species in Africa. They flourished during the Eocene when those regions were more tropical in nature, and they disappeared when the climate became cooler and drier. Today, the lemuriforms are confined to the tropics, ranging between 28° S and 26° N latitude. Lorises are found both in equatorial Africa and Southeast Asia, while the galagos are limited to the forests and woodlands of sub-Saharan Africa. Lemurs are endemic to Madagascar, although much of their diversity and habitat has been lost due to recent human activity.
As with nearly all primates, strepsirrhines typically reside in tropical rainforests. These habitats allow strepsirrhines and other primates to evolve diverse communities of sympatric species. In the eastern rainforests of Madagascar, as many as 11 or 12 species share the same forests, and prior to human arrival, some forests had nearly double that diversity. Several species of lemur are found in drier, seasonal forests, including the spiny forest on the southern tip of the island, although the lemur communities in these regions are not as rich.
Conservation
Like all other non-human primates, strepsirrhines face an elevated risk of extinction due to human activity, particularly deforestation in tropical regions. Much of their habitat has been converted for human use, such as agriculture and pasture. The threats facing strepsirrhine primates fall into three main categories: habitat destruction, hunting (for bushmeat or traditional medicine), and live capture for export or local exotic pet trade. Although hunting is often prohibited, the laws protecting them are rarely enforced. In Madagascar, local taboos known as fady sometimes help protect lemur species, although some are still hunted for traditional medicine.
In 2012, the International Union for Conservation of Nature (IUCN) announced that lemurs were the "most endangered mammals", due largely to elevated illegal logging and hunting following a political crisis in 2009. In Southeast Asia, slow lorises are threatened by the exotic pet trade and traditional medicine, in addition to habitat destruction. Both lemurs and slow lorises are protected from commercial international trade under CITES Appendix I.
Explanatory notes
| Biology and health sciences | Primates | null |
465008 | https://en.wikipedia.org/wiki/Eddy%20current | Eddy current | In electromagnetism, an eddy current (also called Foucault's current) is a loop of electric current induced within conductors by a changing magnetic field in the conductor according to Faraday's law of induction or by the relative motion of a conductor in a magnetic field. Eddy currents flow in closed loops within conductors, in planes perpendicular to the magnetic field. They can be induced within nearby stationary conductors by a time-varying magnetic field created by an AC electromagnet or transformer, for example, or by relative motion between a magnet and a nearby conductor. The magnitude of the current in a given loop is proportional to the strength of the magnetic field, the area of the loop, and the rate of change of flux, and inversely proportional to the resistivity of the material. When graphed, these circular currents within a piece of metal look vaguely like eddies or whirlpools in a liquid.
By Lenz's law, an eddy current creates a magnetic field that opposes the change in the magnetic field that created it, and thus eddy currents react back on the source of the magnetic field. For example, a nearby conductive surface will exert a drag force on a moving magnet that opposes its motion, due to eddy currents induced in the surface by the moving magnetic field. This effect is employed in eddy current brakes which are used to stop rotating power tools quickly when they are turned off. The current flowing through the resistance of the conductor also dissipates energy as heat in the material. Thus eddy currents are a cause of energy loss in alternating current (AC) inductors, transformers, electric motors and generators, and other AC machinery, requiring special construction such as laminated magnetic cores or ferrite cores to minimize them. Eddy currents are also used to heat objects in induction heating furnaces and equipment, and to detect cracks and flaws in metal parts using eddy-current testing instruments.
Origin of term
The term eddy current comes from analogous currents seen in water in fluid dynamics, causing localised areas of turbulence known as eddies giving rise to persistent vortices. Somewhat analogously, eddy currents can take time to build up and can persist for very long times in conductors due to their inductance.
History
The first person to observe eddy currents was François Arago (1786–1853), who served as President of the Council of Ministers of the Second French Republic during the brief period from 10 May to 24 June 1848 (a position equivalent to the current French Prime Minister), and who was also a mathematician, physicist and astronomer. In 1824 he observed what has been called rotatory magnetism, and that most conductive bodies could be magnetized; these discoveries were completed and explained by Michael Faraday (1791–1867).
In 1834, Emil Lenz stated Lenz's law, which says that the direction of induced current flow in an object will be such that its magnetic field will oppose the change of magnetic flux that caused the current flow. Eddy currents produce a secondary field that cancels a part of the external field and causes some of the external flux to avoid the conductor.
French physicist Léon Foucault (1819–1868) is credited with having discovered eddy currents. In September 1855, he discovered that the force required for the rotation of a copper disc becomes greater when it is made to rotate with its rim between the poles of a magnet, the disc at the same time becoming heated by the eddy current induced in the metal. The first use of eddy current for non-destructive testing occurred in 1879 when David E. Hughes used the principles to conduct metallurgical sorting tests.
Theory
A magnet induces circular electric currents in a metal sheet moving through its magnetic field. The accompanying diagram shows a metal sheet moving to the right with velocity v under a stationary magnet. The magnetic field (in green arrows) from the magnet's north pole passes down through the metal sheet.
Since the metal is moving, the magnetic flux through a given area of the sheet is changing. In particular, the part of the sheet moving toward the edge of the magnet (the left side) experiences an increase in magnetic flux density B. This change in magnetic flux, in turn, induces a circular electromotive force (EMF) in the sheet, in accordance with Faraday's law of induction, exerting a force on the electrons in the sheet and causing a counterclockwise circular current in the sheet. This is an eddy current. Similarly, the part of the sheet moving away from the edge of the magnet (the right side) experiences a decrease in magnetic flux density B, inducing a second eddy current, this time in a clockwise direction. Since the electrons have a negative charge, they move in the opposite direction to the conventional current shown by the arrows.
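The flux-change argument can be made quantitative with Faraday's law, EMF = −dΦ/dt. The sketch below is a minimal illustration with assumed values for field strength, loop width, sheet speed, and loop resistance (none of which come from this article); it estimates the EMF and eddy current induced in a small loop entering the field.

```python
# Minimal motional-EMF estimate for a conducting loop entering the field.
# Faraday's law: EMF = -dPhi/dt; for a loop of width L sweeping into a
# uniform field B at speed v, |dPhi/dt| = B * L * v.
# All numbers below are assumed for illustration only.

B = 0.5        # flux density under the magnet pole, tesla (assumed)
L = 0.02       # width of the loop entering the field, metres (assumed)
v = 1.0        # speed of the sheet, metres per second (assumed)
R_loop = 1e-4  # resistance of the eddy-current path, ohms (assumed)

emf = B * L * v        # magnitude of the induced EMF, volts
I_eddy = emf / R_loop  # induced eddy current, amperes

print(f"EMF = {emf:.3f} V, eddy current = {I_eddy:.0f} A")  # EMF = 0.010 V, eddy current = 100 A
```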
Another equivalent way to understand the origin of eddy currents is to see that the free charge carriers (electrons) in the metal sheet are moving with the sheet to the right, so the magnetic field exerts a sideways Lorentz force on them given by F = qv × B. Since the charge q of the electrons is negative, by the right-hand rule the force is to the right looking in the direction of motion of the sheet. So there is a flow of electrons toward the viewer under the magnet. This divides into two parts, flowing right and left around the magnet outside the magnetic field back to the far side of the magnet in two circular eddies. Since the electrons have a negative charge, the direction of the conventional current arrows shown is in the opposite direction, toward the left under the magnet.
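The Lorentz-force description can likewise be checked numerically. The following sketch evaluates F = qv × B for an electron carried along with the sheet, using an assumed field strength and an arbitrary coordinate convention purely for illustration.

```python
import numpy as np

# Lorentz force on an electron moving with the sheet: F = q * (v x B).
# Field, velocity, and coordinate axes are assumed for illustration only.
q = -1.602e-19                  # electron charge, coulombs
v = np.array([1.0, 0.0, 0.0])   # sheet (and electron) velocity along +x, m/s
B = np.array([0.0, 0.0, -0.5])  # field pointing down through the sheet, tesla

F = q * np.cross(v, B)          # force in newtons, perpendicular to v and B
print(F)  # approximately [0, -8.0e-20, 0]: the electron is pushed along -y
```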
The electrons collide with the metal lattice atoms, exerting a drag force on the sheet proportional to its velocity. The kinetic energy used to overcome this drag is dissipated as heat by the currents flowing through the metal, so the metal gets warm under the magnet. As described by Ampère's circuital law, each of the circular currents in the sheet induces its own magnetic field (marked in blue arrows in the diagram).
Another way to understand the drag is to observe that in accordance with Lenz's law, the induced electromotive force must oppose the change in magnetic flux through the sheet. At the leading edge of the magnet (left side), the anti-clockwise current creates a magnetic field pointing up (as can be shown using the right hand rule), opposing the magnet's field. This causes a repulsive force to develop between the sheet and the leading edge of the magnet. In contrast, at the trailing edge (right side), the clockwise current causes a magnetic field pointed down, in the same direction as the magnet's field, resulting in an attractive force between the sheet and the trailing edge of the magnet. In both cases, the resulting force is not in the direction of motion of the sheet.
Properties
Eddy currents in conductors of non-zero resistivity generate heat as well as electromagnetic forces. The heat can be used for induction heating. The electromagnetic forces can be used for levitation, creating movement, or to give a strong braking effect. Eddy currents can also have undesirable effects, for instance power loss in transformers. In this application, they are minimized with thin plates, by lamination of conductors or other details of conductor shape.
Self-induced eddy currents are responsible for the skin effect in conductors. The latter can be used for non-destructive testing of materials for geometry features, like micro-cracks. A similar effect is the proximity effect, which is caused by externally induced eddy currents.
An object or part of an object experiences steady field intensity and direction where there is still relative motion of the field and the object (for example in the center of the field in the diagram), or unsteady fields where the currents cannot circulate due to the geometry of the conductor. In these situations charges collect on or within the object and these charges then produce static electric potentials that oppose any further current. Currents may be initially associated with the creation of static potentials, but these may be transitory and small.
Eddy currents generate resistive losses that transform some forms of energy, such as kinetic energy, into heat. This Joule heating reduces efficiency of iron-core transformers and electric motors and other devices that use changing magnetic fields. Eddy currents are minimized in these devices by selecting magnetic core materials that have low electrical conductivity (e.g., ferrites or iron powder mixed with resin) or by using thin sheets of magnetic material, known as laminations. Electrons cannot cross the insulating gap between the laminations and so are unable to circulate on wide arcs. Charges gather at the lamination boundaries, in a process analogous to the Hall effect, producing electric fields that oppose any further accumulation of charge and hence suppressing the eddy currents. The shorter the distance between adjacent laminations (i.e., the greater the number of laminations per unit area, perpendicular to the applied field), the greater the suppression of eddy currents.
The conversion of input energy to heat is not always undesirable, however, as there are some practical applications. One is in the brakes of some trains known as eddy current brakes. During braking, the metal wheels are exposed to a magnetic field from an electromagnet, generating eddy currents in the wheels. This eddy current is formed by the movement of the wheels. So, by Lenz's law, the magnetic field formed by the eddy current will oppose its cause. Thus the wheel will face a force opposing the initial movement of the wheel. The faster the wheels are spinning, the stronger the effect, meaning that as the train slows the braking force is reduced, producing a smooth stopping motion.
Induction heating makes use of eddy currents to provide heating of metal objects.
Power dissipation of eddy currents
Under certain assumptions (uniform material, uniform magnetic field, no skin effect, etc.) the power lost due to eddy currents per unit mass for a thin sheet or wire can be calculated from the following equation:
P = π²Bp²d²f² / (6kρD),
where
P is the power lost per unit mass (W/kg),
Bp is the peak magnetic field (T),
d is the thickness of the sheet or diameter of the wire (m),
f is the frequency (Hz),
k is a constant equal to 1 for a thin sheet and 2 for a thin wire,
ρ is the resistivity of the material (Ω m), and
D is the density of the material (kg/m3).
This equation is valid only under the so-called quasi-static conditions, where the frequency of magnetisation does not result in the skin effect; that is, the electromagnetic wave fully penetrates the material.
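As a rough numerical check, the loss formula above can be evaluated directly. The sketch below assumes the thin-sheet form P = π²Bp²d²f²/(6kρD); the lamination thickness, flux density and material constants are illustrative assumptions, not values given in the text.

    # Minimal sketch: eddy-current loss per unit mass for a thin sheet,
    # assuming the quasi-static thin-sheet formula quoted above.
    import math

    def eddy_loss_per_kg(b_peak, thickness, freq, resistivity, density, k=1):
        """P = pi^2 * Bp^2 * d^2 * f^2 / (6 * k * rho * D), in W/kg."""
        return (math.pi**2 * b_peak**2 * thickness**2 * freq**2) / (6 * k * resistivity * density)

    # Illustrative values (assumed, not from the article): a 0.5 mm silicon-steel
    # lamination at 1.5 T peak flux density and 50 Hz mains frequency.
    p = eddy_loss_per_kg(b_peak=1.5, thickness=0.5e-3, freq=50.0,
                         resistivity=4.7e-7, density=7650.0)
    print(f"Eddy-current loss: {p:.2f} W/kg")

Halving the lamination thickness in this sketch cuts the loss by a factor of four, which is the reason thin laminations are used in transformer and motor cores.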
Skin effect
In very fast-changing fields, the magnetic field does not penetrate completely into the interior of the material. This skin effect renders the above equation invalid. However, increasing the frequency of a field of the same magnitude will always increase eddy currents, even with non-uniform field penetration.
The penetration depth for a good conductor can be calculated from the following equation:
δ = 1/√(πfμσ),
where δ is the penetration depth (m), f is the frequency (Hz), μ is the magnetic permeability of the material (H/m), and σ is the electrical conductivity of the material (S/m).
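The penetration-depth formula is easy to evaluate for common conductors. The sketch below assumes the good-conductor form δ = 1/√(πfμσ); the copper conductivity used is a standard handbook value, not quoted in the text.

    # Minimal sketch: skin depth of a good conductor, delta = 1/sqrt(pi*f*mu*sigma).
    import math

    def skin_depth(freq, permeability, conductivity):
        return 1.0 / math.sqrt(math.pi * freq * permeability * conductivity)

    MU_0 = 4 * math.pi * 1e-7      # vacuum permeability (H/m)
    SIGMA_CU = 5.8e7               # conductivity of copper (S/m), handbook value

    for f in (50, 1e3, 1e6):       # mains, audio and radio frequencies
        print(f"{f:>10.0f} Hz -> {skin_depth(f, MU_0, SIGMA_CU) * 1e3:.3f} mm")

At 50 Hz the depth for copper comes out near 9 mm, shrinking to well under a tenth of a millimetre at 1 MHz, which is why high-frequency conductors are often made of thin strands or tubes.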
Diffusion equation
The derivation of a useful equation for modelling the effect of eddy currents in a material starts with the differential, magnetostatic form of Ampère's Law, providing an expression for the magnetizing field H surrounding a current density J:
∇ × H = J.
Taking the curl on both sides of this equation and then using a common vector calculus identity for the curl of the curl results in
∇(∇ · H) − ∇²H = ∇ × J.
From Gauss's law for magnetism, ∇ · H = 0, so
−∇²H = ∇ × J.
Using Ohm's law, J = σE, which relates current density J to electric field E in terms of a material's conductivity σ, and assuming isotropic homogeneous conductivity, the equation can be written as
−∇²H = σ ∇ × E.
Using the differential form of Faraday's law, ∇ × E = −∂B/∂t, this gives
∇²H = σ ∂B/∂t.
By definition, B = μ0(H + M), where M is the magnetization of the material and μ0 is the vacuum permeability. The diffusion equation therefore is
∂M/∂t = (1/(μ0σ)) ∇²H − ∂H/∂t.
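For a uniform non-magnetic conductor (M = 0, so B = μ0H) the result above reduces to an ordinary diffusion equation, ∂H/∂t = (1/(μ0σ)) ∇²H. The following sketch integrates the one-dimensional form with an explicit finite-difference step; the slab thickness, conductivity and applied field are assumed purely for illustration.

    # Minimal sketch: 1-D magnetic diffusion dH/dt = (1/(mu0*sigma)) * d2H/dx2
    # in a conducting slab, with a step field suddenly applied at both surfaces.
    import math

    MU0 = 4 * math.pi * 1e-7
    sigma = 5.8e7                 # conductivity (S/m), assumed copper-like
    thickness = 0.01              # slab thickness (m), assumed
    n = 51                        # grid points across the slab
    dx = thickness / (n - 1)
    eta = 1.0 / (MU0 * sigma)     # magnetic diffusivity (m^2/s)
    dt = 0.4 * dx * dx / eta      # below the explicit stability limit 0.5*dx^2/eta

    H = [0.0] * n                 # field inside the slab, initially zero
    H[0] = H[-1] = 1.0            # unit field applied at both surfaces

    for _ in range(2000):         # march in time; the field diffuses inward
        new = H[:]
        for i in range(1, n - 1):
            new[i] = H[i] + eta * dt * (H[i - 1] - 2 * H[i] + H[i + 1]) / (dx * dx)
        new[0] = new[-1] = 1.0
        H = new

    print("field at slab centre after diffusion:", round(H[n // 2], 3))

The centre of the slab only approaches the applied field after a time of order μ0σL², the magnetic diffusion time, which is the same physics that limits how quickly flux can change inside an unlaminated core.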
Applications
Electromagnetic braking
Eddy current brakes use the drag force created by eddy currents as a brake to slow or stop moving objects. Since there is no contact with a brake shoe or drum, there is no mechanical wear. However, an eddy current brake cannot provide a "holding" torque and so may be used in combination with mechanical brakes, for example, on overhead cranes. Another application is on some roller coasters, where heavy copper plates extending from the car are moved between pairs of very strong permanent magnets. Electrical resistance within the plates causes a dragging effect analogous to friction, which dissipates the kinetic energy of the car. The same technique is used in electromagnetic brakes in railroad cars and to quickly stop the blades in power tools such as circular saws. Using electromagnets, as opposed to permanent magnets, the strength of the magnetic field can be adjusted and so the magnitude of braking effect changed.
Repulsive effects and levitation
In a varying magnetic field, the induced currents exhibit diamagnetic-like repulsion effects. A conductive object will experience a repulsion force. This can lift objects against gravity, though with continual power input to replace the energy dissipated by the eddy currents. An example application is separation of aluminum cans from other metals in an eddy current separator. Ferrous metals cling to the magnet, and aluminum (and other non-ferrous conductors) are forced away from the magnet; this can separate a waste stream into ferrous and non-ferrous scrap metal.
With a very strong handheld magnet, such as those made from neodymium, one can easily observe a very similar effect by rapidly sweeping the magnet over a coin with only a small separation. Depending on the strength of the magnet, identity of the coin, and separation between the magnet and coin, one may induce the coin to be pushed slightly ahead of the magnet – even if the coin contains no magnetic elements, such as the US penny. Another example involves dropping a strong magnet down a tube of copper – the magnet falls at a dramatically slow pace.
In a perfect conductor with no resistance, surface eddy currents exactly cancel the field inside the conductor, so no magnetic field penetrates the conductor. Since no energy is lost in resistance, eddy currents created when a magnet is brought near the conductor persist even after the magnet is stationary, and can exactly balance the force of gravity, allowing magnetic levitation. Superconductors also exhibit a separate inherently quantum mechanical phenomenon called the Meissner effect in which any magnetic field lines present in the material when it becomes superconducting are expelled, thus the magnetic field in a superconductor is always zero.
Using electromagnets with electronic switching comparable to electronic speed control it is possible to generate electromagnetic fields moving in an arbitrary direction. As described in the section above about eddy current brakes, a non-ferromagnetic conductor surface tends to rest within this moving field. When however this field is moving, a vehicle can be levitated and propelled. This is comparable to a maglev but is not bound to a rail.
Identification of metals
In some coin-operated vending machines, eddy currents are used to detect counterfeit coins, or slugs. The coin rolls past a stationary magnet, and eddy currents slow its speed. The strength of the eddy currents, and thus the retardation, depends on the conductivity of the coin's metal. Slugs are slowed to a different degree than genuine coins, and this is used to send them into the rejection slot.
Vibration and position sensing
Eddy currents are used in certain types of proximity sensors to observe the vibration and position of rotating shafts within their bearings. This technology was originally pioneered in the 1930s by researchers at General Electric using vacuum tube circuitry. In the late 1950s, solid-state versions were developed by Donald E. Bently at Bently Nevada Corporation. These sensors are extremely sensitive to very small displacements making them well suited to observe the minute vibrations (on the order of several thousandths of an inch) in modern turbomachinery. A typical proximity sensor used for vibration monitoring has a scale factor of 200 mV/mil. Widespread use of such sensors in turbomachinery has led to development of industry standards that prescribe their use and application. Examples of such standards are American Petroleum Institute (API) Standard 670 and ISO 7919.
A Ferraris acceleration sensor, also called a Ferraris sensor, is a contactless sensor that uses eddy currents to measure relative acceleration.
Structural testing
Eddy current techniques are commonly used for the nondestructive examination (NDE) and condition monitoring of a large variety of metallic structures, including heat exchanger tubes, aircraft fuselage, and aircraft structural components.
Skin effects
Eddy currents are the root cause of the skin effect in conductors carrying alternating current.
Similarly, in magnetic materials of finite conductivity, eddy currents cause the confinement of the majority of the magnetic fields to only a couple skin depths of the surface of the material. This effect limits the flux linkage in inductors and transformers having magnetic cores.
Other applications
Rock climbing auto belays
Zip line brakes
Free fall devices
Metal detectors
Conductivity meters for non-magnetic metals
Eddy current adjustable-speed drives
Eddy-current testing
Eddy current brake
Electricity meters (electromechanical induction meters)
Induction heating
Cooking (induction cooking)
Proximity sensor (displacement sensors)
Vending machines (detection of coins)
Coating thickness measurements
Sheet resistance measurement
Eddy current separator for metal separation
Mechanical speedometers
Safety hazard and defect detection applications
Magnetic damping
| Physical sciences | Electrodynamics | null |
6193459 | https://en.wikipedia.org/wiki/Boninite | Boninite | Boninite is an extrusive rock high in both magnesium and silica, thought to be usually formed in fore-arc environments, typically during the early stages of subduction. The rock is named for its occurrence in the Izu-Bonin arc south of Japan. It is characterized by extreme depletion in incompatible trace elements that are not fluid mobile (e.g., the heavy rare-earth elements plus Nb, Ta, Hf) but variable enrichment in the fluid mobile elements (e.g., Rb, Ba, K). They are found almost exclusively in the fore-arc of primitive island arcs (that is, closer to the ocean trench) and in ophiolite complexes thought to represent former fore-arc settings or at least formed above a subduction zone.
Boninite is considered to be a primitive andesite derived from melting of metasomatised mantle.
Similar Archean intrusive rocks, called sanukitoids, have been reported in the rocks of several early cratons. Archean boninite lavas are also reported.
Petrology
Boninite typically consists of phenocrysts of pyroxenes and olivine in a crystallite-rich glassy matrix.
Geochemistry
Boninite is defined by the following criteria (see the sketch after this list):
high magnesium content (MgO = >8%)
low titanium (TiO2 < 0.5%)
silica content is 52–63%
high Mg/(Mg + Fe) (0.55–0.83)
Mantle-normal compatible elements Ni = 70–450 parts per million, Cr = 200–1800 ppm
Ba, Sr, LREE enrichments compared to tholeiite
Characteristic Ti/Zr ratios (23–63) and La/Yb ratios (0.6–4.7)
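A minimal sketch of how the first few numerical criteria above might be screened for a whole-rock analysis; the thresholds are taken from the list, but the function and the example analysis are purely illustrative and do not represent a formal classification scheme.

    # Minimal sketch: screen a whole-rock analysis (wt% oxides, molar Mg#) against
    # the principal boninite criteria listed above. Illustrative only.
    def looks_like_boninite(mgo, tio2, sio2, mg_number):
        return (mgo > 8.0                      # high MgO
                and tio2 < 0.5                 # low TiO2
                and 52.0 <= sio2 <= 63.0       # intermediate silica
                and 0.55 <= mg_number <= 0.83)  # Mg/(Mg + Fe)

    # Hypothetical analysis, not taken from the article:
    sample = dict(mgo=11.2, tio2=0.3, sio2=56.4, mg_number=0.68)
    print("boninite-like:", looks_like_boninite(**sample))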
Genesis
Most boninite magma is formed by second stage melting in forearcs via hydration of previously depleted mantle within the mantle wedge above a subducted slab, causing further melting of the already depleted peridotite. A forearc environment is ideal for boninite genesis, but other tectonic environments, such as backarcs, might be able to form boninite. The content of titanium (an incompatible element within melting of peridotite) is extremely low because previous melting events had removed most of the incompatible elements from the residual mantle source. The first stage melting typically forms island arc basalt. The second melting event is partly made possible by hydrous fluids being added to the shallow hot depleted mantle, leading to the enrichment in large-ion lithophile elements in the boninite.
Boninite attains its high magnesium and very low titanium content via high degrees of partial melting within the convecting mantle wedge. The high degrees of partial melting are caused by the high water content of the mantle. With the addition of slab-derived volatiles, and incompatible elements derived from the release of low-volume partial melts of the subducted slab, the depleted mantle in the mantle wedge undergoes melting.
Evidence for variable enrichment or depletion of incompatible elements suggests that boninites are derived from refractory peridotite which has been metasomatically enriched in LREE, strontium, barium, and alkalis. Enrichment in Ba, Sr and alkalis may result from a component derived from subducted oceanic crust. This is envisaged as contamination from the underlying subducted slab, either as a sedimentary source or as melts derived from the dehydrating slab.
Boninites can be derived from the peridotite residue of earlier arc tholeiite generation which is metasomatically enriched in LREE before boninite volcanism, or arc tholeiites and boninites can be derived from a variably depleted peridotite source which has been variably metasomatised in LREE.
Areas of fertile peridotite would yield tholeiites, and refractory areas would yield boninites.
Examples
| Physical sciences | Igneous rocks | Earth science |
6194438 | https://en.wikipedia.org/wiki/U-shaped%20valley | U-shaped valley | U-shaped valleys, also called trough valleys or glacial troughs, are formed by the process of glaciation. They are characteristic of mountain glaciation in particular. They have a characteristic U shape in cross-section, with steep, straight sides and a flat or rounded bottom (by contrast, valleys carved by rivers tend to be V-shaped in cross-section). Glaciated valleys are formed when a glacier travels across and down a slope, carving the valley by the action of scouring. When the ice recedes or thaws, the valley remains, often littered with small boulders that were transported within the ice, called glacial till or glacial erratic.
Examples of U-shaped valleys are found in mountainous regions throughout the world including the Andes, Alps, Caucasus Mountains, Himalaya, Rocky Mountains, New Zealand and the Scandinavian Mountains. They are found also in other major European mountains including the Carpathian Mountains, the Pyrenees, the Rila and Pirin mountains in Bulgaria, and the Scottish Highlands. A classic glacial trough is in Glacier National Park in Montana, USA in which the St. Mary River runs. Another well-known U-shaped valley is the Nant Ffrancon valley in Snowdonia, Wales.
When a U-shaped valley extends into saltwater, becoming an inlet of the sea, it is called a fjord, from the Norwegian word for these features that are common in Norway. Outside of Norway, a classic U-shaped valley that is also a fjord is the Western Brook Pond Fjord in Gros Morne National Park in Newfoundland, Canada.
Formation
Shape
Formation of a U-shaped valley happens over geologic time, meaning not during a human's lifespan. It can take anywhere between 10,000 and 100,000 years for a V-shaped valley to be carved into a U-shaped valley. These valleys can be several thousand feet deep and tens of miles long. Glaciers will spread out evenly in open areas, but tend to carve deep into the ground when confined to a valley. Ice thickness is a major contributing factor to valley depth and carving rates. As a glacier moves downhill through a valley, usually with a stream running through it, the shape of the valley is transformed. As the ice melts and retreats, the valley is left with very steep sides and a wide, flat floor. This parabolic shape is caused by glacial erosion removing the contact surfaces with greatest resistance to flow, and the resulting section minimises friction. There are two main variations of this U-shape. The first is called the Rocky Mountain model and it is attributed to alpine glacial valleys, showing an overall deepening effect on the valley. The second variation is referred to as the Patagonia-Antarctica model, attributed to continental ice sheets and displaying an overall widening effect on its surroundings.
Valley floor
The floors of these glacial valleys are where the most evidence can be found regarding glaciation cycles. For the most part, the valley floor is wide and flat, but there are various glacial features that signify periods of ice transgression and regression. The valley can have various steps, known as valley steps, and over-deepenings anywhere from ten to hundreds of meters deep. These then fill in with sediments to create plains or water to create lakes, sometimes referred to as "string-of-pearl" or ribbon lakes. Such water filled U-valley basins are also known as "fjord-lakes" or "valley-lakes" (Norwegian: fjordsjø or dalsjø). Gjende and Bandak lakes in Norway are examples of fjord-lakes. Some of these fjord-lakes are very deep for instance Mjøsa (453 meters) and Hornindalsvatnet (514 m). The longitudinal profile of a U-shaped, glaciated valley is often stepwise where flat basins are interrupted by thresholds. Rivers often dig a V-shaped valley or gorge through the threshold.
Surrounding smaller tributary valleys will often join the main valleys during glaciation periods, leaving behind features known as hanging valleys high in the trough walls after the ice melts.
After deglaciation, snow and ice melt from the mountain tops can create streams and rivers in U-shaped valleys. These are referred to as misfit streams. The streams that form in hanging valleys create waterfalls that flow into the main valley branch. Glacial valleys may also have natural, often dam-like, structures within them, called moraines. They are created due to excess sediment and glacial till moved and deposited by the glacier.
In volcanic mountain ranges, such as the Principal Cordillera of the Andes, glacial valley floors may be covered by thick lava flows.
Trough end
A glacial trough or glaciated mountain valley often ends in an abrupt head known as the 'trough end' or 'trough head'. This may have almost sheer rock walls and spectacular waterfalls. They are believed to have been formed where a number of small glaciers merge to produce a much larger glacier. Examples include: Warnscale Bottom in the Lake District, Yosemite Valley, and the Rottal and valleys in Switzerland.
Marine troughs
Glacial troughs also exist as submarine valleys on continental shelves, such as the Laurentian Channel. These geomorphic features significantly influence sediment distribution and biological communities through their modification of current patterns.
History
Geologists did not always believe that glaciers were responsible for U-shaped valleys and other glacial erosional features. Ice is quite soft and it was unbelievable to many that it could be responsible for the severe carving of bedrock characteristic of glacial erosion. German geologist Penck and American geologist Davis were vocal supporters of this unprecedented glacial erosion.
Progress was made in the 1970s and 1980s on the possible mechanisms of glacial erosion and U-shaped valleys via models proposed by various scientists. Numerical models have been created to explain the phenomenon of carving U-shaped valleys.
| Physical sciences | Glacial landforms | Earth science |
6196410 | https://en.wikipedia.org/wiki/Koel | Koel | The true koels, Eudynamys, are a genus of cuckoos from Asia, Australia and the Pacific. They are large sexually dimorphic cuckoos that eat fruits and insects and have loud distinctive calls. They are brood parasites, laying their eggs in the nests of other species.
Taxonomy
The genus Eudynamys was introduced in 1827 by the English naturalists Nicholas Vigors and Thomas Horsfield. The name combines the Ancient Greek eu meaning "fine" with dunamis meaning "power" or "strength". The type species was designated as the Pacific koel by George Robert Gray in 1840.
A molecular genetic study by Sorenson and Payne (2005) found that the closest relative of Eudynamys is the dwarf koel (Microdynamis parva), and beyond that the thick-billed cuckoo (Pachycoccyx audeberti). They found that the long-tailed cuckoo (Urodynamis taitensis) of New Zealand and the Pacific, which had earlier been placed in Eudynamys as E. taitensis and sometimes called the long-tailed koel, was more distantly related, along with other members of the tribe Cuculini, including the white-crowned cuckoo (Cacomantis leucolophus), also known as the white-crowned koel. However, not all the evidence for the relationships was very strong and further research was required.
Species
The taxonomy of the common koel complex is difficult and remains a matter of dispute. Some recognize only a single species (common koel, Eudynamys scolopaceus, with melanorhynchus and orientalis as subspecies); some recognize two species (common koel, Eudynamys scolopaceus, with orientalis as a subspecies, and black-billed koel, Eudynamys melanorhynchus); and others recognize three species. Common koel may therefore refer to:
Sexual dimorphism
The female koel's plumage is banded and speckled in shades of brown. The evolutionary function is to camouflage her approach to her host's nest and enable her brood parasitism to go undetected. Noisy miners and wattlebirds have been observed feeding koel fledglings. The male's sexually dimorphic plumage is black, like a raven. They are of a similar size to ravens and are known to have territories that overlap with ravens. They have also been observed being mobbed by noisy miners and wattlebirds in the same way as ravens (egg predators) are. The male koel may be a raven mimic enabling the female to approach the host's nest, either deliberately or opportunistically, while the host flock is distracted by mobbing the male.
| Biology and health sciences | Cuculiformes and relatives | Animals |
1235913 | https://en.wikipedia.org/wiki/Radiative%20zone | Radiative zone | A radiative zone is a layer of a star's interior where energy is primarily transported toward the exterior by means of radiative diffusion and thermal conduction, rather than by convection. Energy travels through the radiative zone in the form of electromagnetic radiation as photons.
Matter in a radiative zone is so dense that photons can travel only a short distance before they are absorbed or scattered by another particle, gradually shifting to longer wavelength as they do so. For this reason, it takes an average of 171,000 years for gamma rays from the core of the Sun to leave the radiative zone. Over this range, the temperature of the plasma drops from 15 million K near the core down to 1.5 million K at the base of the convection zone.
Temperature gradient
In a radiative zone, the temperature gradient—the change in temperature (T) as a function of radius (r)—is given by:
dT(r)/dr = −3κ(r)ρ(r)L(r) / (64πr²σBT(r)³),
where κ(r) is the opacity, ρ(r) is the matter density, L(r) is the luminosity, and σB is the Stefan–Boltzmann constant. Hence the opacity (κ) and radiation flux (L) within a given layer of a star are important factors in determining how effective radiative diffusion is at transporting energy. A high opacity or high luminosity can cause a high temperature gradient, which results from a slow flow of energy. Those layers where convection is more effective than radiative diffusion at transporting energy, thereby creating a lower temperature gradient, will become convection zones.
This relation can be derived by integrating Fick's first law over the surface of some radius r, giving the total outgoing energy flux, which is equal to the luminosity by conservation of energy:
L(r) = −4πr²D ∂u/∂r,
where D is the photon diffusion coefficient and u is the energy density.
The energy density is related to the temperature by the Stefan–Boltzmann law by:
u = (4σB/c) T⁴.
Finally, as in the elementary theory of the diffusion coefficient in gases, the diffusion coefficient D approximately satisfies:
D = (1/3) c λ,
where λ is the photon mean free path, λ = 1/(κρ), the reciprocal of the opacity κ times the density ρ.
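As a rough numerical illustration of the gradient formula above, the sketch below evaluates dT/dr deep inside a Sun-like star; the opacity, density and temperature used are order-of-magnitude assumptions, not values stated in the article.

    # Minimal sketch: radiative temperature gradient
    # dT/dr = -3*kappa*rho*L / (64*pi*r^2*sigma_B*T^3)
    import math

    SIGMA_B = 5.670374e-8      # Stefan-Boltzmann constant (W m^-2 K^-4)
    R_SUN = 6.957e8            # solar radius (m)
    L_SUN = 3.828e26           # solar luminosity (W)

    # Order-of-magnitude interior values around r ~ 0.5 R_sun (assumed):
    kappa = 0.1                # opacity (m^2/kg)
    rho = 1.0e3                # density (kg/m^3)
    T = 4.0e6                  # temperature (K)
    r = 0.5 * R_SUN

    dT_dr = -3 * kappa * rho * L_SUN / (64 * math.pi * r**2 * SIGMA_B * T**3)
    print(f"dT/dr ~ {dT_dr:.2e} K/m")   # order-of-magnitude gradient for these assumed values

The gradient comes out at only a small fraction of a kelvin per metre, yet over the hundreds of thousands of kilometres of the radiative zone it accounts for the drop from millions of kelvin near the core to the much cooler base of the convection zone.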
Eddington stellar model
Eddington assumed the pressure P in a star is a combination of an ideal gas pressure and radiation pressure, and that there is a constant ratio, β, of the gas pressure to the total pressure.
Therefore, by the ideal gas law:
βP = (kB/μ) ρT,
where kB is the Boltzmann constant and μ the mass of a single atom (actually, an ion since matter is ionized; usually a hydrogen ion, i.e. a proton).
While the radiation pressure satisfies:
(1 − β)P = (4σB/3c) T⁴,
so that T⁴ is proportional to P throughout the star.
This gives the polytropic equation (with n=3):
P = [3c(1 − β)/(4σB)]^(1/3) [kB/(βμ)]^(4/3) ρ^(4/3).
Using the hydrostatic equilibrium equation, the second equation becomes equivalent to:
(16σB/3c) T³ dT/dr = −(1 − β) GM(r)ρ/r².
For energy transmission by radiation only, we may use the equation for the temperature gradient (presented in the previous subsection) for the right-hand side and get
κ(r)L(r) / (4πcGM(r)) = 1 − β.
Thus the Eddington model is a good approximation in the radiative zone as long as κL/M is approximately constant, which is often the case.
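A short numerical sketch of the Eddington ansatz: for an assumed constant β, eliminating T between the gas-pressure and radiation-pressure relations gives the n = 3 polytrope P = K·ρ^(4/3), with K fixed by β and the mean particle mass. The β value and mean molecular mass below are assumptions for illustration only.

    # Minimal sketch: Eddington standard model, P_gas = beta*P, P_rad = (1-beta)*P.
    # Eliminating T gives P = K * rho**(4/3), with K depending only on beta and mu.
    K_B = 1.380649e-23            # Boltzmann constant (J/K)
    SIGMA_B = 5.670374e-8         # Stefan-Boltzmann constant (W m^-2 K^-4)
    C = 2.99792458e8              # speed of light (m/s)
    A_RAD = 4.0 * SIGMA_B / C     # radiation constant a = 4*sigma_B/c
    M_H = 1.6726e-27              # proton mass (kg)

    def polytropic_K(beta, mu=0.6 * M_H):   # mu ~ 0.6 m_H assumed (ionized solar mix)
        """K such that P = K * rho**(4/3) in the Eddington model."""
        return ((3.0 * (1.0 - beta) / A_RAD) ** (1.0 / 3.0)
                * (K_B / (beta * mu)) ** (4.0 / 3.0))

    beta = 0.9996                 # assumed gas-pressure fraction, Sun-like
    K = polytropic_K(beta)
    rho = 1.0e3                   # kg/m^3, illustrative density
    print(f"K = {K:.3e} SI,  P(rho) = {K * rho ** (4.0 / 3.0):.3e} Pa")

With a Sun-like β the resulting K, applied to central densities of order 1.5e5 kg/m³, reproduces a central pressure of the right order of magnitude, which is why the n = 3 polytrope is a useful stand-in for the radiative interior.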
Stability against convection
The radiation zone is stable against formation of convection cells if the density gradient is high enough, so that an element moving upwards has its density lowered (due to adiabatic expansion) less than the drop in density of its surrounding, so that it will experience a net buoyancy force downwards.
The criterion for this is:
d ln ρ / d ln P > 1/γ,
where P is the pressure, ρ the density and γ is the heat capacity ratio.
For a homogenic ideal gas, this is equivalent to:
d ln T / d ln P < 1 − 1/γ.
We can calculate the left-hand side by dividing the equation for the temperature gradient by the equation relating the pressure gradient to the gravity acceleration g:
dP/dr = −gρ = −(GM(r)/r²) ρ,
M(r) being the mass within the sphere of radius r, which is approximately the whole star mass for large enough r.
This gives the following form of the Schwarzschild criterion for stability against convection:
3κ(r)L(r)P(r) / (64πσBGM(r)T(r)⁴) < 1 − 1/γ.
Note that for non-homogenic gas this criterion should be replaced by the Ledoux criterion, because the density gradient now also depends on concentration gradients.
For a polytrope solution with n=3 (as in the Eddington stellar model for the radiative zone), P is proportional to T⁴ and the left-hand side is constant and equals 1/4, smaller than the ideal monatomic gas approximation for the right-hand side, which gives 1 − 1/γ = 2/5. This explains the stability of the radiative zone against convection.
However, at a large enough radius, the opacity κ increases due to the decrease in temperature (by Kramers' opacity law), and possibly also due to a smaller degree of ionization in the lower shells of heavy element ions. This leads to a violation of the stability criterion and to the creation of the convection zone; in the Sun, opacity increases by more than a factor of ten across the radiative zone, before the transition to the convection zone happens.
Additional situations in which this stability criterion is not met are:
Large values of L(r)/M(r), which may happen towards the star core's center, where M(r) is small, if nuclear energy production is strongly peaked at the center, as in relatively massive stars. Thus such stars have a convective core.
A smaller value of γ. For semi-ionized gas, where approximately half of the atoms are ionized, the effective value of γ drops to 6/5, giving 1 − 1/γ = 1/6, smaller than the radiative value of 1/4 on the left-hand side. Therefore, all stars have shallow convection zones near their surfaces, at low enough temperatures where ionization is only partial.
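A short sketch of the criterion in the form derived above: radiation-dominated transport with P ∝ T⁴ gives d ln T/d ln P = 1/4, which is compared against 1 − 1/γ for the two values of the adiabatic index discussed in the text (γ = 5/3 for a monatomic ideal gas, γ = 6/5 for a semi-ionized gas).

    # Minimal sketch: Schwarzschild criterion, stable against convection when
    # d(ln T)/d(ln P) < (gamma - 1)/gamma.
    def stable_against_convection(dlnT_dlnP, gamma):
        return dlnT_dlnP < (gamma - 1.0) / gamma

    dlnT_dlnP = 0.25                     # Eddington radiative zone: P proportional to T^4
    for gamma in (5.0 / 3.0, 6.0 / 5.0):  # monatomic ideal gas vs. semi-ionized gas
        rhs = (gamma - 1.0) / gamma
        verdict = "stable" if stable_against_convection(dlnT_dlnP, gamma) else "convective"
        print(f"gamma = {gamma:.3f}: (gamma-1)/gamma = {rhs:.3f} -> {verdict}")

For γ = 5/3 the right-hand side is 2/5 and the layer is stable; for γ = 6/5 it drops to 1/6 and convection sets in, matching the two cases described above.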
Main sequence stars
For main sequence stars—those stars that are generating energy through the thermonuclear fusion of hydrogen at the core, the presence and location of radiative regions depends on the star's mass. Main sequence stars below about 0.3 solar masses are entirely convective, meaning they do not have a radiative zone. From 0.3 to 1.2 solar masses, the region around the stellar core is a radiative zone, separated from the overlying convection zone by the tachocline. The radius of the radiative zone increases monotonically with mass, with stars around 1.2 solar masses being almost entirely radiative. Above 1.2 solar masses, the core region becomes a convection zone and the overlying region is a radiative zone, with the amount of mass within the convective zone increasing with the mass of the star.
The Sun
In the Sun, the region between the solar core at 0.2 of the Sun's radius and the outer convection zone at 0.71 of the Sun's radius is referred to as the radiation zone, although the core is also a radiative region. The convection zone and the radiative zone are divided by the tachocline, another part of the Sun.
| Physical sciences | Stellar astronomy | Astronomy |
1235959 | https://en.wikipedia.org/wiki/Convection%20zone | Convection zone | A convection zone, convective zone or convective region of a star is a layer which is unstable due to convection. Energy is primarily or partially transported by convection in such a region. In a radiation zone, energy is transported by radiation and conduction.
Stellar convection consists of mass movement of plasma within the star which usually forms a circular convection current with the heated plasma ascending and the cooled plasma descending.
The Schwarzschild criterion expresses the conditions under which a region of a star is unstable to convection. A parcel of gas that rises slightly will find itself in an environment of lower pressure than the one it came from. As a result, the parcel will expand and cool. If the rising parcel cools to a lower temperature than its new surroundings, so that it has a higher density than the surrounding gas, then its lack of buoyancy will cause it to sink back to where it came from. However, if the temperature gradient is steep enough (i.e. the temperature changes rapidly with distance from the center of the star), or if the gas has a very high heat capacity (i.e. its temperature changes relatively slowly as it expands) then the rising parcel of gas will remain warmer and less dense than its new surroundings even after expanding and cooling. Its buoyancy will then cause it to continue to rise. The region of the star in which this happens is the convection zone.
Main sequence stars
In main sequence stars more than 1.3 times the mass of the Sun, the high core temperature causes nuclear fusion of hydrogen into helium to occur predominantly via the carbon-nitrogen-oxygen (CNO) cycle instead of the less temperature-sensitive proton–proton chain. The high temperature gradient in the core region forms a convection zone that slowly mixes the hydrogen fuel with the helium product. The core convection zone of these stars is overlaid by a radiation zone that is in thermal equilibrium and undergoes little or no mixing. In the most massive stars, the convection zone may reach all the way from the core to the surface.
In main sequence stars of less than about 1.3 solar masses, the outer envelope of the star contains a region where partial ionization of hydrogen and helium raises the heat capacity. The relatively low temperature in this region simultaneously causes the opacity due to heavier elements to be high enough to produce a steep temperature gradient. This combination of circumstances produces an outer convection zone, the top of which is visible in the Sun as solar granulation. Low-mass main-sequence stars, such as red dwarfs below 0.35 solar masses, as well as pre-main sequence stars on the Hayashi track, are convective throughout and do not contain a radiation zone.
In main sequence stars similar to the Sun, which have a radiative core and convective envelope, the transition region between the convection zone and the radiation zone is called the tachocline.
Red giants
In red giant stars, and particularly during the asymptotic giant branch phase, the surface convection zone varies in depth during the phases of shell burning. This causes dredge-up events, short-lived very deep convection zones that transport fusion products to the surface of the star.
| Physical sciences | Stellar astronomy | Astronomy |
1236041 | https://en.wikipedia.org/wiki/Cucumis%20metuliferus | Cucumis metuliferus | Cucumis metuliferus commonly called the African horned cucumber (shortened to horned cucumber), horned melon, spiked melon, jelly melon, or kiwano, is an annual vine in the cucumber and melon family Cucurbitaceae. Its fruit has horn-like spines, hence the name "horned melon". The ripe fruit has orange skin and lime-green, jelly-like flesh. C. metuliferus is native to Southern Africa, in South Africa, Namibia, Botswana, Zambia, Malawi, Zimbabwe, Mozambique, and Angola.
Consumption and other uses
Kiwano is a traditional food plant in Africa. Along with the gemsbok cucumber (Acanthosicyos naudinianus) and tsamma (citron melon), it is one of the few sources of water during the dry season in the Kalahari Desert. In northern Zimbabwe, it is called gaka or gakachika, and is primarily used as a snack or salad, and rarely for decoration. It can be eaten at any stage of ripening.
C. metuliferus may be used as a rootstock (via grafting) for melon to prevent both growth reduction and a strong nematode buildup in M. incognita-infested soil.
The fruit's taste has been compared to a combination of banana and passionfruit, cucumber and zucchini or a combination of banana, cucumber and lime. A small amount of salt or sugar can increase the flavor, but the seed content can make eating the fruit less convenient than many common fruits.
Some also eat the peel, which is very rich in vitamin C and dietary fiber.
Germination
Germination is optimal between 20 and 35°C. Germination is delayed at 12°C, totally inhibited by seed stratification, and greatly inhibited above 35°C. Salinity increases the time required for full germination. The sowing dates greatly influence fruit yield and flowering.
Pests and diseases
Kiwano is resistant to several root-knot nematodes; two accessions were found to be highly resistant to watermelon mosaic virus, but very sensitive to the squash mosaic virus. Some accessions were found to succumb to Fusarium wilt. Resistance to greenhouse whitefly was reported. Kiwano was reported to be resistant to powdery mildew, but in Israel, powdery mildew and squash mosaic virus attacked kiwano fields and control measures had to be taken.
Fruit development
During 28 days of development on the plant, fresh weight, electrical conductivity and titratable acidity of fruits do not change, pH rises and then falls, and concentrations of reducing sugars and total soluble solids increase. In the same period, peel colour changes from green through whitish green to yellow and finally to orange, and the pigment profile shows a decline in pigments absorbing at 431 and 663 nm, and a rapid increase in those absorbing at 442 and 470 nm.
Gallery
| Biology and health sciences | Melons | Plants |
1236266 | https://en.wikipedia.org/wiki/Muridae | Muridae | The Muridae, or murids, are either the largest or second-largest family of rodents and of mammals, containing approximately 870 species, including many species of mice, rats, and gerbils found naturally throughout Eurasia, Africa, and Australia.
The name Muridae comes from the Latin (genitive ), meaning "mouse", since all true mice belong to the family, with the more typical mice belonging to the genus Mus.
Distribution and habitat
Murids are found nearly everywhere in the world, though many subfamilies have narrower ranges. Murids are not found in Antarctica or many oceanic islands. Although none of them are native to the Americas, a few species, notably the house mouse and black rat, have been introduced worldwide. Murids occupy a broad range of ecosystems from tropical forests to tundras. Fossorial, arboreal, and semiaquatic murid species occur, though most are terrestrial animals. The extensive list of niches filled by murids helps to explain their relative abundance.
Diet and dentition
A broad range of feeding habits is found in murids, ranging from herbivorous and omnivorous species to specialists that consume strictly earthworms, certain species of fungi, or aquatic insects. Most genera consume plant matter and small invertebrates, often storing seeds and other plant matter for winter consumption. Murids have sciurognathous jaws (an ancestral character in rodents) and a diastema is present. Murids lack canines and premolars. Generally, three molars (though sometimes only one or two) are found, and the nature of the molars varies by genus and feeding habit.
Reproduction
Some murids are highly social, while others are solitary. Females commonly produce several litters annually. In warm regions, breeding may occur year-round. Though the lifespans of most genera are generally less than two years, murids have high reproductive potential and their populations tend to increase rapidly and then drastically decline when food resources have been exhausted. This is often seen in a three- to four-year cycle.
Characteristics
The murids are small mammals, typically around long excluding the tail, but ranging from in the African pygmy mouse to in the northern Luzon giant cloud rat. They typically have slender bodies with scaled tails longer than the body, and pointed snouts with prominent whiskers, but with wide variation in these broad traits. Some murids have elongated legs and feet to allow them to move with a hopping motion, while others have broad feet and prehensile tails to improve their climbing ability, and yet others have neither adaptation. They are most commonly some shade of brown in color, although many have black, grey, or white markings.
Murids generally have excellent senses of hearing and smell. They live in a wide range of habitats from forest to grassland, and mountain ranges. A number of species, especially the gerbils, are adapted to desert conditions and can survive for a long time with minimal water. They consume a wide range of foods depending on the species, with the aid of powerful jaw muscles and gnawing incisors that grow throughout life. The dental formula of murids is .
Murids breed frequently, often producing large litters several times per year. They typically give birth between twenty and forty days after mating, although this varies greatly between species. The young are typically born blind, hairless, and helpless, although exceptions occur, such as in spiny mice.
Evolution
As with many other small mammals, the evolution of the murids is not well known, as few fossils survive. They probably evolved from hamster-like animals in tropical Asia some time in the early Miocene, and have only subsequently produced species capable of surviving in cooler climates. They have become especially common worldwide during the current geological epoch, as a result of hitching a ride commensally with human migrations.
Classification
The murids are classified in five subfamilies, around 150 genera, and about 834 species.
Subfamilies
Deomyinae (spiny mice, brush furred mice, link rat)
Gerbillinae (gerbils, jirds and sand rats)
Leimacomyinae (Togo mouse)
Lophiomyinae (maned rat or crested rat)
Murinae (Old World rats and mice, including vlei rats)
In literature
Murids feature in literature, including folk tales and fairy stories. In the Pied Piper of Hamelin, retold in many versions since the 14th century, including one by the Brothers Grimm, a rat-catcher lures the town's rats into the river, but the mayor refuses to pay him. In revenge, the rat-catcher lures away all the children of the town, never to return. Mice feature in some of Beatrix Potter's small books, including The Tale of Two Bad Mice (1904), The Tale of Mrs Tittlemouse (1910), The Tale of Johnny Town-Mouse (1918), and The Tailor of Gloucester (1903), which last was described by J. R. R. Tolkien as perhaps the nearest to his idea of a fairy story, the rest being "beast-fables". Among Aesop's Fables are The Cat and the Mice and The Frog and the Mouse. In James Herbert's first novel, The Rats, (1974), a vagrant is attacked and eaten alive by a pack of giant rats; further attacks follow.
| Biology and health sciences | Rodents | null |
1236472 | https://en.wikipedia.org/wiki/Chlorite%20group | Chlorite group | The chlorites are the group of phyllosilicate minerals common in low-grade metamorphic rocks and in altered igneous rocks. Greenschist, formed by metamorphism of basalt or other low-silica volcanic rock, typically contains significant amounts of chlorite.
Chlorite minerals show a wide variety of compositions, in which magnesium, iron, aluminium, and silicon substitute for each other in the crystal structure. A complete solid solution series exists between the two most common end members, magnesium-rich clinochlore and iron-rich chamosite. In addition, manganese, zinc, lithium, and calcium species are known. The great range in composition results in considerable variation in physical, optical, and X-ray properties. Similarly, the range of chemical composition allows chlorite group minerals to exist over a wide range of temperature and pressure conditions. For this reason chlorite minerals are ubiquitous minerals within low and medium temperature metamorphic rocks, some igneous rocks, hydrothermal rocks and deeply buried sediments.
The name chlorite is from the Greek chloros (χλωρός), meaning "green", in reference to its color. Chlorite minerals do not contain the element chlorine, also named from the same Greek root.
Properties
Chlorite forms blue-green crystals resembling mica. However, while the plates are flexible, they are not elastic like mica, and are less easily pulled apart. Talc is much softer and feels soapy between the fingers.
The typical general formula for chlorite is (Mg,Fe)3(Si,Al)4O10(OH)2·(Mg,Fe)3(OH)6. This formula emphasizes the structure of the group, which is described as TOT-O and consists of alternating TOT layers and O layers. The TOT layer (Tetrahedral-Octahedral-Tetrahedral = T-O-T) is often referred to as a talc layer, since talc is composed entirely of stacked TOT layers. The TOT layers of talc are electrically neutral and are bound only by relatively weak van der Waals forces. By contrast, the TOT layers of chlorite contain some aluminium in place of silicon, which gives the layers an overall negative charge. These TOT layers are bound together by positively charged O layers, sometimes called brucite layers. Mica is also composed of aluminium-rich, negatively charged TOT layers, but these are bonded together by individual cations (such as potassium, sodium, or calcium ions) rather than a positively charged brucite layer.
Chlorite is considered a clay mineral. It is a nonswelling clay mineral, since water is not adsorbed in the interlayer spaces, and it has a relatively low cation exchange capacity.
Occurrence
Chlorite is a common mineral, found in metamorphic, igneous, and sedimentary rocks. It is an important rock-forming mineral in low- to medium-grade metamorphic rock formed by metamorphism of mafic or pelitic rock. It is also common in igneous rocks, usually as a secondary mineral, formed by alteration of mafic minerals such as biotite, hornblende, pyroxene, or garnet. The glassy rims of pillow basalt on the ocean floor is often altered to pure chlorite, in part by exchange of chemicals with seawater. The green color of many igneous rocks, slates, and schists is due to fine particles of chlorite disseminated throughout the rock. Chlorite is a common weathering product and is widespread in clay and in sedimentary rock containing clay minerals. Chlorite is found in pelites along with quartz, albite, sericite, and garnet, and is also found in associate with actinolite and epidote.
In his pioneering work on metamorphic facies in the Scottish Highlands, G.M. Barrow identified the chlorite zone as the zone of mildest metamorphism. In modern petrology, chlorite is the diagnostic mineral of the greenschist facies. This facies is characterized by temperatures near and pressures near 5 kbar. At higher temperatures, much of the chlorite is destroyed by reactions with either potassium feldspar or phengite mica which produce biotite, muscovite, and quartz. At still higher temperatures, other reactions destroy the remaining chlorite, often with release of water vapor.
Chlorite is one of the most common minerals produced by propylitic alteration by hydrothermal systems, where it occurs in the "green rock" environment with epidote, actinolite, albite, hematite, and calcite.
Experiments indicate that chlorite can be stable in peridotite of the Earth's mantle above the ocean lithosphere carried down by subduction, and chlorite may even be present in the mantle volume from which island arc magmas are generated.
Members of the chlorite group
Baileychlore (IMA1986-056)
Borocookeite (IMA2000-013)
Chamosite (1820)
Clinochlore (1851)
Cookeite (1866)
Donbassite (1940)
Gonyerite (1955)
Nimite (1968)
Pennantite (1946)
Ripidolite (a variety of clinochlore)
Sudoite (IMA1966-027)
Clinochlore, pennantite, and chamosite are the most common varieties. Several other sub-varieties have been described. A massive compact variety of clinochlore used as a decorative carving stone is referred to by the trade name seraphinite. It occurs in the Korshunovskoye iron skarn deposit in the Irkutsk Oblast of Eastern Siberia.
Uses
Chlorite does not have any specific industrial uses of any importance. Some rock types containing chlorite, such as chlorite schist, have minor decorative uses or as construction stone. However, chlorite is a common mineral in clay, which has a vast number of uses.
Chlorite schist has been used as roofing granules, the mineral granules adhered to asphalt composition shingles due to the green color. It was quarried near Ely, Minnesota, US, until superseded by synthetic materials.
| Physical sciences | Silicate minerals | Earth science |
1236730 | https://en.wikipedia.org/wiki/Neoplasm | Neoplasm | A neoplasm () is a type of abnormal and excessive growth of tissue. The process that occurs to form or produce a neoplasm is called neoplasia. The growth of a neoplasm is uncoordinated with that of the normal surrounding tissue, and persists in growing abnormally, even if the original trigger is removed. This abnormal growth usually forms a mass, which may be called a tumour or tumor. ICD-10 classifies neoplasms into four main groups: benign neoplasms, in situ neoplasms, malignant neoplasms, and neoplasms of uncertain or unknown behavior. Malignant neoplasms are also simply known as cancers and are the focus of oncology.
Prior to the abnormal growth of tissue, such as neoplasia, cells often undergo an abnormal pattern of growth, such as metaplasia or dysplasia. However, metaplasia or dysplasia does not always progress to neoplasia and can occur in other conditions as well. The word neoplasm is from Ancient Greek 'new' and 'formation, creation'.
Types
A neoplasm can be benign, potentially malignant, or malignant (cancer).
Benign tumors include uterine fibroids, osteophytes, and melanocytic nevi (skin moles). They are circumscribed and localized and do not transform into cancer.
Potentially-malignant neoplasms include carcinoma in situ. They are localised, and do not invade and destroy but in time, may transform into cancer.
Malignant neoplasms are commonly called cancer. They invade and destroy the surrounding tissue, may form metastases and, if untreated or unresponsive to treatment, will generally prove fatal.
Secondary neoplasm refers to any of a class of cancerous tumor that is either a metastatic offshoot of a primary tumor, or an apparently unrelated tumor that increases in frequency following certain cancer treatments such as chemotherapy or radiotherapy.
Rarely there can be a metastatic neoplasm with no known site of the primary cancer and this is classed as a cancer of unknown primary origin.
Clonality
Neoplastic tumors are often heterogeneous and contain more than one type of cell, but their initiation and continued growth are usually dependent on a single population of neoplastic cells. These cells are presumed to be monoclonal – that is, they are derived from the same cell, and all carry the same genetic or epigenetic anomaly – evident of clonality. For lymphoid neoplasms, e.g. lymphoma and leukemia, clonality is proven by the amplification of a single rearrangement of their immunoglobulin gene (for B cell lesions) or T cell receptor gene (for T cell lesions). The demonstration of clonality is now considered to be necessary to identify a lymphoid cell proliferation as neoplastic.
Neoplasm vs. tumor
The word tumor or tumour comes from the Latin word for swelling, which is one of the cardinal signs of inflammation. The word originally referred to any form of swelling, neoplastic or not. In modern English, tumor (non-US spelling: tumour) is used as a synonym for a neoplasm (a solid or fluid-filled cystic lesion that may or may not be formed by an abnormal growth of neoplastic cells) that appears enlarged in size. Some neoplasms do not form a tumor; these include leukemia and most forms of carcinoma in situ. Tumor is also not synonymous with cancer. While cancer is by definition malignant, a tumor can be benign, precancerous, or malignant.
The terms mass and nodule are often used synonymously with tumor. Generally speaking, however, the term tumor is used generically, without reference to the physical size of the lesion. More specifically, the term mass is often used when the lesion has a maximal diameter of at least 20 millimeters (mm) in the greatest direction, while the term nodule is usually used when the size of the lesion is less than 20 mm in its greatest dimension (25.4 mm = 1 inch).
Causes
Tumors in humans occur as a result of accumulated genetic and epigenetic alterations within single cells, which cause the cell to divide and expand uncontrollably. A neoplasm can be caused by an abnormal proliferation of tissues, which can be caused by genetic mutations. Not all types of neoplasms cause a tumorous overgrowth of tissue (such as leukemia or carcinoma in situ), however similarities between neoplasmic growths and regenerative processes, e.g., dedifferentiation and rapid cell proliferation, have been pointed out.
Tumor growth has been studied using mathematics and continuum mechanics. Vascular tumors such as hemangiomas and lymphangiomas (formed from blood or lymph vessels) are thus looked at as being amalgams of a solid skeleton formed by sticky cells and an organic liquid filling the spaces in which cells can grow. Under this type of model, mechanical stresses and strains can be dealt with and their influence on the growth of the tumor and the surrounding tissue and vasculature elucidated. Recent findings from experiments that use this model show that active growth of the tumor is restricted to the outer edges of the tumor and that stiffening of the underlying normal tissue inhibits tumor growth as well.
Benign conditions that are not associated with an abnormal proliferation of tissue (such as sebaceous cysts) can also present as tumors, but have no malignant potential. Breast cysts (as occur commonly during pregnancy and at other times) are another example, as are other encapsulated glandular swellings (thyroid, adrenal gland, pancreas).
Encapsulated hematomas, encapsulated necrotic tissue (from an insect bite, foreign body, or other noxious mechanism), keloids (discrete overgrowths of scar tissue) and granulomas may also present as tumors.
Discrete localized enlargements of normal structures (ureters, blood vessels, intrahepatic or extrahepatic biliary ducts, pulmonary inclusions, or gastrointestinal duplications) due to outflow obstructions or narrowings, or abnormal connections, may also present as a tumor. Examples are arteriovenous fistulae or aneurysms (with or without thrombosis), biliary fistulae or aneurysms, sclerosing cholangitis, cysticercosis or hydatid cysts, intestinal duplications, and pulmonary inclusions as seen with cystic fibrosis. It can be dangerous to biopsy a number of types of tumor in which the leakage of their contents would potentially be catastrophic. When such types of tumors are encountered, diagnostic modalities such as ultrasound, CT scans, MRI, angiograms, and nuclear medicine scans are employed prior to (or during) biopsy or surgical exploration/excision in an attempt to avoid such severe complications.
Malignant neoplasms
DNA damage
DNA damage is considered to be the primary underlying cause of malignant neoplasms known as cancers. Its central role in progression to cancer is illustrated in the figure in this section, in the box near the top. (The central features of DNA damage, epigenetic alterations and deficient DNA repair in progression to cancer are shown in red.) DNA damage is very common. Naturally occurring DNA damages (mostly due to cellular metabolism and the properties of DNA in water at body temperatures) occur at a rate of more than 10,000 new damages, on average, per human cell, per day. Additional DNA damages can arise from exposure to exogenous agents. Tobacco smoke causes increased exogenous DNA damage, and these DNA damages are the likely cause of lung cancer due to smoking. UV light from solar radiation causes DNA damage that is important in melanoma. Helicobacter pylori infection produces high levels of reactive oxygen species that damage DNA and contributes to gastric cancer. Bile acids, at high levels in the colons of humans eating a high fat diet, also cause DNA damage and contribute to colon cancer. Katsurano et al. indicated that macrophages and neutrophils in an inflamed colonic epithelium are the source of reactive oxygen species causing the DNA damages that initiate colonic tumorigenesis (creation of tumors in the colon). Some sources of DNA damage are indicated in the boxes at the top of the figure in this section.
Individuals with a germline mutation causing deficiency in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) are at increased risk of cancer. Some germline mutations in DNA repair genes cause up to 100% lifetime chance of cancer (e.g., p53 mutations). These germline mutations are indicated in a box at the left of the figure with an arrow indicating their contribution to DNA repair deficiency.
About 70% of malignant (cancerous) neoplasms have no hereditary component and are called "sporadic cancers". Only a minority of sporadic cancers have a deficiency in DNA repair due to mutation in a DNA repair gene. However, a majority of sporadic cancers have deficiency in DNA repair due to epigenetic alterations that reduce or silence DNA repair gene expression. For example, of 113 sequential colorectal cancers, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region (an epigenetic alteration). Five reports present evidence that between 40% and 90% of colorectal cancers have reduced MGMT expression due to methylation of the MGMT promoter region.
Similarly, out of 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, PMS2 was deficient in 6 due to mutations in the PMS2 gene, while in 103 cases PMS2 expression was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1). In the other 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA, miR-155, which down-regulates MLH1.
In further examples, epigenetic defects were found at frequencies of between 13%-100% for the DNA repair genes BRCA1, WRN, FANCB, FANCF, MGMT, MLH1, MSH2, MSH4, ERCC1, XPF, NEIL1 and ATM. These epigenetic defects occurred in various cancers, including breast, ovarian, colorectal, and head and neck cancers. Two or three deficiencies in expression of ERCC1, XPF or PMS2 occur simultaneously in the majority of the 49 colon cancers evaluated by Facista et al. Epigenetic alterations causing reduced expression of DNA repair genes is shown in a central box at the third level from the top of the figure in this section, and the consequent DNA repair deficiency is shown at the fourth level.
When expression of DNA repair genes is reduced, DNA damages accumulate in cells at a higher than normal level, and these excess damages cause increased frequencies of mutation or epimutation. Mutation rates strongly increase in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR).
During repair of DNA double strand breaks, or repair of other DNA damages, incompletely cleared sites of repair can cause epigenetic gene silencing. DNA repair deficiencies (level 4 in the figure) cause increased DNA damages (level 5 in the figure) which result in increased somatic mutations and epigenetic alterations (level 6 in the figure).
Field defects, normal-appearing tissue with multiple alterations (and discussed in the section below), are common precursors to development of the disordered and improperly proliferating clone of tissue in a malignant neoplasm. Such field defects (second level from bottom of figure) may have multiple mutations and epigenetic alterations.
Once a cancer is formed, it usually has genome instability. This instability is likely due to reduced DNA repair or excessive DNA damage. Because of such instability, the cancer continues to evolve and to produce sub-clones. For example, a renal cancer sampled in 9 areas had 40 ubiquitous mutations (i.e. present in all areas of the cancer), 59 mutations shared by some but not all areas, and 29 "private" mutations present in only one area, demonstrating the tumor's heterogeneity.
Field defects
Various other terms have been used to describe this phenomenon, including "field effect", "field cancerization", and "field carcinogenesis". The term "field cancerization" was first used in 1953 to describe an area or "field" of epithelium that has been preconditioned by (at that time) largely unknown processes so as to predispose it towards development of cancer. Since then, the terms "field cancerization" and "field defect" have been used to describe pre-malignant tissue in which new cancers are likely to arise.
Field defects are important in progression to cancer. However, as pointed out by Rubin, "the vast majority of studies in cancer research has been done on well-defined tumors in vivo, or on discrete neoplastic foci in vitro. Yet there is evidence that more than 80% of the somatic mutations found in mutator phenotype human colorectal tumors occur before the onset of terminal clonal expansion." Similarly, Vogelstein et al. point out that more than half of somatic mutations identified in tumors occurred in a pre-neoplastic phase (in a field defect), during growth of apparently normal cells. Likewise, epigenetic alterations present in tumors may have occurred in pre-neoplastic field defects.
An expanded view of field effect has been termed "etiologic field effect", which encompasses not only molecular and pathologic changes in pre-neoplastic cells but also influences of exogenous environmental factors and molecular changes in the local microenvironment on neoplastic evolution from tumor initiation to patient death.
In the colon, a field defect probably arises by natural selection of a mutant or epigenetically altered cell among the stem cells at the base of one of the intestinal crypts on the inside surface of the colon. A mutant or epigenetically altered stem cell may replace the other nearby stem cells by natural selection. Thus, a patch of abnormal tissue may arise. The figure in this section includes a photo of a freshly resected and lengthwise-opened segment of the colon showing a colon cancer and four polyps. Below the photo, there is a schematic diagram of how a large patch of mutant or epigenetically altered cells may have formed, shown by the large area in yellow in the diagram. Within this first large patch in the diagram (a large clone of cells), a second such mutation or epigenetic alteration may occur so that a given stem cell acquires an advantage compared to other stem cells within the patch, and this altered stem cell may expand clonally forming a secondary patch, or sub-clone, within the original patch. This is indicated in the diagram by four smaller patches of different colors within the large yellow original area. Within these new patches (sub-clones), the process may be repeated multiple times, indicated by the still smaller patches within the four secondary patches (with still different colors in the diagram) which clonally expand, until stem cells arise that generate either small polyps or else a malignant neoplasm (cancer).
In the photo, an apparent field defect in this segment of a colon has generated four polyps (labeled with their sizes: 6 mm, 5 mm, and two of 3 mm) and a cancer about 3 cm across in its longest dimension. These neoplasms are also indicated, in the diagram below the photo, by 4 small tan circles (polyps) and a larger red area (cancer). The cancer in the photo occurred in the cecal area of the colon, where the colon joins the small intestine (labeled) and where the appendix occurs (labeled). The fat in the photo is external to the outer wall of the colon. In the segment of colon shown here, the colon was cut open lengthwise to expose the inner surface of the colon and to display the cancer and polyps occurring within the inner epithelial lining of the colon.
If the general process by which sporadic colon cancers arise is the formation of a pre-neoplastic clone that spreads by natural selection, followed by formation of internal sub-clones within the initial clone, and sub-sub-clones inside those, then colon cancers generally should be associated with, and be preceded by, fields of increasing abnormality reflecting the succession of premalignant events. The most extensive region of abnormality (the outermost yellow irregular area in the diagram) would reflect the earliest event in formation of a malignant neoplasm.
In experimental evaluation of specific DNA repair deficiencies in cancers, many specific DNA repair deficiencies were also shown to occur in the field defects surrounding those cancers. The Table, below, gives examples for which the DNA repair deficiency in a cancer was shown to be caused by an epigenetic alteration, and the somewhat lower frequencies with which the same epigenetically caused DNA repair deficiency was found in the surrounding field defect.
Some of the small polyps in the field defect shown in the photo of the opened colon segment may be relatively benign neoplasms. Of polyps less than 10mm in size, found during colonoscopy and followed with repeat colonoscopies for 3 years, 25% were unchanged in size, 35% regressed or shrank in size while 40% grew in size.
Genome instability
Cancers are known to exhibit genome instability or a mutator phenotype. The protein-coding DNA within the nucleus is about 1.5% of the total genomic DNA. Within this protein-coding DNA (called the exome), an average cancer of the breast or colon can have about 60 to 70 protein altering mutations, of which about 3 or 4 may be "driver" mutations, and the remaining ones may be "passenger" mutations. However, the average number of DNA sequence mutations in the entire genome (including non-protein-coding regions) within a breast cancer tissue sample is about 20,000. In an average melanoma tissue sample (where melanomas have a higher exome mutation frequency) the total number of DNA sequence mutations is about 80,000. This compares to the very low mutation frequency of about 70 new mutations in the entire genome between generations (parent to child) in humans.
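As a rough, back-of-the-envelope comparison (assuming a haploid genome size of about 3 × 10⁹ base pairs, a figure not stated above), these counts correspond to very different per-base mutation frequencies:

$$\frac{2\times10^{4}\ \text{somatic mutations}}{3\times10^{9}\ \text{bp}} \approx 7\times10^{-6}\ \text{per bp (breast cancer)} \qquad \text{versus} \qquad \frac{70\ \text{germline mutations}}{3\times10^{9}\ \text{bp}} \approx 2\times10^{-8}\ \text{per bp per generation},$$

a difference of roughly 300-fold, consistent with the elevated mutation frequencies described above.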
The high frequencies of mutations in the total nucleotide sequences within cancers suggest that often an early alteration in the field defects giving rise to a cancer (e.g. yellow area in the diagram in this section) is a deficiency in DNA repair. The large field defects surrounding colon cancers (extending about 10 cm to each side of a cancer) were shown by Facista et al. to frequently have epigenetic defects in 2 or 3 DNA repair proteins (ERCC1, XPF or PMS2) in the entire area of the field defect. Deficiencies in DNA repair cause increased mutation rates. A deficiency in DNA repair, itself, can allow DNA damages to accumulate, and error-prone translesion synthesis past some of those damages may give rise to mutations. In addition, faulty repair of these accumulated DNA damages may give rise to epimutations. These new mutations or epimutations may provide a proliferative advantage, generating a field defect. Although the mutations/epimutations in DNA repair genes do not, themselves, confer a selective advantage, they may be carried along as passengers in cells when the cells acquire additional mutations/epimutations that do provide a proliferative advantage.
Etymology
The term neoplasm is a synonym of tumor. Neoplasia denotes the process of the formation of neoplasms/tumors, and the process is referred to as a neoplastic process. The word neoplastic itself comes from the Greek roots neo 'new' and plastic 'formed, molded'.
The term tumor derives from the Latin noun tumor 'a swelling', ultimately from the verb tumere 'to swell'. In the British Commonwealth, the spelling tumour is commonly used, whereas in the U.S. the word is usually spelled tumor.
In its medical sense, tumor has traditionally meant an abnormal swelling of the flesh. The Roman medical encyclopedist Celsus (c. 30 BC–38 AD) described the four cardinal signs of acute inflammation as tumor, dolor, calor, and rubor (swelling, pain, increased heat, and redness). (His treatise, De Medicina, was the first medical book printed, in 1478, following the invention of the movable-type printing press.)
In contemporary English, the word tumor is often used as a synonym for a cystic (liquid-filled) growth or solid neoplasm (cancerous or non-cancerous), with other forms of swelling often referred to as "swellings".
Related terms occur commonly in the medical literature, where the nouns tumefaction and tumescence (derived from the adjective tumescent) are current medical terms for non-neoplastic swelling. This type of swelling most often results from inflammation caused by trauma, infection, and other factors.
Tumors may be caused by conditions other than an overgrowth of neoplastic cells, however. Cysts (such as sebaceous cysts) are also referred to as tumors, even though they have no neoplastic cells. This is standard in medical-billing terminology (especially when billing for a growth whose pathology has yet to be determined).
| Biology and health sciences | Cancer | null |
1237393 | https://en.wikipedia.org/wiki/Ligand%20field%20theory | Ligand field theory | Ligand field theory (LFT) describes the bonding, orbital arrangement, and other characteristics of coordination complexes. It represents an application of molecular orbital theory to transition metal complexes. A transition metal ion has nine valence atomic orbitals - consisting of five nd, one (n+1)s, and three (n+1)p orbitals. These orbitals have the appropriate energy to form bonding interactions with ligands. The LFT analysis is highly dependent on the geometry of the complex, but most explanations begin by describing octahedral complexes, where six ligands coordinate with the metal. Other complexes can be described with reference to crystal field theory. Inverted ligand field theory (ILFT) elaborates on LFT by breaking assumptions made about relative metal and ligand orbital energies.
History
Ligand field theory resulted from combining the principles laid out in molecular orbital theory and crystal field theory, which describe the loss of degeneracy of metal d orbitals in transition metal complexes. John Stanley Griffith and Leslie Orgel championed ligand field theory as a more accurate description of such complexes, although the theory originated in the 1930s with the work on magnetism by John Hasbrouck Van Vleck. Griffith and Orgel used the electrostatic principles established in crystal field theory to describe transition metal ions in solution and used molecular orbital theory to explain the differences in metal-ligand interactions, thereby explaining such observations as crystal field stabilization and visible spectra of transition metal complexes. In their paper, they proposed that the chief cause of color differences in transition metal complexes in solution is the incomplete d orbital subshells. That is, the unoccupied d orbitals of transition metals participate in bonding, which influences the colors they absorb in solution. In ligand field theory, the various d orbitals are affected differently when surrounded by a field of neighboring ligands and are raised or lowered in energy based on the strength of their interaction with the ligands.
Bonding
σ-bonding (sigma bonding)
In an octahedral complex, the molecular orbitals created by coordination can be seen as resulting from the donation of two electrons by each of six σ-donor ligands to the d-orbitals on the metal. In octahedral complexes, ligands approach along the x-, y- and z-axes, so their σ-symmetry orbitals form bonding and anti-bonding combinations with the dz2 and dx2−y2 orbitals. The dxy, dxz and dyz orbitals remain non-bonding orbitals. Some weak bonding (and anti-bonding) interactions with the s and p orbitals of the metal also occur, to make a total of 6 bonding (and 6 anti-bonding) molecular orbitals.
In molecular symmetry terms, the six lone-pair orbitals from the ligands (one from each ligand) form six symmetry-adapted linear combinations (SALCs) of orbitals, also sometimes called ligand group orbitals (LGOs). The irreducible representations that these span are a1g, t1u and eg. The metal also has six valence orbitals that span these irreducible representations - the s orbital is labeled a1g, a set of three p-orbitals is labeled t1u, and the dz2 and dx2−y2 orbitals are labeled eg. The six σ-bonding molecular orbitals result from the combinations of ligand SALCs with metal orbitals of the same symmetry.
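As a consistency check (a standard group-theory counting argument, sketched here rather than taken from the text), the dimensions of these irreducible representations sum to the number of ligand lone-pair orbitals:

$$\underbrace{1}_{a_{1g}} + \underbrace{3}_{t_{1u}} + \underbrace{2}_{e_{g}} = 6,$$

so the six ligand SALCs pair one-for-one with the six metal orbitals of matching symmetry (s, the three p orbitals, and the dz2/dx2−y2 pair), giving the six σ-bonding and six σ-anti-bonding molecular orbitals mentioned above.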
π-bonding (pi bonding)
π bonding in octahedral complexes occurs in two ways: via any ligand p-orbitals that are not being used in σ bonding, and via any π or π* molecular orbitals present on the ligand.
In the usual analysis, the p-orbitals of the metal are used for σ bonding (and have the wrong symmetry to overlap with the ligand p or π or π* orbitals anyway), so the π interactions take place with the appropriate metal d-orbitals, i.e. dxy, dxz and dyz. These are the orbitals that are non-bonding when only σ bonding takes place.
One important π bonding in coordination complexes is metal-to-ligand π bonding, also called π backbonding. It occurs when the LUMOs (lowest unoccupied molecular orbitals) of the ligand are anti-bonding π* orbitals. These orbitals are close in energy to the dxy, dxz and dyz orbitals, with which they combine to form bonding orbitals (i.e. orbitals of lower energy than the aforementioned set of d-orbitals). The corresponding anti-bonding orbitals are higher in energy than the anti-bonding orbitals from σ bonding so, after the new π bonding orbitals are filled with electrons from the metal d-orbitals, ΔO has increased and the bond between the ligand and the metal strengthens. The ligands end up with electrons in their π* molecular orbital, so the corresponding π bond within the ligand weakens.
The other form of coordination π bonding is ligand-to-metal bonding. This situation arises when the π-symmetry p or π orbitals on the ligands are filled. They combine with the dxy, dxz and dyz orbitals on the metal and donate electrons to the resulting π-symmetry bonding orbital between them and the metal. The metal-ligand bond is somewhat strengthened by this interaction, but the complementary anti-bonding molecular orbital from ligand-to-metal bonding is not higher in energy than the anti-bonding molecular orbital from the σ bonding. It is filled with electrons from the metal d-orbitals, however, becoming the HOMO (highest occupied molecular orbital) of the complex. For that reason, ΔO decreases when ligand-to-metal bonding occurs.
The greater stabilization that results from metal-to-ligand bonding is caused by the donation of negative charge away from the metal ion, towards the ligands. This allows the metal to accept the σ bonds more easily. The combination of ligand-to-metal σ-bonding and metal-to-ligand π-bonding is a synergic effect, as each enhances the other.
As each of the six ligands has two orbitals of π-symmetry, there are twelve in total. The symmetry adapted linear combinations of these fall into four triply degenerate irreducible representations, one of which is of t2g symmetry. The dxy, dxz and dyz orbitals on the metal also have this symmetry, and so the π-bonds formed between a central metal and six ligands also have it (as these π-bonds are just formed by the overlap of two sets of orbitals with t2g symmetry.)
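For reference, the full decomposition of these twelve π-type ligand orbitals in octahedral symmetry is the textbook result (stated here as an assumption, since only one representation is named above)

$$\Gamma_{\pi} = t_{1g} + t_{2g} + t_{1u} + t_{2u},$$

of which only the t2g set finds a symmetry match among the metal d-orbitals (dxy, dxz and dyz); the t1g and t2u combinations remain non-bonding, and the t1u set can interact only weakly with the metal p orbitals already engaged in σ bonding.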
High and low spin and the spectrochemical series
The six bonding molecular orbitals that are formed are "filled" with the electrons from the ligands, and electrons from the d-orbitals of the metal ion occupy the non-bonding and, in some cases, anti-bonding MOs. The energy difference between the latter two types of MOs is called ΔO (O stands for octahedral) and is determined by the nature of the π-interaction between the ligand orbitals and the d-orbitals on the central atom. As described above, π-donor ligands lead to a small ΔO and are called weak- or low-field ligands, whereas π-acceptor ligands lead to a large value of ΔO and are called strong- or high-field ligands. Ligands that are neither π-donor nor π-acceptor give a value of ΔO somewhere in-between.
The size of ΔO determines the electronic structure of the d4 - d7 ions. In complexes of metals with these d-electron configurations, the non-bonding and anti-bonding molecular orbitals can be filled in two ways: one in which as many electrons as possible are put in the non-bonding orbitals before filling the anti-bonding orbitals, and one in which as many unpaired electrons as possible are put in. The former case is called low-spin, while the latter is called high-spin. A small ΔO can be overcome by the energetic gain from not pairing the electrons, leading to high-spin. When ΔO is large, however, the spin-pairing energy becomes negligible by comparison and a low-spin state arises.
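A compact way to state this competition, writing P for the mean spin-pairing energy (a symbol introduced here for illustration), is to compare the ligand-field plus pairing energies of the two configurations. For a d6 ion in an octahedral field, for example,

$$E_{\text{low spin}}\left(t_{2g}^{6}e_{g}^{0}\right) = -\tfrac{12}{5}\Delta_{O} + 3P \qquad\text{and}\qquad E_{\text{high spin}}\left(t_{2g}^{4}e_{g}^{2}\right) = -\tfrac{2}{5}\Delta_{O} + P,$$

so the low-spin configuration is favored when ΔO > P and the high-spin configuration when ΔO < P, in line with the qualitative description above.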
The spectrochemical series is an empirically-derived list of ligands ordered by the size of the splitting Δ that they produce. It can be seen that the low-field ligands are all π-donors (such as I−), the high field ligands are π-acceptors (such as CN− and CO), and ligands such as H2O and NH3, which are neither, are in the middle.
I− < Br− < S2− < SCN− < Cl− < NO3− < N3− < F− < OH− < C2O42− < H2O < NCS− < CH3CN < py (pyridine) < NH3 < en (ethylenediamine) < bipy (2,2'-bipyridine) < phen (1,10-phenanthroline) < NO2− < PPh3 < CN− < CO
| Physical sciences | Bond structure | Chemistry |
1237983 | https://en.wikipedia.org/wiki/Tamarin | Tamarin | The tamarins are squirrel-sized New World monkeys from the family Callitrichidae in the genus Saguinus. They are the first offshoot in the Callitrichidae tree, and therefore are the sister group of a clade formed by the lion tamarins, Goeldi's monkeys and marmosets.
Taxonomy and evolutionary history
Hershkovitz (1977) recognised ten species in the genus Saguinus, further divided into 33 morphotypes based on facial pelage. A later classification into two clades was based on variations in dental measurements. A taxonomic review (Rylands et al., 2016) showed the tamarins are a sister group to all other callitrichids, branching off 15–13 million years ago. Within this clade, six species groups are historically recognised, nigricollis, mystax, midas, inustus, bicolor and oedipus, five of which were shown to be valid, with Saguinus inustus placed within the midas group. The review noted that the smaller-bodied nigricollis group began diverging 11–8 million years ago, leading the authors to move them to a separate genus, Leontocebus (saddle-back tamarins). While a 2018 study proposed that Leontocebus does not have sufficient divergence from Saguinus to be in its own genus, and thus should be reclassified as a subgenus of Saguinus, this proposal has not found significant traction. The same study found the mystax group of tamarins to be distinct enough to be classified in the subgenus Tamarinus. As of 2021 this proposal has not been universally accepted by primatologists.
Taxonomic classification
Following the taxonomic review of tamarins by Rylands et al. (2016) and Garbino & Martins-Junior (2018), there are 22 species in the genus Saguinus with 19 subspecies.
Genus Saguinus
Subgenus Saguinus Hoffmannsegg, 1807
S. midas group
Golden-handed tamarin, midas tamarin, or red-handed tamarin, Saguinus midas
Western black-handed tamarin or black tamarin, Saguinus niger
Eastern black-handed tamarin, Saguinus ursulus
S. bicolor group
Pied tamarin, Saguinus bicolor
Martins's tamarin, Saguinus martinsi
Martins's bare-face tamarin, Saguinus martinsi martinsi
Ochraceus bare-face tamarin, Saguinus martinsi ochraceus
S. oedipus group
Cotton-top tamarin or Pinché tamarin, Saguinus oedipus
Geoffroy's tamarin, Saguinus geoffroyi
White-footed tamarin, Saguinus leucopus
Subgenus Tamarinus Trouessart, 1904
Moustached tamarin, Saguinus mystax
Spix's moustached tamarin, Saguinus mystax mystax
Red-capped tamarin, Saguinus mystax pileatus
White-rump moustached tamarin, Saguinus mystax pluto
White-lipped tamarin, Saguinus labiatus
Geoffroy's red-bellied tamarin, Saguinus labiatus labiatus
Thomas's red-bellied tamarin, Saguinus labiatus thomasi
Gray's red-bellied tamarin, Saguinus labiatus rufiventer
Emperor tamarin, Saguinus imperator
Black-chinned emperor tamarin, Saguinus imperator imperator
Bearded emperor tamarin, Saguinus imperator subgrisescens
Mottle-faced tamarin, Saguinus inustus
Description
Tamarin species vary considerably in appearance, ranging from nearly all black through mixtures of black, brown and white. Mustache-like facial hairs are typical for many species. Their body size ranges from (plus a tail). They weigh from . In captivity, red-bellied tamarins have been recorded living up to 20.5 years, while cotton-top tamarins can live up to 23 years old.
Distribution
Tamarins range from southern Central America through central South America, where they are found in northwestern Colombia, the Amazon basin, and the Guianas.
Behavior and reproduction
Tamarins are inhabitants of tropical rainforests and open forest areas. They are diurnal and arboreal, and run and jump quickly through the trees. Tamarins live together in groups of up to 40 members consisting of one or more families. More frequently, though, groups are composed of just three to nine members.
Tamarins are omnivores, eating fruits and other plant parts as well as spiders, insects, small vertebrates and bird eggs.
Gestation is typically 140 days, and births are normally twins. The adult males, subadults, and juveniles in the group assist with caring for the young, bringing them to their mother to nurse. After approximately one month the young begin to eat solid food, although they are not fully weaned for another two to three months. They reach full maturity in their second year. Tamarins are almost exclusively polyandrous.
Cottontop tamarins (Saguinus oedipus) breed cooperatively in the wild. Cronin, Kurian, and Snowdon tested eight cottontop tamarins in a series of cooperative pulling experiments. Two monkeys were put on opposite sides of a transparent apparatus containing food. Only if both monkeys pulled a handle on their side of the apparatus towards themselves at the same time would food drop down for them to obtain. The results showed that tamarins pulled the handles at a lower rate when alone with the apparatus than when in the presence of a partner. Cronin, Kurian, and Snowdon concluded from this that cottontop tamarins have a good understanding of cooperation. They suggest that cottontop tamarins have developed cooperative behaviour as a cognitive adaptation.
In some locations, saddle-back tamarins (subgenus Leontocebus) live sympatrically with tamarins of the subgenus Saguinus, but the saddle-back tamarins typically occupy lower strata of the forest than do the Saguinus species. Saddle-back tamarins have longer and narrower hands than Saguinus species, possibly an adaptation to differing foraging behavior, as saddle-back tamarins are more likely to search for insects that are hidden in knotholes, crevices, bromeliad tanks and leaf litter, while Saguinus species are more likely to forage for insects that are exposed on surfaces such as leaves or branches. This differentiation in lifestyles was why both were formerly considered different genera.
Predators
While tamarins spend much of their day foraging, they must be on high alert for aerial and terrestrial predators. Due to their small size compared to other primates, they are an easy target for predatory birds, snakes, and mammals.
| Biology and health sciences | New World monkeys | Animals |
1238041 | https://en.wikipedia.org/wiki/Abies%20grandis | Abies grandis | Abies grandis (grand fir, giant fir, lowland white fir, great silver fir, western white fir, Vancouver fir, or Oregon fir) is a fir native to northwestern North America, occurring at altitudes of sea level to . It is a major constituent of the Grand Fir/Douglas Fir Ecoregion of the Cascade Range.
The tree typically grows to in height, and may be the tallest Abies species in the world. There are two varieties, the taller coast grand fir, found west of the Cascade Mountains, and the shorter interior grand fir, found east of the Cascades. It was first described in 1831 by David Douglas.
It is closely related to white fir. The bark was historically believed to have medicinal properties, and it is popular in the United States as a Christmas tree. Its lumber is a softwood, and it is harvested as a hem fir. It is used in paper-making, as well as construction for framing and flooring, where it is desired for its resistance to splitting and splintering.
Description
Abies grandis is a large evergreen conifer growing to tall, exceptionally , with a trunk diameter of up to . The dead tree tops sometimes fork into new growth. The bark is thick, reddish to gray (but purple within), furrowed, and divided into slender plates. The leaves are needle-like, flattened, long and wide by 0.5 mm thick, glossy dark green above, with two green-white bands of stomata below, and slightly notched at the tip. The leaf arrangement is spiral on the shoot, but with each leaf variably twisted at the base so they all lie in two more-or-less flat ranks on either side of the shoot. On the lower leaf surface, two green-white bands of stomata are prominent. The base of each leaf is twisted a variable amount so that the leaves are nearly coplanar.
The green-to-reddish cones are long and broad, with about 100–150 scales; the scale bracts are short, and hidden in the closed cone. The winged seeds are released when the cones disintegrate at maturity about 6 months after pollination.
Varieties
There are two varieties, probably better treated at subspecies rank though not yet formally published as such:
Abies grandis var. grandis. Coast grand fir. Coastal lowland forests, at sea level to 900 m altitude, from Vancouver Island and coastal British Columbia, south to Sonoma County, California. A large, very fast-growing tree to 70 m tall. Foliage strongly flattened on all shoots. Cones slightly narrower (mostly less than 4 cm broad), with thinner, fairly flexible scales. Tolerates winter temperatures down to about -25° to -30 °C; growth on good sites may exceed 1.5 m per year when young.
Abies grandis var. idahoensis. Interior grand fir. Interior forests, at (600–) 900–1800 m altitude, on the east slope of the Cascades in Washington and in the Rocky Mountains from southeast British Columbia south to central Idaho, northeast Oregon and western Montana. A smaller, slow-growing tree to 40–45 m tall. Foliage not strongly flattened on all shoots, the leaves often raised above the shoot, particularly on upper crown shoots. Cones slightly stouter (mostly over 4 cm broad), with thicker, slightly woody scales. Tolerates winter temperatures down to about -40 °C; growth on good sites not exceeding 0.6 m per year even when young.
Grand fir is very closely related to white fir (Abies concolor), and intergrades with it in central Oregon. Firs of the Blue Mountains and Oregon East Cascade Slope are intermediate between the two species in genetics and appearance. The intergrades are often referred to as "Abies grandis x concolor", a variety which itself intergrades into Abies concolor lowiana farther south, around the California state line.
Taxonomy
The species was first described by Scottish botanical explorer David Douglas, who in 1830 brought its seeds back to Britain; in 1831 he described specimens he had collected along the Columbia River in the Pacific Northwest.
Distribution and habitat
The coastal variety of grand fir grows in temperate rainforest environments along the Pacific coast from southwest British Columbia to Northern California, with the inland variety growing in montane conifer forests of eastern Washington, the Idaho Panhandle, and far western Montana. It can be found growing at elevations of up to . Habitats typically receive at least of annual rainfall, but are still too dry or outside the range of more shade-tolerant competitors like western hemlock and western redcedar. Along with the closely related white fir, grand fir is more shade tolerant than Douglas-fir.
Ecology
Due to wildfire suppression, grand fir was able to proliferate in areas previously dominated by the relatively fire-resistant inland Douglas fir, ponderosa pine, and western larch. The lack of smaller fires allows both grand and white fir saplings to form a fuel ladder, enabling crown fires. Grand fir's bark is thinner than that of white fir, making the former species more susceptible to threats like fire and rot.
Specimens have historically been able to live up to nearly 300 years, but in modern stressed conditions, 100 years is more typical. A number of defoliating insects threaten the tree; in the late 20th century, western spruce budworm epidemics killed sizable populations of grand fir in the eastern Cascades and Blue Mountains. The lack of an ability to use pitch to patch wounds, including those from logging and small fires, provides a weakness exploited by rot fungi. East of the Cascade ridge, grand fir trunks are infected by Indian paint fungus, indicating a rotten core; such specimens are often waterlogged and thus crack apart in freezing weather.
Pileated woodpeckers search grand and white firs for insects and places to nest. Rotten cores open shelters for various animals, including black bears.
Uses
The boughs create a rain shelter for humans.
Native Americans used both grand fir and white fir, powdering the bark or pitch to treat tuberculosis or skin ailments; the Nlaka'pamux used the bark to cover lodges and make canoes, and branches were used as bedding. The inner bark of the grand fir was used by some Plateau Indian tribes for treating colds and fever.
The Okanagan-Colville tribe used the species as a strengthening drug to nullify the feeling of weakness.
The foliage has an attractive citrus-like scent. It is sometimes used for Christmas decorations in the United States, including Christmas trees, although its stiff branches do not allow it to be economically packed. It is also planted as an ornamental tree in large parks.
Timber
The lumber is non-resinous and fine textured.
In the North American logging industry, the grand fir is often referred to as "hem fir", with hem fir being a number of species with interchangeable types of wood (specifically the California red fir, noble fir, Pacific silver fir, white fir, and western hemlock). Grand fir is often shipped along with these other species. It can also be referred to as "white fir" lumber, an umbrella term also referring to Abies amabilis, Abies concolor, and Abies magnifica.
Lumber from the grand fir is considered a softwood. As such, it is used for paper making, packing crates, and construction. Hem fir is frequently used for framing, and is able to meet the building code span requirements of numerous construction projects.
As a hem fir, the trunk of the grand fir is considered slightly below the "Douglas fir-larch" species combination in strength, and stronger than the "Douglas fir-South" and "spruce-pine-fir (South)" species combos (both umbrella terms for a number of species with similar wood). Because it is nearly as strong as Douglas fir-larch, it often meets the structural load-bearing requirements for framing in residential, light commercial, and heavy construction. Excluding Douglas fir-larch, hem fir's modulus of elasticity value as a stiffness factor in floor systems (denoted as MOE or E) is higher than that of all other western species combinations. Hem fir is preferred by many builders because of its ability to hold and not be split by nails and screws, and its low propensity for splintering when sawed.
Notable specimens
In February 2022, a coast grand fir growing south of Bergen was found to be Norway's tallest tree, with a height of .
| Biology and health sciences | Pinaceae | Plants |
1238181 | https://en.wikipedia.org/wiki/Blueschist | Blueschist | Blueschist (), also called glaucophane schist, is a metavolcanic rock that forms by the metamorphism of basalt and rocks with similar composition at high pressures and low temperatures (), approximately corresponding to a depth of . The blue color of the rock comes from the presence of the predominant minerals glaucophane and lawsonite.
Blueschists are schists typically found within orogenic belts as terranes of this lithology in faulted contact with greenschist or, more rarely, eclogite facies rocks.
Petrology
Blueschist, as a rock type, is defined by the presence of the minerals glaucophane + ( lawsonite or epidote ) +/- jadeite +/- albite or chlorite +/- garnet +/- muscovite in a rock of roughly basaltic composition.
Blueschist often has a lepidoblastic, nematoblastic or schistose rock microstructure defined primarily by chlorite, phengitic white mica, glaucophane, and other minerals with an elongate or platy shape.
Grain size is rarely coarse, as mineral growth is retarded by the swiftness of the rock's metamorphic trajectory and perhaps more importantly, the low temperatures of metamorphism and in many cases the anhydrous state of the basalts. However, porphyritic varieties do occur. Blueschists may appear blue, black, gray, or blue-green in outcrop.
Blueschist facies
Blueschist facies is determined by the particular temperature and pressure conditions required to metamorphose basalt to form blueschist. Felsic rocks and pelitic sediments which are subjected to blueschist facies conditions will form different mineral assemblages than metamorphosed basalt. Thereby, these rocks do not appear blue overall in color.
Blueschist mineralogy varies by rock composition, but the classic equilibrium assemblages of blueschist facies are:
Basalts: glaucophane + lawsonite and/or epidote + albite + titanite +/- garnet +/- quartz; the assemblage jadeite + quartz is diagnostic of pressures of roughly >10 kbar
Ultramafic rocks: serpentinite/lizardite +/- talc +/- zoisite
Pelites: Fe-Mg-carpholite +/- chloritoid +/- kyanite + zoisite +/- pargasite or phengite +/- albite +/- quartz +/- talc +/- garnet
Granites: kyanite +/- paragonite +/- chlorite +/- albite +/- quartz +/- pargasite or phengite
Calc-silicates: Various
Limestones and marble: calcite transforms to aragonite at high pressure, but typically reverts to calcite when exhumed
Blueschist facies generally is considered to form under pressures of >0.6 GPa, equivalent to depth of burial in excess of 15–18 km, and at temperatures of between 200 and 500 °C. This is a 'low temperature, high pressure' prograde metamorphic path and is also known as the Franciscan facies series, after the west coast of the United States where these rocks are exposed. Well-exposed blueschists also occur in Greece, Turkey, Japan, New Zealand and New Caledonia.
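As a rough check of the quoted pressure–depth correspondence, the lithostatic relation P ≈ ρgz with an assumed mean overburden density of about 2,800–3,300 kg/m³ (a value not given in the text) yields

$$z \approx \frac{P}{\rho g} = \frac{0.6\times10^{9}\ \text{Pa}}{(2{,}800\text{–}3{,}300\ \text{kg m}^{-3})(9.8\ \text{m s}^{-2})} \approx 19\text{–}22\ \text{km},$$

of the same order as the 15–18 km burial depths cited above; the exact figure depends strongly on the density assumed for the overlying rock column.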
Continued subduction of blueschist facies oceanic crust will produce eclogite facies assemblages in metamorphosed basalt (garnet + omphacitic clinopyroxene). Rocks which have been subjected to blueschist conditions during a prograde trajectory will gain heat by conduction with hotter lower crustal rocks if they remain at the 15–18 km depth. Blueschist which heats up to greater than 500 °C via this fashion will enter greenschist or eclogite facies temperature-pressure conditions, and the mineral assemblages will metamorphose to reflect the new facies conditions.
Thus in order for blueschist facies assemblages to be seen at the Earth's surface, the rock must be exhumed swiftly enough to prevent total thermal equilibration of the rocks which are under blueschist facies conditions with the typical geothermal gradient.
Blueschists and other high-pressure subduction zone rocks are thought to be exhumed rapidly by flow and/or faulting in accretionary wedges or the upper parts of subducted crust, or may return to the Earth's surface in part owing to buoyancy if the metabasaltic rocks are associated with low-density continental crust (marble, metapelite, and other rocks of continental margins).
It has been held that the absence of blueschist dating to before the Neoproterozoic Era indicates that currently exhumed rocks never reached blueschist facies at subduction zones before 1,000 million years ago. This assertion is arguably wrong because the earliest oceanic crust would have contained more magnesium than today's crust and, therefore, would have formed greenschist-like rocks at blueschist facies.
History and etymology
In Minoan Crete blueschist and greenschist were used to pave streets and courtyards between 1650 and 1600 BC. These rocks were likely quarried in Agia Pelagia on the north coast of central Crete.
In 1962, Edgar Bailey of the U.S. Geological Survey introduced the concept of "blueschist" into the subject of metamorphic geology. His carefully constructed definition established the pressure and temperature conditions which produce this type of metamorphism.
| Physical sciences | Metamorphic rocks | Earth science |
1238210 | https://en.wikipedia.org/wiki/Enol | Enol | In organic chemistry, enols are a type of functional group or intermediate in organic chemistry containing a group with the formula (R = many substituents). The term enol is an abbreviation of alkenol, a portmanteau deriving from "-ene"/"alkene" and the "-ol". Many kinds of enols are known.
Keto–enol tautomerism refers to a chemical equilibrium between a "keto" form (a carbonyl, named for the common ketone case) and an enol. The interconversion of the two forms involves the transfer of an alpha hydrogen atom and the reorganisation of bonding electrons. The keto and enol forms are tautomers of each other.
Enolization
Organic esters, ketones, and aldehydes with an α-hydrogen (a C−H bond adjacent to the carbonyl group) often form enols. The reaction involves migration of a proton (H+) from carbon to oxygen:
In the case of ketones, the conversion is called a keto-enol tautomerism, although this name is often more generally applied to all such tautomerizations. Usually the equilibrium constant is so small that the enol is undetectable spectroscopically.
In some compounds with two (or more) carbonyls, the enol form becomes dominant. The behavior of 2,4-pentanedione illustrates this effect:
Enols are derivatives of vinyl alcohol, with a C=C−OH connectivity. Deprotonation of organic carbonyls gives the enolate anion, which is a strong nucleophile. A classic example of favoring the keto form can be seen in the equilibrium between vinyl alcohol and acetaldehyde (K = [enol]/[keto] ≈ 3 × 10−7). In 1,3-diketones, such as acetylacetone (2,4-pentanedione), the enol form is favored.
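To put the quoted equilibrium constant in more concrete terms (a worked illustration assuming K ≈ 3 × 10⁻⁷ and T ≈ 298 K), the standard free-energy difference between the tautomers is

$$\Delta G^{\circ} = -RT\ln K \approx -(8.314\ \text{J K}^{-1}\,\text{mol}^{-1})(298\ \text{K})\,\ln\!\left(3\times10^{-7}\right) \approx +37\ \text{kJ mol}^{-1},$$

i.e. only about one molecule in three million exists as vinyl alcohol at equilibrium, which is why the enol of simple carbonyl compounds is usually undetectable spectroscopically.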
The acid-catalyzed conversion of an enol to the keto form proceeds by proton transfer from O to carbon. The process does not occur intramolecularly, but requires participation of solvent or other mediators.
Stereochemistry of ketonization
If R1 and R2 (note equation at top of page) are different substituents, there is a new stereocenter formed at the alpha position when an enol converts to its keto form. Depending on the nature of the three R groups, the resulting products in this situation would be diastereomers or enantiomers.
Enediols
Enediols are alkenes with a hydroxyl group on each carbon of the C=C double bond. Normally such compounds are disfavored components in equilibria with acyloins. One special case is catechol, where the C=C subunit is part of an aromatic ring. In some other cases however, enediols are stabilized by flanking carbonyl groups. These stabilized enediols are called reductones. Such species are important in glycochemistry, e.g., the Lobry de Bruyn-van Ekenstein transformation.
Ribulose-1,5-bisphosphate is a key substrate in the Calvin cycle of photosynthesis. In the Calvin cycle, the ribulose equilibrates with the enediol, which then binds carbon dioxide. The same enediol is also susceptible to attack by oxygen (O2) in the (undesirable) process called photorespiration.
Phenols
Phenols represent a kind of enol. For some phenols and related compounds, the keto tautomer plays an important role. Many of the reactions of resorcinol involve the keto tautomer, for example. Naphthalene-1,4-diol exists in observable equilibrium with the diketone tetrahydronaphthalene-1,4-dione.
Biochemistry
Keto–enol tautomerism is important in several areas of biochemistry.
The high phosphate-transfer potential of phosphoenolpyruvate (PEP) results from the fact that the phosphorylated compound is "trapped" in the less thermodynamically favorable enol form, whereas after dephosphorylation it can assume the keto form.
The enzyme enolase catalyzes the dehydration of 2-phosphoglyceric acid to the enol phosphate ester. Metabolism of PEP to pyruvic acid by pyruvate kinase (PK) generates adenosine triphosphate (ATP) via substrate-level phosphorylation.
Reactivity
Addition of electrophiles
The terminus of the double bond in enols is nucleophilic. Its reactions with electrophilic organic compounds are important in biochemistry as well as in synthetic organic chemistry. In the former area, the fixation of carbon dioxide involves addition of CO2 to an enol.
Deprotonation: enolates
Deprotonation of enolizable ketones, aldehydes, and esters gives enolates. Enolates can be trapped by the addition of electrophiles at oxygen. Silylation gives silyl enol ether. Acylation gives esters such as vinyl acetate.
Stable enols
In general, enols are less stable than their keto equivalents because of the favorability of the C=O double bond over C=C double bond. However, enols can be stabilized kinetically or thermodynamically.
Some enols are sufficiently stabilized kinetically so that they can be characterized.
Delocalization can stabilize the enol tautomer. Phenols, for example, are very stable enols. Another stabilizing factor in 1,3-dicarbonyls is intramolecular hydrogen bonding. Both of these factors influence the enol-dione equilibrium in acetylacetone.
| Physical sciences | Concepts: General | Chemistry |
1238513 | https://en.wikipedia.org/wiki/Plesiosaurus | Plesiosaurus | Plesiosaurus (Greek: (), near to + (), lizard) is a genus of extinct, large marine sauropterygian reptile that lived during the Early Jurassic. It is known by nearly complete skeletons from the Lias of England. It is distinguishable by its small head, long and slender neck, broad turtle-like body, a short tail, and two pairs of large, elongated paddles. It lends its name to the order Plesiosauria, of which it is an early, but fairly typical member. It contains only one species, the type, Plesiosaurus dolichodeirus. Other species once assigned to this genus, including P. brachypterygius, P. guilielmiimperatoris, and P. tournemirensis have been reassigned to new genera, such as Hydrorion, Seeleyosaurus and Occitanosaurus.
Discovery
The first complete skeleton of Plesiosaurus was discovered by early paleontologist and fossil hunter Mary Anning in Sinemurian (Early Jurassic)-age rocks of the lower Lias Group in December 1823. Additional fossils of Plesiosaurus were found in rocks of the Lias Group of Dorset for many years, "until the cessation of quarrying activities in the Lias Group, early in this [20th] century." Less complete remains had been used by Henry De la Beche and William Conybeare to name the genus two years earlier, in 1821, but despite being discovered first, Conybeare's remains were not the holotype; Anning's were.
Plesiosaurus was one of the first of the "antediluvian reptiles" to be discovered and excited great interest in 19th-century England. It was so-named ("near lizard") by William Conybeare and Henry De la Beche, to indicate that it was more like a normal reptile than Ichthyosaurus, which had been found in the same rock strata just a few years earlier. Plesiosaurus is the archetypical genus of Plesiosauria and the first to be described, hence lending its name to the order. Conybeare and De la Beche coined the name for scattered finds from the Bristol region, Dorset, and Lyme Regis in 1821. The type species of Plesiosaurus, P. dolichodeirus, was named and described by Conybeare in 1824 on the basis of Anning's original finds.
Description
Skull and dentition
Compared to other plesiosaur genera, Plesiosaurus has a small head. The skull is much narrower than long, reaching its greatest width just behind the eyes (the postorbital bar). The anterior portion is "bluntly triangular". In lateral view, the skull reaches its highest point at the rear of the skull table. "The external nostrils overlie the internal nares". They are not positioned at the tip of the snout, but farther back, nearer the eyes than the tip of the skull. Unlike the nostrils of Rhomaleosaurus, they do not appear to be adapted for underwater olfaction. The orbits (eye sockets) are roughly circular and are positioned about halfway along the length of the skull. They face up and to the sides. Just posterior to the orbits are the supratemporal fenestrae, which are about the same size as the orbits and also roughly circular. Between the four openings is the pineal foramen, and between the temporal fenestrae is a narrow sagittal ridge. As in other plesiosaurs, the pterygoids of the palate are fused to the basioccipital of the braincase, although the union is not as robust as in the pliosaurs Rhomaleosaurus and Pliosaurus. "The palatal bones are thin, but there is no suborbital fenestra."
The two rami of the lower jaw make a "V" shape with an angle of about 45°. The specialized region where they meet, the symphysis, is robust. The two rami are fused at the symphysis, making a pointed, shallow scoop-like shape.
The teeth of Plesiosaurus are "simple, needle-like cones" that are "slightly curved and circular in transverse section". They are sharply pointed with fine striations running from tip to base, and point forward (procumbent). This procumbency becomes more pronounced near the leading end of the skull, where they may be only 10–15° above horizontal. There are 20 to 25 teeth per upper jaw tooth row, and 24 per lower jaw tooth row. Up to four teeth of a lower jaw's tooth row are found in the symphyseal region.
Vertebral column
Plesiosaurus was a moderately sized plesiosaur that grew to in length. There are approximately 40 cervical vertebrae (neck vertebrae), with different specimens preserving 38 to 42 cervical vertebrae. Of the rest of the vertebral column, there are a handful (four or five in the holotype specimen) of "pectoral" vertebrae from the neck-torso transition, approximately 21 dorsal or back vertebrae, three or more sacral vertebrae, and at least 28 caudal vertebrae. Generally, the centra of the cervical vertebrae are relatively elongated, being slightly longer than tall. The width, however, is usually greater than or equal to the length. The articular surfaces of the cervical centra are "slightly concave and kidney-shaped, with rounded, slightly rugose edges." Small holes called foramina subcentralia are found on the ventral surface of the centra. Some of the dorsals have rugose articular edges, like the cervicals; this feature is typically absent from the caudals.
Ribs are found from the neck to the tail. Cervical ribs are hatchet-shaped and have two articular heads. Dorsal ribs are thick and have only one head. Sacral ribs are "short, robust, and blunt or knob-like on both ends." Caudal ribs have different morphologies depending on their location along the tail, with anterior examples being pointed and more distal examples being "broad and blunt." Plesiosaurus also has gastralia, also known as "belly ribs." Nine or more sets of gastralia are present between the shoulder and pelvis. Each set is composed of seven elements: a bone on the midline flanked by three lateral elements.
Limbs
The shoulder girdle is only partly known but appears to be typical for plesiosaurs. It includes fused clavicles at the anterior end, scapulae (shoulder blades), and large coracoids. The scapulae and coracoids both contribute to the glenoids (arm sockets). A pair of oval holes called pectoral fenestrae are found midway along the scapular/coracoid contacts. The forelimbs are elongate and relatively narrow compared to those of most plesiosaurs. The humerus (upper arm bone) has distinctive curvature, which appears to be a retained primitive feature among sauropterygians. Mature Plesiosaurus also have a distinctive groove along the ventral surface of the humerus. The forearm includes a flat, broad, crescent-shaped ulna and a "robust and pillar-like" radius. The wrist includes six bones. The hand paddle has five digits; the phalangeal formula is uncertain, but the count for one large individual, from "thumb" to fifth "finger", is 4-8-9-8-6.
The pelvis includes equant pubic bones, ischia, and blade-shaped ilia connecting the pelvis to the vertebral column. The acetabulum is formed by surfaces on the pubic bones and ischia. Similar to the pectoral girdle, there is a pair of holes between the ischia and pubic bones. The hindlimbs are long and narrow, and in adults, they are much smaller than the forelimbs. The thigh bones are straight. The lower hindlimb includes two roughly equal-sized bones, the robust tibia and the semilunate-shaped fibula. There are six bones in the ankle. The foot paddle includes five digits. Like the hand, the phalangeal formula is uncertain, but is at least 3-7-9-8-7 from innermost to outer "toe".
Classifications
Plesiosaurus has historically been a wastebasket taxon. This is due in part to few anatomical or taxonomic studies of the relevant fossils. Uncritical taxonomic work resulted in hundreds of species representing most of the world and most of the Mesozoic being assigned to Plesiosaurus. None of the younger Jurassic or Cretaceous species belong to Plesiosaurus. Review of the Early Jurassic species indicates that the only English species properly assigned to Plesiosaurus is P. dolichodeirus. Several other European Early Jurassic species have been assigned to new genera. P. brachypterygius, P. guilielmiimperatoris and P. tournemirensis, for example, were assigned to the new genera Hydrorion, Seeleyosaurus and Occitanosaurus.
The following cladogram follows an analysis by Benson et al., 2012, and shows the placement of Plesiosaurus within Plesiosauria.
Palaeobiology
Plesiosaurus fed mainly on clams and snails, and is thought to have eaten belemnites, fish and other prey as well. Its U-shaped jaw and sharp teeth would have acted like a fish trap. It propelled itself with its paddles, the tail being too short to be of much use. Its neck could have been used as a rudder when navigating during a chase. Plesiosaurus gave birth to live young in the water, like most sea snakes. The young might have lived in estuaries before moving out into the open ocean.
It has been postulated that the long neck of Plesiosaurus would have been a hindrance when trying to speed up, as any bend in the neck would create turbulence. If that is the case, then Plesiosaurus would have had to keep its neck straight to achieve good acceleration, something that would make hunting difficult. For this reason, these animals may actually have lain in wait for prey to come close rather than pursuing it.
Palaeoenvironment
Unequivocal specimens of Plesiosaurus dolichodeirus are limited to the Lyme Regis area of Dorset. It appears to be the most common species of plesiosaur in the Lias Group of England. Plesiosaurus is best represented from the "upper part of the Blue Lias, the 'Shales with Beef,' and the lower Black Ven Marls" the latter of which form part of the Charmouth Mudstone; using the Lias Group ammonite fossil zones, these rocks date to the early Sinemurian stage. Some other Plesiosaurus fossils are from later Sinemurian rocks. The oldest specimen may be a skull thought to come from late Rhaetian or early Hettangian rocks.
| Biology and health sciences | Prehistoric marine reptiles | Animals |
1238652 | https://en.wikipedia.org/wiki/Capuchin%20monkey | Capuchin monkey | The capuchin monkeys () are New World monkeys of the subfamily Cebinae. They are readily identified as the "organ grinder" monkey, and have been used in many movies and television shows. The range of capuchin monkeys includes some tropical forests in Central America and South America as far south as northern Argentina. In Central America, where they are called white-faced monkeys ("carablanca"), they usually occupy the wet lowland forests on the Caribbean coast of Costa Rica and Panama and deciduous dry forest on the Pacific coast.
Etymology
The word "capuchin" derives from the Order of Friars Minor Capuchin, who wear brown robes with large hoods. When Portuguese explorers reached the Americas in the 15th century, they found small monkeys whose coloring resembled these friars, especially when in their robes with hoods down, and named them capuchins. When the scientists described a specimen (thought to be a golden-bellied capuchin) they noted that: "his muzzle of a tanned color, ... with the lighter color around his eyes that melts into the white at the front, his cheeks ..., give him the looks that involuntarily reminds us of the appearance that historically in our country represents ignorance, laziness, and sensuality." The scientific name of the genus, Cebus comes from the Greek word kêbos, meaning a long-tailed monkey.
Classification
The species-level taxonomy of this subfamily remains highly controversial, and alternative treatments than the one listed below have been suggested.
In 2011, Jessica Lynch Alfaro et al. proposed that the robust capuchins (formerly the C. apella group) be placed in a separate genus, Sapajus, from the gracile capuchins (formerly the C. capucinus group) which retain the genus Cebus. Other primatologists, such as Paul Garber, have begun using this classification.
According to genetic studies led by Lynch Alfaro in 2011, the gracile and robust capuchins diverged approximately 6.2 million years ago. Lynch Alfaro suspects that the divergence was triggered by the creation of the Amazon River, which separated the monkeys in the Amazon north of the Amazon River, who then evolved into the gracile capuchins. Those in the Atlantic Forest south of the river evolved into the robust capuchins. Gracile capuchins have longer limbs relative to their body size than robust capuchins, and have rounder skulls, whereas robust capuchins have jaws better adapted for opening hard nuts. Robust capuchins have crests and the males have beards.
Genus Cebus
Colombian white-faced capuchin or Colombian white-headed capuchin, Cebus capucinus
Panamanian white-faced capuchin or Panamanian white-headed capuchin, Cebus imitator
Marañón white-fronted capuchin, Cebus yuracus
Shock-headed capuchin, Cebus cuscinus
Spix's white-fronted capuchin, Cebus unicolor
Humboldt's white-fronted capuchin, Cebus albifrons
Guianan weeper capuchin, Cebus olivaceus
Chestnut weeper capuchin, Cebus castaneus
Ka'apor capuchin, Cebus kaapori
Venezuelan brown capuchin, Cebus brunneus
Sierra de Perijá white-fronted capuchin, Cebus leucocephalus
Río Cesar white-fronted capuchin, Cebus cesare
Varied white-fronted capuchin, Cebus versicolor
Santa Marta white-fronted capuchin, Cebus malitiosus
Ecuadorian white-fronted capuchin, Cebus aequatorialis
Genus Sapajus
Black-capped, brown or tufted capuchin, Sapajus apella
Guiana brown capuchin, Sapajus apella apella
Sapajus apella fatuellus
Large-headed capuchin, Sapajus apella macrocephalus
Margarita Island capuchin, Sapajus apella margaritae
Sapajus apella peruanus
Sapajus apella tocantinus
Blond capuchin, Sapajus flavius*
Black-striped capuchin, Sapajus libidinosus
Sapajus libidinosus juruanus
Sapajus libidinosus libidinosus
Sapajus libidinosus pallidus
Sapajus libidinosus paraguayanus
Azaras's capuchin, Sapajus cay
Black capuchin, Sapajus nigritus
Sapajus nigritus cucullatus
Sapajus nigritus nigritus
Crested capuchin or robust tufted capuchin, Sapajus robustus
Golden-bellied capuchin, Sapajus xanthosternos
* Rediscovered species.
The oldest known crown platyrrhine and member of Cebidae, Panamacebus transitus, is estimated to have lived 21 million years ago. It is the earliest known fossil evidence of a mammal travelling between South and North America.
Physical characteristics
Capuchins are black, brown, buff or whitish, but their exact color and pattern depends on the species involved. Capuchin monkeys are usually dark brown with a cream/off-white coloring around their necks. They reach a length of , with tails that are just as long as the body. On average, they weigh from 1.4 to 4 kg (3 to 9 pounds) and live up to 25 years old in their natural habitats, and up to 35 in captivity.
Habitat and distribution
Capuchins prefer environments that give them access to shelter and easy food, such as low-lying forests, mountain forests, and rain forests. They are particularly abundant in Argentina, Brazil, Costa Rica, Honduras, Paraguay, and Peru. They use these areas for shelter at night and for access to food during the day. The canopy of the trees provides protection from threats above, and the capuchin monkeys' innate ability to climb trees with ease allows them to escape and hide from predators on the jungle floor. This arrangement is mutually beneficial for the capuchins and for the ecosystem they inhabit, because the monkeys spread seed leftovers and fecal matter across the forest floor, helping new plants grow and adding to the already abundant foliage that shelters the capuchin.
Behavior
Like most New World monkeys, capuchins are diurnal and arboreal. Capuchins are polygamous; females mate throughout the year but go through gestation only once every two years, between December and April, bearing young after a 160- to 180-day gestation. The young cling to their mother's chest until they are larger, then move to her back. Adult male capuchins rarely take part in caring for the young. Juveniles become fully mature within four years for females and eight years for males. In captivity, individuals have reached an age of 50 years, although natural life expectancy is only 15 to 25 years. Capuchins live in groups of 6–40 members, consisting of related females, their offspring, and several males.
Diet
The capuchin monkey feeds on a vast range of food types, and its diet is more varied than that of other monkeys in the family Cebidae. Capuchins are omnivores, consuming a variety of plant parts such as leaves, flowers, fruit, seeds, pith, woody tissue, sugarcane, bulbs, and exudates, as well as arthropods, molluscs, a variety of vertebrates, and even other primates. Recent findings of old stone tools in capuchin habitats suggest that the monkeys have switched from small nuts, such as cashews, to larger and harder nuts. Capuchins have also been observed to be particularly good at catching frogs. They are characterized as innovative and extreme foragers because of their ability to acquire sustenance from a wide range of unlikely foods, which may assure their survival in habitats with extreme food limitation. Capuchins living near water will also eat crabs and shellfish by cracking their shells with stones.
Social structure
Capuchin monkeys often live in large groups of 10 to 35 individuals within the forest, although they can easily adapt to places colonized by humans. Capuchins have discrete hierarchies that are distinguished by age and sex. Usually, a single male dominates the group and has primary rights to mate with its females. However, white-headed capuchin groups are led by both an alpha male and an alpha female. Each group covers a large territory, since members must search for the best areas to feed. These primates are territorial animals, distinctly marking a central area of their territory with urine and defending it against intruders, though outer areas may overlap. Group dynamics are stabilized through mutual grooming, and the monkeys communicate through various calls, which serve purposes such as making contact with one another, warning about a predator, and forming new groups. The social experience of capuchins directly influences how their attention develops within the group. They create new social behaviors within multiple groups that signify different types of interaction, including tests of friendship, displays against enemies, and infant and sexual intimacy. These social rituals test the strength of social bonds and depend on social learning.
Mating
Capuchin females often direct most of their proceptive and mating behavior towards the alpha male. However, when the female reaches the end of her proceptive period, she may sometimes mate with up to six different subordinate males in one day. Strictly targeting the alpha male does not happen every time, as some females have been observed to mate with three to four different males. When an alpha female and a lower-ranking female want to mate with an alpha male, the more dominant female will get rights to the male over the lower-ranking one.
Intelligence
The capuchin is considered to be the most intelligent New World monkey and is often kept in captivity. The tufted monkey is especially noted for its long-term tool usage, one of the few examples of primate tool use other than by apes including humans. Upon seeing macaws eating palm nuts, cracking them open with their beaks, this monkey will select a few of the ripest fruits, nip off the tip of the fruit and drink down the juice, then seemingly discard the rest of the fruit with the nut inside. When these discarded fruits have hardened and become slightly brittle, the capuchin will gather them up again and take them to a large flat boulder where they have previously gathered a few river stones from up to a mile away. They will then use these stones, some of them weighing as much as the monkeys, to crack open the fruit to get to the nut inside. Young capuchins will watch this process to learn from the older, more experienced adults but it takes them 8 years to master this. The learning behavior of capuchins has been demonstrated to be directly linked to a reward rather than curiosity.
In 2005, experiments were conducted on the ability of capuchins to use money. After several months of training, the monkeys began exhibiting behaviors considered to reflect an understanding of the concept of a medium of exchange that were previously believed to be restricted to humans (such as responding rationally to price shocks). They showed the same propensity to avoid perceived losses demonstrated by human subjects and investors.
During the mosquito season, they crush millipedes and rub the result on their backs. This acts as a natural insect repellent.
Self-awareness
When presented with a reflection, capuchin monkeys react in a way that indicates an intermediate state between seeing the mirror as another individual and recognizing the image as self.
Most animals react to seeing their reflections as if encountering another individual they do not recognize. An experiment with capuchins shows that they react to a reflection as a strange phenomenon, but not as if seeing a strange capuchin.
In the experiment, capuchins were presented with three different scenarios:
Seeing an unfamiliar, same-sex monkey on the other side of a clear barrier.
Seeing a familiar, same-sex monkey on the other side of a clear barrier.
A mirror showing a reflection of the monkey.
In scenario 1, females appeared anxious and avoided eye-contact, while males made threatening gestures. In scenario 2, there was little reaction by either males or females.
When presented with a reflection, females gazed into their own eyes and made friendly gestures, such as lip-smacking and swaying. Males made more eye contact than with strangers or familiar monkeys but reacted with signs of confusion or distress, such as squealing, curling up on the floor, or trying to escape from the test room.
Theory of mind
The question of whether capuchin monkeys have a theory of mind—whether they can understand what another creature may know or think—has been neither proven nor disproven conclusively. If confronted with a knower-guesser scenario, where one trainer can be observed to know the location of food and another trainer merely guesses the location of food, capuchin monkeys can learn to rely on the knower. This has, however, been repudiated as conclusive evidence for a theory of mind, as the monkeys may have learned to discriminate the knower and the guesser by other means. Until recently it was believed that non-human great apes did not possess a theory of mind either, although recent research indicates this may not be correct. Human children commonly develop a theory of mind around the ages of 3 and 4.
Threats
Capuchin monkeys are threatened by deforestation, the pet trade, and humans hunting for bushmeat. According to the IUCN Red List of Threatened Species, nearly all species are decreasing in population, with many facing threats of extinction. Since capuchins have a high reproductive rate and can adapt to different living environments, they can withstand forest loss better than some other species; however, habitat fragmentation is still a threat. Predators include jaguars, cougars, jaguarundis, coyotes, tayras, snakes, crocodiles, birds of prey, and humans. The main predator of the tufted capuchin is the harpy eagle, which has been seen bringing several capuchins back to its nest.
Relationship with humans
Easily recognized as the "organ grinder" or "greyhound jockey" monkeys, capuchins are sometimes kept as exotic pets. Sometimes they plunder fields and crops and are seen as troublesome by nearby human populations. In some regions, they have become rare due to the destruction of their habitat.
Capuchins have been used as service animals, and were once referred to as "nature's butlers" by the AARP. Helping Hands, a nonprofit organization, trained capuchin monkeys to assist quadriplegics as monkey helpers in a manner similar to mobility assistance dogs.
In 2010, the U.S. federal government revised its definition of service animal under the Americans with Disabilities Act (ADA). Non-human primates are no longer recognized as service animals under the ADA. The American Veterinary Medical Association does not support the use of nonhuman primates as assistance animals because of animal welfare concerns, the potential for serious injury to people, and risks that primates may transfer dangerous diseases to humans. In 2021, Helping Hands (the organization that provided helper monkeys to disabled persons) rebranded, changing its name to Envisioning Access and replaced the use of monkeys with a focus on new assistive technologies.
Capuchin monkeys are the most commonly featured monkeys in film and television, with notable examples including: Night at the Museum (and its sequels), Outbreak, Monkey Shines, Pirates of the Caribbean: The Curse of the Black Pearl (and its sequels), Zookeeper, George of the Jungle, and The Hangover Part II. Ross Geller (David Schwimmer) on the NBC sitcom Friends had a capuchin monkey named Marcel. Crystal the Monkey is a famous monkey actress.
| Biology and health sciences | Primates | null |
1239464 | https://en.wikipedia.org/wiki/Squirrel%20monkey | Squirrel monkey | Squirrel monkeys are New World monkeys of the genus Saimiri. Saimiri is the only genus in the subfamily Saimiriinae. The name of the genus is of Tupi origin (sai-mirím or çai-mbirín, with sai meaning 'monkey' and mirím meaning 'small') and was also used as an English name by early researchers.
Squirrel monkeys live in the tropical forests of Central and South America in the canopy layer. Most species have parapatric or allopatric ranges in the Amazon, while S. oerstedii is found disjunctly in Costa Rica and Panama.
There are two main groups of squirrel monkeys recognized. They are differentiated based on the shape of the white coloration above the eyes. In total there are five recognized species. Squirrel monkeys have short and close fur colored black at the shoulders, yellow or orange fur along the back and extremities, and white on the face.
Squirrel monkeys have determined breeding seasons which involve large fluctuations in hormones and there is evidence of sexual dimorphism between males and females.
Squirrel monkeys can only sweat through the palms of their hands and feet. This can have the effect of making their hands and feet feel damp to the touch. Squirrel monkeys must make use of other thermoregulation techniques such as behavioral changes and urine washing. These monkeys live in habitats of high temperatures and high humidity, making it essential for them to maintain proper osmoregulation if conditions pass certain thresholds. Color vision studies have also been performed on squirrel monkeys for the purpose of better understanding vision ailments in humans.
The common squirrel monkey is commonly captured for the pet trade and for medical research, but it is not threatened. Two squirrel monkey species are threatened: the Central American squirrel monkey and the black squirrel monkey are listed as vulnerable by the IUCN.
Evolutionary history
Taxonomy
Until 1984, all South American squirrel monkeys were considered part of a single widespread species, and many zoologists considered the Central American squirrel monkey to be a member of that single species as well. The two main groups currently recognized can be separated by the white above the eyes; it is shaped as a Gothic ("pointed") arch in the S. sciureus group, while it is shaped as a Roman ("rounded") arch in the S. boliviensis group. Mammal Species of the World (2005) recognized five species.
Subsequent taxonomic research has recognized Saimiri sciureus cassiquiarensis as a separate species Saimiri cassiquiarensis, and also recognized an additional species, Collins' squirrel monkey Saimiri collinsi that had previously been considered to be within S. sciureus. Some more recent taxonomies also recognize Saimiri sciureus macrodon as a separate species Saimiri macrodon, but others recognize S. macrodon to be a synonym of Saimiri cassiquiarensis.
Genus Saimiri
S. sciureus group
Central American squirrel monkey, Saimiri oerstedii
Black-crowned Central American squirrel monkey, Saimiri oerstedii oerstedii
Grey-crowned Central American squirrel monkey, Saimiri oerstedii citrinellus
Guianan squirrel monkey, Saimiri sciureus
Saimiri sciureus sciureus
Saimiri sciureus albigena
Ecuadorian squirrel monkey, Saimiri sciureus macrodon
Humboldt's squirrel monkey, Saimiri cassiquiarensis
Bare-eared squirrel monkey, Saimiri ustus
Collins' squirrel monkey, Saimiri collinsi
S. boliviensis group
Black-capped squirrel monkey, Saimiri boliviensis
Bolivian squirrel monkey, Saimiri boliviensis boliviensis
Peruvian squirrel monkey, Saimiri boliviensis peruviensis
Black squirrel monkey, Saimiri vanzolinii
Fossil species
†Saimiri annectens, Honda Group, Kay and Meldrum 1997
†Saimiri fieldsi, Honda Group, Stirton 1951
Evolution
The crown group of the extant squirrel monkeys appears to have diverged around 1.5 million years ago. Diversification of squirrel monkey species appears to have occurred during the Pleistocene Epoch, likely due to climatic changes associated with interglacial periods in South America at the time. The origin of squirrel monkeys in Central America is unclear, but a possible hypothesis is human transport. More genetic work needs to be done on the subject to reveal a true answer. S. boliviensis appears to be the first diverging species in the group. S. oerstedii and S. s. sciureus are sister species. S. s. macrodon is the sister species to the S. oerstedii / S. s. sciureus clade.
Description
A squirrel monkey's fur is short and close, coloured black at the shoulders and yellowish orange on its back and extremities. The upper parts of their heads are hairy. This black-and-white face gives them the name "death's head monkey" in several Germanic languages (e.g., German, Swedish, and Dutch) and in Slovenian.
Squirrel monkeys grow from long, plus a tail. Male squirrel monkeys weigh . Females weigh . Both males and females are equipped with long and hairy tails, flat nails, and pointed claws.
Female squirrel monkeys have pseudo-penises, which they use to display dominance over smaller monkeys, in much the same way that the male squirrel monkeys display their dominance.
Behaviour, ecology, and physiology
Like most of their New World monkey relatives, squirrel monkeys are diurnal and arboreal. Unlike other New World monkeys, their tail is not used for climbing but as a kind of "balancing pole" and also as a tool. Their movements in the branches can be very rapid.
Squirrel monkeys live together in multi-male/multi-female groups with up to 500 members. These large groups, however, can occasionally break into smaller troupes. The groups have a number of vocal calls, including warning sounds to protect the group from large falcons, which are a natural threat. Their small body size also makes them susceptible to predators such as snakes and felids. For marking territory, squirrel monkeys rub their tail and their skin with their own urine.
Squirrel monkeys are omnivores, eating primarily fruits and insects. Occasionally, they also eat seeds, leaves, flowers, buds, nuts, and eggs.
Reproduction
Squirrel monkey mating is subject to seasonal influences. Squirrel monkeys reach sexual maturity at 2–2.5 years of age for females and 3.5–4 years for males. Females give birth to young during the rainy season, after a 150- to 170-day gestation. Only the mothers care for the young. Saimiri oerstedii are weaned by 4 months of age, while S. boliviensis are not fully weaned until 18 months old. Squirrel monkeys live to about 15 years old in the wild, and over 20 years in captivity. Menopause in females probably occurs in the mid-teens. Studies show that Saimiri collinsi time the weaning of their young to coincide with the period of maximum fruit availability in the environment. This reduces the energetic struggles that newly weaned juveniles face when transitioning from a milk diet, where they are dependent on their mother for food, to a more diverse diet where they have to forage for food. There is evidence that squirrel monkeys show sexual dimorphism during the breeding season. In the months leading up to and during breeding, sexually mature adult males have been recorded to increase in size by significant amounts relative to females. These size changes are caused by seasonal fluctuations in androgen hormones synthesized in the hypothalamus, pituitary, adrenal and gonadal axes. The fluctuations include increases in the concentrations of testosterone, androstenedione, and dehydroepiandrosterone in sexually mature males during the breeding season, peaking in January. Following the breeding season, these androgen concentrations drop. The evolutionary reasoning for these size changes in sexually mature males is suggested to be both intra-sexual selection among males and female mate choice, as larger males are more likely to be preferred by females and partake in more copulations. There is no clear evidence yet as to why females choose larger males, but a leading hypothesis is that larger males are more likely to have better vigilance for their young.
Thermoregulation
Squirrel monkeys can only sweat through the palms of their hands and the soles of their feet. Sweating in these areas alone does not provide enough cooling for the monkeys to survive in the high temperature environments of South and Central America, requiring them to use other methods to thermoregulate. They will use behavioral tactics such as seeking out shaded areas sheltered from the sun and also make use of postural changes to better dissipate heat from their body. They will also make use of a technique to maximize evaporative cooling known as urine washing. The monkeys will urinate on their hands and rub the urine over the soles of their feet. The urine is then evaporated off the body in a cooling process. Studies have shown this behavior to be maximized during times of high temperature, highlighting its importance as a thermoregulatory behavior.
Osmoregulation
Squirrel monkeys are subject to both high temperatures and high humidity in their natural habitat. The humidity can range from 70% saturation in the 'dry' season up to 90% in the 'wet' season. Squirrel monkeys are able to tolerate up to 75% humidity with small adjustments in behavior and physiology that increase in significance as the humidity goes up. When reaching approximately 95% humidity, the monkeys have more drastic changes in osmoregulation in order to maintain homeostasis. As evaporative water loss decreases at these high levels of saturation, the monkeys will take in less water and create a more concentrated urine in order to maintain proper ion and water levels inside the body.
Cooperation studies
Cooperation is largely evident in human primates. Squirrel monkeys do not often display cooperation in the wild, while many other nonhuman primates do. Studies have been done to suggest that female squirrel monkeys show disadvantageous inequity aversion as it pertains to food rewards. However, the same could not be said for male squirrel monkeys. More studies need to be done on squirrel monkey behavior to determine why squirrel monkeys rarely show cooperation, and whether disadvantageous inequity aversion is a relevant factor.
Colour vision
Colour vision in squirrel monkeys has been studied extensively as a model for human colour vision ailments. In humans, two genes for colour vision are found on the X chromosome. Typically, one gene (OPN1LW) produces a pigment that is most sensitive to the 564 nm wavelength, while the other gene (OPN1MW) produces a pigment most sensitive to 534 nm. In squirrel monkeys, there is only one gene on the X chromosome, but it exists in three varieties: one is most sensitive to 538 nm, one to 551 nm, and one to 561 nm. Since males have only one X chromosome, they are dichromatic, although with different sensitivities. Females have two X chromosomes, so some of them can have copies of two different alleles. The three alleles seem to be equally common, leading to one-third of females being dichromatic, while two-thirds are trichromatic. Recently, gene therapy has given the human OPN1LW gene to adult male squirrel monkeys, producing behaviour consistent with trichromatic colour vision.
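The stated one-third/two-thirds split can be checked with a short calculation. Under the simplifying assumption (not stated above) that the three alleles are equally common and that a female's two X chromosomes carry independently drawn alleles, a female is dichromatic only when both chromosomes carry the same allele:

\[
P(\text{dichromatic}) = 3 \times \left(\tfrac{1}{3}\right)^{2} = \tfrac{1}{3},
\qquad
P(\text{trichromatic}) = 1 - \tfrac{1}{3} = \tfrac{2}{3}.
\]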
Gallery
| Biology and health sciences | New World monkeys | Animals |
18545292 | https://en.wikipedia.org/wiki/GitHub | GitHub | GitHub is a proprietary developer platform that allows developers to create, store, manage, and share their code. It uses Git to provide distributed version control and GitHub itself provides access control, bug tracking, software feature requests, task management, continuous integration, and wikis for every project. Headquartered in California, it has been a subsidiary of Microsoft since 2018.
It is commonly used to host open source software development projects. GitHub reported having over 100 million developers and more than 420 million repositories, including at least 28 million public repositories. It is the world's largest source code host. Over five billion developer contributions were made to more than 500 million open source projects in 2024.
About
Founding
The development of the GitHub platform began on October 19, 2007. The site was launched in April 2008 by Tom Preston-Werner, Chris Wanstrath, P. J. Hyett and Scott Chacon after it had been available for a few months as a beta release. Its name was chosen as a compound of Git and hub.
Structure of the organization
GitHub, Inc. was originally a flat organization with no middle managers, instead relying on self-management. Employees could choose to work on projects that interested them (open allocation), but the chief executive set salaries.
In 2014, the company added a layer of middle management in response to serious harassment allegations against its senior leadership. As a result of the scandal, Tom Preston-Werner resigned from his position as CEO. Co-founder and Product lead, Chris Wanstrath, became CEO. Julio Avalos, then General Counsel and Administrative Officer, assumed control over GitHub's business operations and day-to-day management.
Finance
GitHub was a bootstrapped start-up business, which in its first years provided enough revenue to be funded solely by its three founders and start taking on employees.
In July 2012, four years after the company was founded, Andreessen Horowitz invested $100 million in venture capital with a $750 million valuation.
In July 2015 GitHub raised another $250 million of venture capital in a series B round. The lead investor was Sequoia Capital, and other investors were Andreessen Horowitz, Thrive Capital, IVP (Institutional Venture Partners) and other venture capital funds. The company was then valued at approximately $2 billion.
GitHub was estimated to generate $1 billion in revenue.
History
The GitHub service was developed by Chris Wanstrath, P. J. Hyett, Tom Preston-Werner, and Scott Chacon using Ruby on Rails, and started in February 2008. The company, GitHub, Inc., was formed in 2007 and is located in San Francisco.
On February 24, 2009, GitHub announced that within the first year of being online, GitHub had accumulated over 46,000 public repositories, 17,000 of which were formed in the previous month. At that time, about 6,200 repositories had been forked at least once, and 4,600 had been merged.
That same year, the site was used by over 100,000 users, according to GitHub, and had grown to host 90,000 unique public repositories, 12,000 having been forked at least once, for a total of 135,000 repositories.
In 2010, GitHub was hosting 1 million repositories. A year later, this number doubled. ReadWriteWeb reported that GitHub had surpassed SourceForge and Google Code in total number of commits for the period of January to May 2011. On January 16, 2013, GitHub passed the 3 million users mark and was then hosting more than 5 million repositories. By the end of the year, the number of repositories was twice as great, reaching 10 million repositories.
In 2015, GitHub opened an office in Japan, its first outside of the U.S.
On February 28, 2018, GitHub fell victim to the third-largest distributed denial-of-service (DDoS) attack in history, with incoming traffic reaching a peak of about 1.35 terabits per second.
On June 19, 2018, GitHub expanded its GitHub Education by offering free education bundles to all schools.
Acquisition by Microsoft
From 2012, Microsoft became a significant user of GitHub, using it to host open-source projects and development tools such as .NET Core, Chakra Core, MSBuild, PowerShell, PowerToys, Visual Studio Code, Windows Calculator, Windows Terminal and the bulk of its product documentation (now to be found on Microsoft Docs).
On June 4, 2018, Microsoft announced its intent to acquire GitHub for US$7.5 billion. The deal closed on October 26, 2018. GitHub continued to operate independently as a community, platform and business. Under Microsoft, the service was led by Xamarin's Nat Friedman, reporting to Scott Guthrie, executive vice president of Microsoft Cloud and AI. Nat Friedman resigned November 3, 2021; he was replaced by Thomas Dohmke.
Developers including JavaScript trainer and author Kyle Simpson and Open-Xchange CEO Rafael Laguna expressed concerns over Microsoft's purchase, citing uneasiness over Microsoft's handling of previous acquisitions, such as Nokia's mobile business and Skype.
This acquisition was in line with Microsoft's business strategy under CEO Satya Nadella, which has seen a larger focus on cloud computing services, alongside the development of and contributions to open-source software. Harvard Business Review argued that Microsoft was intending to acquire GitHub to get access to its user base, so it can be used as a loss leader to encourage the use of its other development products and services.
Concerns over the sale bolstered interest in competitors: Bitbucket (owned by Atlassian), GitLab and SourceForge (owned by BIZX, LLC) reported that they had seen spikes in new users intending to migrate projects from GitHub to their respective services.
In September 2019, GitHub acquired Semmle, a code analysis tool. In February 2020, GitHub launched in India under the name GitHub India Private Limited. In March 2020, GitHub announced that it was acquiring npm, a JavaScript packaging vendor, for an undisclosed sum of money. The deal was closed on April 15, 2020.
In early July 2020, the GitHub Archive Program was established to archive its open-source code in perpetuity.
Mascot
GitHub's mascot is an anthropomorphized "octocat" with five octopus-like arms. The character was created by graphic designer Simon Oxley as clip art to sell on iStock, a website that enables designers to market royalty-free digital images. The illustration GitHub chose was a character that Oxley had named Octopuss. Since GitHub wanted Octopuss for their logo (a use that the iStock license disallows), they negotiated with Oxley to buy exclusive rights to the image.
GitHub renamed Octopuss to Octocat, and trademarked the character along with the new name. Later, GitHub hired illustrator Cameron McEfee to adapt Octocat for different purposes on the website and promotional materials; McEfee and various GitHub users have since created hundreds of variations of the character, which are available on The Octodex.
Services
Projects on GitHub can be accessed and managed using the standard Git command-line interface; all standard Git commands work with it. GitHub also allows users to browse public repositories on the site. Multiple desktop clients and Git plugins are also available. In addition, the site provides social networking-like functions such as feeds, followers, wikis (using wiki software called Gollum), and a social network graph to display how developers work on their versions ("forks") of a repository and what fork (and branch within that fork) is newest.
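As a concrete illustration of the command-line access described above, the following minimal Python sketch (not part of the original article) drives the ordinary Git command line against a public GitHub repository; the repository URL and local directory name are only examples, and git must be installed locally:

```python
# Minimal sketch: working with a GitHub-hosted project through the standard
# Git command line. Requires git and network access; the repository used
# here is only an example of a public repository.
import subprocess

REPO_URL = "https://github.com/octocat/Hello-World.git"

def git(*args, cwd=None):
    """Run a git command and return its captured standard output."""
    return subprocess.run(
        ["git", *args], cwd=cwd, check=True, capture_output=True, text=True
    ).stdout

git("clone", REPO_URL, "hello-world")                     # anyone can clone a public repo
print(git("log", "--oneline", "-5", cwd="hello-world"))   # the five most recent commits
print(git("remote", "-v", cwd="hello-world"))             # shows the GitHub remote
```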
Anyone can browse and download public repositories, but only registered users can contribute content to repositories. With a registered user account, users can have discussions, manage repositories, submit contributions to others' repositories, and review changes to code. GitHub began offering limited private repositories at no cost in January 2019 (limited to three contributors per project). Previously, only public repositories were free. On April 14, 2020, GitHub made "all of the core GitHub features" free for everyone, including "private repositories with unlimited collaborators."
The fundamental software that underpins GitHub is Git itself, written by Linus Torvalds, creator of Linux. The additional software that provides the GitHub user interface was written using Ruby on Rails and Erlang by GitHub, Inc. developers Wanstrath, Hyett, and Preston-Werner.
Scope
The primary purpose of GitHub is to facilitate the version control and issue tracking aspects of software development. Labels, milestones, responsibility assignment, and a search engine are available for issue tracking. For version control, Git (and, by extension, GitHub) allows pull requests to propose changes to the source code. Users who can review the proposed changes can see a diff of the requested changes and approve them. In Git terminology, each recorded set of changes is a "commit"; merging a pull request incorporates its commits into the target branch. A history of all commits is kept and can be viewed at a later time.
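To make the pull-request workflow concrete, here is a minimal, hypothetical Python sketch using GitHub's public REST API endpoint for opening pull requests (POST /repos/{owner}/{repo}/pulls); the owner, repository, branch names, and token below are placeholders, not values taken from the article:

```python
# Minimal sketch of proposing changes via a pull request through the GitHub
# REST API. All identifiers and the token are placeholders for illustration.
import json
import urllib.request

TOKEN = "ghp_your_personal_access_token"       # placeholder credential
OWNER, REPO = "example-owner", "example-repo"  # placeholder repository

payload = {
    "title": "Fix typo in README",
    "head": "feature-branch",   # branch containing the proposed commits
    "base": "main",             # branch the changes should be merged into
    "body": "Small documentation fix.",
}

request = urllib.request.Request(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
        "Content-Type": "application/json",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    pull_request = json.load(response)
    # Reviewers can now view the diff on GitHub and approve or merge it.
    print(pull_request["html_url"])
```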
In addition, GitHub supports the following formats and features:
Documentation, including automatically rendered README files in a variety of Markdown-like file formats
Wikis, with some repositories consisting solely of wiki content. These include curated lists of recommended software which have become known as awesome lists.
GitHub Actions, which allows building continuous integration and continuous deployment pipelines for testing, releasing and deploying software without the use of third-party websites/platforms
GitHub Codespaces, an online IDE providing users with a virtual machine intended to be a work environment to build and test code
Graphs: pulse, contributors, commits, code frequency, punch card, network, members
Integrations Directory
Email notifications
Discussions
Option to subscribe someone to notifications by @ mentioning them.
Emojis
Nested task-lists within files
Visualization of geospatial data
3D render files can be previewed using an integrated STL file viewer that displays the files on a "3D canvas." The viewer is powered by WebGL and Three.js.
Support for previewing many common image formats, including Photoshop's PSD files
PDF document viewer
Security Alerts of known Common Vulnerabilities and Exposures in different packages
GitHub's Terms of Service do not require public software projects hosted on GitHub to meet the Open Source Definition. The terms of service state, "By setting your repositories to be viewed publicly, you agree to allow others to view and fork your repositories."
GitHub Enterprise
GitHub Enterprise is a self-managed version of GitHub with similar functionality. It can be run on an organization's hardware or on a cloud provider. In November 2020, source code for GitHub Enterprise Server was leaked online in an apparent protest against the DMCA takedown of youtube-dl. According to GitHub, the source code came from GitHub accidentally sharing the code with Enterprise customers themselves, not from an attack on GitHub servers.
GitHub Pages
In 2008, GitHub introduced GitHub Pages, a static web hosting service for blogs, project documentation, and books. All GitHub Pages content is stored in a Git repository as files served to visitors verbatim or in Markdown format. GitHub is integrated with Jekyll static website and blog generator and GitHub continuous integration pipelines. Each time the content source is updated, Jekyll regenerates the website and automatically serves it via GitHub Pages infrastructure.
Like the rest of GitHub, it includes free and paid service tiers. Websites generated through this service are hosted either as subdomains of the github.io domain or can be connected to custom domains bought through a third-party domain name registrar. GitHub Pages supports HTTPS encryption.
Gist
GitHub also operates a pastebin-style site called Gist, which is for code snippets, as opposed to GitHub proper, which is usually used for larger projects. Tom Preston-Werner débuted the feature at a Ruby conference in 2008.
Gist builds on the traditional simple concept of a pastebin by adding version control for code snippets, easy forking, and TLS encryption for private pastes. Because each "gist" is its own Git repository, multiple code snippets can be contained in a single page, and they can be pushed and pulled using Git.
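A minimal sketch of that point, using a placeholder gist ID (every value below is illustrative): because each gist is itself a Git repository, it can be cloned and inspected with ordinary Git commands.

```python
# Minimal sketch: cloning a gist like any other Git repository.
# The ID below is a placeholder hexadecimal gist ID, not a real gist.
import subprocess

GIST_ID = "0123456789abcdef0123456789abcdef"

subprocess.run(
    ["git", "clone", f"https://gist.github.com/{GIST_ID}.git", "example-gist"],
    check=True,
)
# The local copy now behaves like any other repository: files can be edited,
# committed, and pushed back (pushing requires ownership of the gist).
subprocess.run(["git", "-C", "example-gist", "log", "--oneline"], check=True)
```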
Unregistered users could upload Gists until March 19, 2018, when uploading Gists was restricted to logged-in users, reportedly to mitigate spamming on the page of recent Gists.
Gists' URLs use hexadecimal IDs, and edits to Gists are recorded in a revision history, which can show the text difference of thirty revisions per page with an option between a "split" and "unified" view. Like repositories, Gists can be forked, "starred", i.e., publicly bookmarked, and commented on. The count of revisions, stars, and forks is indicated on the gist page.
Education program
GitHub launched a new program called the GitHub Student Developer Pack to give students free access to more than a dozen popular development tools and services. GitHub partnered with Bitnami, Crowdflower, DigitalOcean, DNSimple, HackHands, Namecheap, Orchestrate, Screenhero, SendGrid, Stripe, Travis CI, and Unreal Engine to launch the program.
In 2016, GitHub announced the launch of the GitHub Campus Experts program to train and encourage students to grow technology communities at their universities. The Campus Experts program is open to university students 18 years and older worldwide. GitHub Campus Experts are one of the primary ways that GitHub funds student-oriented events and communities; Campus Experts are given access to training, funding, and additional resources to run events and grow their communities. To become a Campus Expert, applicants must complete an online training course with multiple modules to develop community leadership skills.
GitHub Marketplace service
GitHub also provides some software as a service (SaaS) integrations for adding extra features to projects. Those services include:
Waffle.io: project management for software teams, which allows users to automatically see pull requests, automated builds, reviews, and deployments across repositories.
Rollbar: provides real-time debugging tools and full-stack exception reporting.
Codebeat: automated code analysis for web and mobile developers.
Travis CI: continuous integration service.
GitLocalize: provides utilities to manage project translation and internationalisation.
GitHub Sponsors
GitHub Sponsors allows users to make monthly monetary donations to projects hosted on GitHub. The public beta was announced on May 23, 2019, and the project accepts waitlist registrations. The Verge said that GitHub Sponsors "works exactly like Patreon" because "developers can offer various funding tiers that come with different perks, and they'll receive recurring payments from supporters who want to access them and encourage their work" except with "zero fees to use the program." Furthermore, GitHub offers incentives for early adopters during the first year: it pledges to cover payment processing costs and match sponsorship payments up to $5,000 per developer. In addition, users can still use similar services like Patreon and Open Collective and link to their websites.
GitHub Archive Program
In July 2020, GitHub stored a February archive of the site in an abandoned mountain mine in Svalbard, Norway, part of the Arctic World Archive and not far from the Svalbard Global Seed Vault. The archive contained the code of all active public repositories, as well as that of dormant but significant public repositories. The 21TB of data was stored on piqlFilm archival film reels as matrix (2D) barcode (Boxing barcode), and is expected to last 500–1,000 years.
The GitHub Archive Program is also working with partners on Project Silica, in an attempt to store all public repositories for 10,000 years. It aims to write archives into the molecular structure of quartz glass platters, using a high-precision petahertz pulse laser, i.e. one that pulses a quadrillion (1,000,000,000,000,000) times per second.
Controversies
Harassment allegations
In March 2014, GitHub programmer Julie Ann Horvath alleged that founder and CEO Tom Preston-Werner and his wife, Theresa, engaged in a pattern of harassment against her that led to her leaving the company. In April 2014, GitHub released a statement denying Horvath's allegations. However, following an internal investigation, GitHub confirmed the claims. GitHub's CEO Chris Wanstrath wrote on the company blog, "The investigation found Tom Preston-Werner in his capacity as GitHub's CEO acted inappropriately, including confrontational conduct, disregard of workplace complaints, insensitivity to the impact of his spouse's presence in the workplace, and failure to enforce an agreement that his spouse should not work in the office." Preston-Werner subsequently resigned from the company. The firm then announced it would implement new initiatives and trainings "to make sure employee concerns and conflicts are taken seriously and dealt with appropriately."
Sanctions
On July 25, 2019, a developer based in Iran wrote on Medium that GitHub had blocked his private repositories and prohibited access to GitHub pages. Soon after, GitHub confirmed that it was now blocking developers in Iran, Crimea, Cuba, North Korea, and Syria from accessing private repositories. However, GitHub reopened access to GitHub Pages days later, for public repositories regardless of location. It was also revealed that using GitHub while visiting sanctioned countries could result in similar actions occurring on a user's account. GitHub responded to complaints and the media through a spokesperson, saying:
GitHub is subject to US trade control laws, and is committed to full compliance with applicable law. At the same time, GitHub's vision is to be the global platform for developer collaboration, no matter where developers reside. As a result, we take seriously our responsibility to examine government mandates thoroughly to be certain that users and customers are not impacted beyond what is required by law. This includes keeping public repositories services, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions.
Developers who feel that they should not have restrictions can appeal for the removal of said restrictions, including those who only travel to, and do not reside in, those countries. GitHub has forbidden the use of VPNs and IP proxies to access the site from sanctioned countries, as purchase history and IP addresses are how they flag users, among other sources.
Censorship
On December 4, 2014, Russia blacklisted GitHub.com because GitHub initially refused to take down user-posted suicide manuals. After a day, Russia withdrew its block, and GitHub began blocking specific content and pages in Russia. On December 31, 2014, India blocked GitHub.com along with 31 other websites over pro-ISIS content posted by users; the block was lifted three days later. On October 8, 2016, Turkey blocked GitHub to prevent email leakage of a hacked account belonging to the country's energy minister.
On March 26, 2015, a large-scale DDoS attack was launched against GitHub.com that lasted for just under five days. The attack, which appeared to originate from China, primarily targeted GitHub-hosted user content describing methods of circumventing Internet censorship.
On April 19, 2020, Chinese police detained Chen Mei and Cai Wei (volunteers for Terminus 2049, a project hosted on GitHub), and accused them of "picking quarrels and provoking trouble." Cai and Chen archived news articles, interviews, and other materials published on Chinese media outlets and social media platforms that have been removed by censors in China.
ICE contract
GitHub has a $200,000 contract with U.S. Immigration and Customs Enforcement (ICE) for the use of its on-site product GitHub Enterprise Server. This contract was renewed in 2019, despite internal opposition from many GitHub employees. In an email sent to employees, later posted to the GitHub blog on October 9, 2019, CEO Nat Friedman stated, "The revenue from the purchase is less than $200,000 and not financially material for our company." He announced that GitHub had pledged to donate $500,000 to "nonprofit groups supporting immigrant communities targeted by the current administration." In response, at least 150 GitHub employees signed an open letter re-stating their opposition to the contract, and denouncing alleged human rights abuses by ICE. Five workers resigned over the contract.
The ICE contract dispute came into focus again in June 2020 due to the company's decision to abandon "master/slave" branch terminology, spurred by the George Floyd protests and Black Lives Matter movement. Detractors of GitHub describe the branch renaming to be a form of performative activism and have urged GitHub to cancel their ICE contract instead. An open letter from members of the open source community was shared on GitHub in December 2019, demanding that the company drop its contract with ICE and provide more transparency into how they conduct business and partnerships. The letter has been signed by more than 700 people.
Capitol riot comments and employee firing
In January 2021, GitHub fired one of its employees after he expressed concern for colleagues following the January 6 United States Capitol attack, calling some of the rioters "Nazis". After an investigation, GitHub's COO said there were "significant errors of judgment and procedure" with the company's decision to fire the employee. As a result of the investigation, GitHub reached out to the employee, and the company's head of human resources resigned.
Twitter source code leak
In 2023, parts of the source code of the social media platform Twitter were uploaded onto GitHub. The leak was first reported by the New York Times and was part of a legal filing Twitter submitted to the United States District Court for the Northern District of California. Twitter claimed that the postings infringed on copyrights owned by the company, and asked the court for information to identify the user who posted the source code to GitHub under the username "FreeSpeechEnthusiast".
Reception
Linus Torvalds, the original developer of the Git software, has highly praised GitHub by stating "The hosting of github is excellent. They've done a good job on that. I think GitHub should be commended enormously for making open source project hosting so easy." However, he also sharply criticized the implementation of GitHub's merging interface, stating that "Git comes with a nice pull-request generation module, but GitHub instead decided to replace it with their own totally inferior version. As a result, I consider GitHub useless for these kinds of things. It's fine for hosting, but the pull requests and the online commit editing, are just pure garbage."
| Technology | Utility | null |
3518068 | https://en.wikipedia.org/wiki/Complex%20post-traumatic%20stress%20disorder | Complex post-traumatic stress disorder | Complex post-traumatic stress disorder (CPTSD, cPTSD, or hyphenated C-PTSD) is a stress-related mental and behavioral disorder generally occurring in response to complex traumas (i.e., commonly prolonged or repetitive exposures to a series of traumatic events, from which one sees little or no chance to escape).
In the ICD-11 classification, C-PTSD is a category of post-traumatic stress disorder (PTSD) with three additional clusters of significant symptoms: emotional dysregulation, negative self-beliefs (e.g., shame, guilt, failure for wrong reasons), and interpersonal difficulties. C-PTSD's symptoms include prolonged feelings of terror, worthlessness, helplessness, distortions in identity or sense of self, and hypervigilance. Although early descriptions of CPTSD specified the type of trauma (i.e. prolonged, repetitive), in the ICD-11 there is no requirement of a specific trauma type.
Classifications
The World Health Organization (WHO)'s International Statistical Classification of Diseases has included C-PTSD since its eleventh revision (ICD-11), which was published in 2018 and came into effect in 2022. The previous edition (ICD-10) proposed a diagnosis of Enduring Personality Change after Catastrophic Event (EPCACE), which was an ancestor of C-PTSD. Healthdirect Australia (HDA) and the British National Health Service (NHS) have also acknowledged C-PTSD as a mental disorder. However, the American Psychiatric Association (APA) has not included C-PTSD in the Diagnostic and Statistical Manual of Mental Disorders. The related disorder, Disorders of Extreme Stress – not otherwise specified (DESNOS), was studied for inclusion in the DSM-IV but was not ultimately included. Instead, the symptoms of PTSD were expanded in the DSM-IV and then DSM-5 to better capture the range of symptoms that can follow from all types of trauma.
Signs and symptoms
Children and adolescents
The diagnosis of PTSD was originally given to adults who had suffered because of a trauma (e.g., during a war, rape). However, the situation for many children is quite different. Children can suffer chronic trauma such as maltreatment, family violence, dysfunction, or a disruption in attachment to their primary caregiver. In many cases, it is the child's caregiver who causes the trauma. The diagnosis of PTSD does not take into account how the developmental stages of children may affect their symptoms and how trauma can affect a child's development.
The term developmental trauma disorder (DTD) has been proposed as the childhood equivalent of C-PTSD. This developmental form of trauma places children at risk for developing psychiatric and medical disorders. Bessel van der Kolk explains DTD as numerous encounters with interpersonal trauma such as physical assault, sexual assault, violence or death. It can also be brought on by subjective events such as abandonment, betrayal, defeat or shame.
Repeated traumatization during childhood leads to symptoms that differ from those described for PTSD. Cook and others describe symptoms and behavioral characteristics in seven domains:
Attachment: problems with relationship boundaries, lack of trust, social isolation, difficulty perceiving and responding to others' emotional states
Biomedical symptoms: sensory-motor developmental dysfunction, sensory-integration difficulties; increased medical problems or even somatization
Affect or emotional regulation: poor affect regulation, difficulty identifying and expressing emotions and internal states, and difficulties communicating needs, wants, and wishes
Elements of dissociation: amnesia, depersonalization, discrete states of consciousness with discrete memories, affect, and functioning, and impaired memory for state-based events
Behavioral control: problems with impulse control, aggression, pathological self-soothing, and sleep problems
Cognition: difficulty regulating attention; problems with a variety of executive functions such as planning, judgment, initiation, use of materials, and self-monitoring; difficulty processing new information; difficulty focusing and completing tasks; poor object constancy; problems with cause-effect thinking; and language developmental problems such as a gap between receptive and expressive communication abilities.
Self-concept: fragmented and/or disconnected autobiographical narrative, disturbed body image, low self-esteem, excessive shame, and negative internal working models of self.
Adults
Adults with C-PTSD have sometimes experienced prolonged interpersonal traumatization beginning in childhood, rather than, or as well as, in adulthood. These early injuries interrupt the development of a robust sense of self and of others. Because physical and emotional pain or neglect was often inflicted by attachment figures such as caregivers or other siblings, these individuals may develop a sense that they are fundamentally flawed and that others cannot be relied upon.
Earlier descriptions of CPTSD suggested six clusters of symptoms:
Alterations in regulation of affect and impulses
Alterations in attention or consciousness
Alterations in self-perception
Alterations in relations with others
Somatization
Alterations in systems of meaning
Experiences in these areas may include:
Changes in emotional regulation, including experiences such as persistent dysphoria, chronic suicidal preoccupation, self-injury, explosive or extremely inhibited anger (may alternate), and compulsive or extremely inhibited sexuality (may alternate).
Variations in consciousness, such as amnesia or improved recall for traumatic events, episodes of dissociation, depersonalization/derealization, and reliving experiences (either in the form of intrusive PTSD symptoms or in ruminative preoccupation).
Changes in self-perception, such as a sense of helplessness or paralysis of initiative, shame, guilt and self-blame, a sense of defilement or stigma, and a sense of being completely different from other human beings (may include a sense of specialness, utter aloneness, a belief that no other person can understand, or a feeling of nonhuman identity).
Varied changes in perception of the perpetrators, such as a preoccupation with the relationship with a perpetrator (including a preoccupation with revenge), an unrealistic attribution of total power to a perpetrator (though the individual's assessment may be more realistic than the clinician's), idealization or paradoxical gratitude, a sense of a special or supernatural relationship with a perpetrator, and acceptance of a perpetrator's belief system or rationalizations.
Alterations in relations with others, such as isolation and withdrawal, disruption in intimate relationships, a repeated search for a rescuer (may alternate with isolation and withdrawal), persistent distrust, and repeated failures of self-protection.
Changes in systems of meaning, such as a loss of sustaining faith and a sense of hopelessness and despair.
Diagnosis
C-PTSD was considered for inclusion in the DSM-IV but was excluded from the 1994 publication. It was also excluded from the DSM-5, which lists post-traumatic stress disorder. The ICD-11 has included C-PTSD since its initial publication in 2018 and a validated self-report measure exists for assessing the ICD-11 C-PTSD, which is the International Trauma Questionnaire (ITQ).
Differential diagnosis
Post-traumatic stress disorder
In the ICD-11, there are two paired diagnoses, PTSD and CPTSD. A person can only be diagnosed with one or the other. A diagnosis of PTSD is made if a person has experienced a trauma and also experiences 1) re-experiencing the event in the form of intrusive memories, nightmares, or flashbacks, 2) avoidance of memories of the event or of people, places, and situations that remind them of it, and 3) perceptions of heightened current threat (e.g., hypervigilance, enhanced startle reaction). These symptoms must cause impairment in important areas of functioning.
In contrast, a diagnosis of CPTSD is made if the person meets all of the above criteria in addition to 1) difficulties in regulating emotions, 2) changes in beliefs about oneself such as feeling worthless with significant shame, and 3) difficulties in maintaining close relationships with important people. Again, these symptoms must cause significant impairment to be considered CPTSD.
In the DSM-5, many of the symptoms of complex PTSD are now captured in the symptoms of PTSD, which are much broader than the PTSD symptoms in the ICD-11. Moreover, the DSM-5 also includes a dissociative symptom subtype.
Earlier descriptions of CPTSD were broader but may no longer apply clinically: For instance, CPTSD was described to include captivity, psychological fragmentation, the loss of a sense of safety, trust, and self-worth, as well as the tendency to be revictimized. Most importantly, there is a loss of a coherent sense of self: this loss, and the ensuing symptom profile, most pointedly differentiates C-PTSD from PTSD. C-PTSD has also been characterized by attachment disorder, particularly the pervasive insecure, or disorganized-type attachment. Thus, a differentiation between the diagnostic category of C-PTSD and that of PTSD has been suggested.
Continuous traumatic stress disorder (CTSD), which was introduced into the trauma literature by Gill Straker in 1987, differs from C-PTSD. It was originally used by South African clinicians to describe the effects of exposure to frequent, high levels of violence usually associated with civil conflict and political repression. The term is applicable to the effects of exposure to contexts in which gang violence and crime are endemic as well as to the effects of ongoing exposure to life threats in high-risk occupations such as police, fire and emergency services. It has also been used to describe ongoing relationship trauma frequently experienced by people leaving relationships which involved intimate partner violence.
Traumatic grief
Traumatic grief or complicated mourning are conditions where trauma and grief coincide. There are conceptual links between trauma and bereavement since loss of a loved one is inherently traumatic. If a traumatic event was life-threatening, but did not result in a death, then it is more likely that the survivor will experience post-traumatic stress symptoms. If a person dies, and the survivor was close to the person who died, then it is more likely that symptoms of grief will also develop. When the death is of a loved one, and was sudden or violent, then both symptoms often coincide. This is likely in children exposed to community violence.
For C-PTSD to manifest traumatic grief, the violence would occur under conditions of captivity, loss of control and disempowerment, coinciding with the death of a friend or loved one in life-threatening circumstances. This again is most likely for children and stepchildren who experience prolonged domestic or chronic community violence that ultimately results in the death of friends and loved ones. The phenomenon of the increased risk of violence and death of stepchildren is referred to as the Cinderella effect.
Borderline personality disorder
C-PTSD may share some symptoms with both PTSD and borderline personality disorder (BPD). However, there is enough evidence to also differentiate C-PTSD from borderline personality disorder.
It may help to understand the intersection of attachment theory with C-PTSD and BPD if one reads the following opinion of Bessel A. van der Kolk together with an understanding drawn from a description of BPD:
25% of those diagnosed with BPD have no known history of childhood neglect or abuse, and individuals are six times as likely to develop BPD if they have a relative who was so diagnosed, compared to those who do not. One conclusion is that there is a genetic predisposition to BPD unrelated to trauma. Researchers conducting a longitudinal investigation of identical twins found that "genetic factors play a major role in individual differences of borderline personality disorder features in Western society." A 2014 study published in the European Journal of Psychotraumatology was able to compare and contrast C-PTSD, PTSD, and borderline personality disorder; it found that it could distinguish between individual cases of each and when they were co-morbid, arguing for separate diagnoses for each. BPD may be confused with C-PTSD by some without proper knowledge of the two conditions, because those with BPD also tend to have PTSD or some history of trauma.
In Trauma and Recovery, Herman expresses the additional concern that patients with C-PTSD frequently risk being misunderstood as inherently 'dependent', 'masochistic', or 'self-defeating', comparing this attitude to the historical misdiagnosis of female hysteria. However, those who develop C-PTSD do so as a result of the intensity of the traumatic bond, in which a person becomes tightly biochemically bound to someone who abuses them; the responses learned in order to survive, navigate and cope with the abuse then become automatic, embedded in the personality over the years of trauma, as a normal reaction to an abnormal situation.
Treatment
While standard evidence-based treatments may be effective for treating post-traumatic stress disorder, treating complex PTSD often involves addressing interpersonal relational difficulties and a different set of symptoms, which makes treatment more challenging.
Children
The utility of PTSD-derived psychotherapies for assisting children with C-PTSD is uncertain. This area of diagnosis and treatment calls for caution in use of the category C-PTSD. Julian Ford and Bessel van der Kolk have suggested that C-PTSD may not be as useful a category for diagnosis and treatment of children as a proposed category of developmental trauma disorder (DTD). According to Courtois and Ford, for DTD to be diagnosed it requires a
Since C-PTSD or DTD in children is often caused by chronic maltreatment, neglect or abuse in a care-giving relationship, the first element of the biopsychosocial system to address is that relationship. This invariably involves some sort of child protection agency. This widens both the range of support that can be given to the child and the complexity of the situation, since the agency's statutory legal obligations may then need to be enforced.
A number of practical, therapeutic and ethical principles for assessment and intervention have been developed and explored in the field:
Identifying and addressing threats to the child's or family's safety and stability are the first priority.
A relational bridge must be developed to engage, retain and maximize the benefit for the child and caregiver.
Diagnosis, treatment planning and outcome monitoring are always relational (and) strengths based.
All phases of treatment should aim to enhance self-regulation competencies.
Determining with whom, when and how to address traumatic memories.
Preventing and managing relational discontinuities and psychosocial crises.
Adults
Trauma recovery model
Judith Lewis Herman, in her book, Trauma and Recovery, proposed a complex trauma recovery model that occurs in three stages:
Establishing safety
Remembrance and mourning for what was lost
Reconnecting with community and more broadly, society
Herman believes recovery can only occur within a healing relationship and only if the survivor is empowered by that relationship. This healing relationship need not be romantic or sexual in the colloquial sense of "relationship", however, and can also include relationships with friends, co-workers, one's relatives or children, and the therapeutic relationship. However, the first stage of establishing safety must always include a thorough evaluation of the surroundings, which might include abusive relationships. This stage might involve the need for major life changes for some patients.
It has been suggested that treatment for complex PTSD should differ from treatment for PTSD by focusing on problems that cause more functional impairment than the PTSD symptoms. These problems include emotional dysregulation, dissociation, and interpersonal problems. Six suggested core components of complex trauma treatment include:
Safety
Self-regulation
Self-reflective information processing
Traumatic experiences integration
Relational engagement
Positive affect enhancement
The above components can be conceptualized as a model with three phases. Not every case will be the same, but the first phase will emphasize the acquisition and strengthening of adequate coping strategies as well as addressing safety issues and concerns. The next phase would focus on decreasing avoidance of traumatic stimuli and applying coping skills learned in phase one. The care provider may also begin challenging assumptions about the trauma and introducing alternative narratives about the trauma. The final phase would consist of solidifying what has previously been learned and transferring these strategies to future stressful events.
Neuroscientific and trauma informed interventions
In practice, the forms of treatment and intervention vary from individual to individual, since there is a wide spectrum of childhood experiences of developmental trauma and symptomatology and not all survivors respond positively or uniformly to the same treatment. Therefore, treatment is generally tailored to the individual. Recent neuroscientific research has shed some light on the impact that severe childhood abuse and neglect (trauma) has on a child's developing brain, specifically as it relates to the development of brain structures, function and connectivity among children from infancy to adulthood. This understanding of the neurophysiological underpinnings of complex trauma phenomena is what is currently referred to in the field of traumatology as 'trauma informed', and it has become the rationale influencing the development of new treatments specifically targeting those with childhood developmental trauma. Martin Teicher, a Harvard psychiatrist and researcher, has suggested that the development of specific complex trauma related symptomatology (and in fact the development of many adult onset psychopathologies) may be connected to gender differences and to the stage of childhood development at which trauma, abuse or neglect occurred. For example, it is well established that the development of dissociative identity disorder among women is often associated with early childhood sexual abuse.
Use of evidence-based PTSD treatment
Cognitive behavioral therapy, prolonged exposure therapy and dialectical behavioral therapy are well established forms of evidence-based intervention. These treatments are approved and endorsed by the American Psychiatric Association, the American Psychological Association and the Veteran's Administration. There is a question as to whether these PTSD treatments can also treat CPTSD. Given that the ICD-11 CPTSD diagnosis is relatively young, it will be years before this is adequately studied. However, some preliminary studies have examined whether PTSD treatments work equally well in those with PTSD or CPTSD. Two different studies of phase-based PTSD treatment found that both standard PTSD treatment and phased treatment worked equally well whether participants had a diagnosis of PTSD or CPTSD (per the ITQ). Another study of an existing European intensive trauma treatment combining Prolonged Exposure and EMDR found that people with PTSD and CPTSD had comparable decreases in PTSD and CPTSD (though they had more severe PTSD at baseline).
One of the current challenges faced by many survivors of complex trauma (or developmental trauma disorder) is support for treatment since many of the current therapies are relatively expensive and not all forms of therapy or intervention are reimbursed by insurance companies who use evidence-based practice as a criterion for reimbursement.
Treatment challenges
It is widely acknowledged by those who work in the trauma field that there is no single, standard, 'one size fits all' treatment for complex PTSD. There is also no clear consensus regarding the best treatment among the greater mental health professional community, which includes clinical psychologists, social workers, licensed therapists (MFTs) and psychiatrists. However, most neuroscientifically informed trauma practitioners understand the importance of utilizing a combination of both 'top down' and 'bottom up' interventions, as well as somatic interventions (sensorimotor psychotherapy, somatic experiencing or yoga), for the purposes of processing and integrating trauma memories.
Survivors with complex trauma often struggle to find a mental health professional who is properly trained in trauma informed practices. It can also be challenging for them to receive adequate treatment and services for a mental health condition which is not universally recognized or well understood by general practitioners.
Allistair and Hull echo the sentiment of many other trauma neuroscience researchers (including Bessel van der Kolk and Bruce D. Perry) who argue:
Complex post-traumatic stress disorder is a long term mental health condition which often requires treatment by highly skilled mental health professionals who specialize in trauma informed modalities designed to process and integrate childhood trauma memories for the purposes of mitigating symptoms and improving the survivor's quality of life. Delaying therapy for people with complex PTSD, whether intentionally or not, can exacerbate the condition.
Recommended treatment modalities and interventions
While no single treatment has been designed specifically for use with the adult complex PTSD population (with the exception of component-based psychotherapy), there are many therapeutic interventions used by mental health professionals to treat PTSD. The American Psychological Association PTSD Guideline Development Panel (GDP) strongly recommends the following for the treatment of PTSD:
Cognitive behavioral therapy (CBT) and trauma-focused CBT
Cognitive processing therapy (CPT)
Cognitive therapy (CT)
Prolonged exposure therapy (PE)
The American Psychological Association also conditionally recommends:
Brief eclectic psychotherapy (BEP)
Eye movement desensitization and reprocessing (EMDR)
Narrative exposure therapy (NET)
While these treatments have been recommended, there is still a lack of research on the best and most efficacious treatments for complex PTSD. Psychological therapies such as cognitive behavioural therapy and eye movement desensitisation and reprocessing therapy are effective in treating C-PTSD symptoms such as PTSD, depression and anxiety. For example, in a 2016 meta-analysis, four out of eight EMDR studies resulted in statistical significance, indicating the potential effectiveness of EMDR in treating certain conditions. Additionally, subjects from two of the studies continued to benefit from the treatment months later. Seven of the studies that employed psychometric tests showed that EMDR led to a reduction in depression symptoms compared to those in the placebo group. Mindfulness and relaxation are effective for PTSD symptoms, emotion regulation and interpersonal problems for people whose complex trauma is related to sexual abuse.
Many commonly used treatments are considered complementary or alternative since there still is a lack of research to classify these approaches as evidence based. Some of these additional interventions and modalities include:
biofeedback
dyadic resourcing (used with EMDR)
emotionally focused therapy
equine-assisted therapy
expressive arts therapy
internal family systems therapy
dialectical behavior therapy (DBT)
family systems therapy
group therapy
neurofeedback
psychodynamic therapy
sensorimotor psychotherapy
somatic experiencing
yoga, specifically trauma-sensitive yoga
History
Judith Lewis Herman of Harvard University was the first psychiatrist and scholar to conceptualise complex post-traumatic stress disorder (C-PTSD) as a new mental health condition, in her 1992 book Trauma and Recovery and an accompanying article. Herman had suggested in 1988 that a new diagnosis of complex post-traumatic stress disorder (C-PTSD) was needed to describe the symptoms and psychological and emotional effects of long-term trauma. Over the years, the definition of CPTSD has shifted (including a proposal for DESNOS in DSM-IV and a diagnosis of EPCACE in ICD-10), and the ICD-11 definition differs from Herman's initial conceptualization. The ICD-11 definition of CPTSD overlaps more with DSM-5 PTSD than earlier definitions of PTSD.
Criticism of disorder and diagnosis
Though acceptance of the idea of complex PTSD has increased among mental health professionals, the research required for the proper validation of a new disorder was considered insufficient to include CPTSD as a separate disorder in the DSM-IV and DSM-5. The disorder was proposed under the name DES-NOS (Disorder of Extreme Stress Not Otherwise Specified) for inclusion in the DSM-IV but was rejected by members of the Diagnostic and Statistical Manual of Mental Disorders (DSM) committee of the American Psychiatric Association for lack of sufficient diagnostic validity research. Chief among the stated limitations was a study which showed that 95% of individuals who could be diagnosed with the proposed DES-NOS were also diagnosable with PTSD, raising questions about the added usefulness of an additional disorder.
Following the failure of DES-NOS to gain formal recognition in the DSM-IV, the concept was re-packaged for children and adolescents and given a new name, developmental trauma disorder. Supporters of DTD appealed to the developers of the DSM-5 to recognize DTD as a new disorder. Just as the developers of DSM-IV refused to include DES-NOS, the developers of DSM-5 refused to include DTD due to a perceived lack of sufficient research.
One of the main justifications offered for this proposed disorder has been that the current system of diagnosing PTSD plus comorbid disorders does not capture the wide array of symptoms in one diagnosis. Because individuals who suffered repeated and prolonged traumas often show PTSD plus other concurrent psychiatric disorders, some researchers have argued that a single broad disorder such as C-PTSD provides a better and more parsimonious diagnosis than the current system of PTSD plus concurrent disorders. Conversely, an article published in BioMed Central has posited there is no evidence that being labeled with a single disorder leads to better treatment than being labeled with PTSD plus concurrent disorders.
Complex PTSD embraces a wider range of symptoms relative to PTSD, specifically emphasizing problems of emotional regulation, negative self-concept, and interpersonal problems. Diagnosing complex PTSD can imply that this wider range of symptoms is caused by traumatic experiences, rather than acknowledging any pre-existing experiences of trauma which could lead to a higher risk of experiencing future traumas. It also asserts that this wider range of symptoms and higher risk of traumatization are related by hidden confounder variables and there is no causal relationship between symptoms and trauma experiences. In the diagnosis of PTSD, the definition of the stressor event is limited to life-threatening or sexually violent events, with the implication that these are typically sudden and unexpected events. Complex PTSD vastly widened the definition of potential stressor events by calling them adverse events, and deliberately dropping the reference to life-threatening events, so that experiences such as neglect, emotional abuse, or living in a war zone can be included without the person having specifically experienced life-threatening events. By broadening the stressor criterion, an article published by the Child and Youth Care Forum claims this has led to confusing differences between competing definitions of complex PTSD, undercutting the clear operationalization of symptoms seen as one of the successes of the DSM.
| Biology and health sciences | Mental disorders | Health |
3520977 | https://en.wikipedia.org/wiki/Synodontidae | Synodontidae | The Synodontidae or lizardfishes are benthic (bottom-dwelling) marine and estuarine bony fishes that belong to the aulopiform fish order, a diverse group of marine ray-finned fish consisting of some 15 extant and several prehistoric families. They are found in tropical and subtropical marine waters throughout the world.
Lizardfishes are generally small, although the largest species measures about in length. They have slender, somewhat cylindrical bodies, and heads that superficially resemble those of lizards. The dorsal fin is located in the middle of the back, and accompanied by a small adipose fin placed closer to the tail. They have mouths full of sharp teeth, even on the tongue.
Lizardfishes are benthic animals that live in shallow coastal waters; even the deepest-dwelling species of lizardfish live in waters no more than deep. Some species in the subfamily Harpadontinae live in brackish estuaries. They prefer sandy environments, and typically have body colours that help to camouflage them in such environments.
The larvae of lizardfishes are free-swimming. They are distinguished by the presence of black blotches in their guts, clearly visible through their transparent, scaleless skin.
Taxonomy
Three genera of the Synodontidae are known to inhabit the western Atlantic, including Synodus, represented by six species, Saurida, represented by four species, and Trachinocephalus, represented by a single species. The six species comprising the genus Synodus are S. intermedius, S. saurus, S. synodus, S. foetens, S. bondi, and S. macrostigmus. The four species comprising the genus Saurida are S. umeyoshii, S. pseudotumbil, S. undosquamis, and S. tumbil. The single species of Trachinocephalus is T. myops. The extinct Argillichthys is represented only by a single species, A. toombsi, from the Eocene-aged London Clay formation.
| Biology and health sciences | Aulopiformes | Animals |
3521050 | https://en.wikipedia.org/wiki/XOR%20gate | XOR gate | XOR gate (sometimes EOR or EXOR, and pronounced as Exclusive OR) is a digital logic gate that gives a true (1 or HIGH) output when the number of true inputs is odd. An XOR gate implements an exclusive or from mathematical logic; that is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (0/LOW) or both are true, a false output results. XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise the output is false. A way to remember XOR is "must have one or the other but not both".
An XOR gate may serve as a "programmable inverter" in which one input determines whether to invert the other input, or to simply pass it along with no change. Hence it functions as an inverter (a NOT gate) which may be activated or deactivated by a switch.
XOR can also be viewed as addition modulo 2. As a result, XOR gates are used to implement binary addition in computers. A half adder consists of an XOR gate and an AND gate. The gate is also used in subtractors and comparators.
The algebraic expressions or or or all represent the XOR gate with inputs A and B. The behavior of XOR is summarized in the truth table shown on the right.
Symbols
There are three schematic symbols for XOR gates: the traditional ANSI and DIN symbols and the IEC symbol. In some cases, the DIN symbol is used with ⊕ instead of ≢. For more information see Logic Gate Symbols.
The "=1" on the IEC symbol indicates that the output is activated by only one active input.
The logic symbols ⊕, Jpq, and ⊻ can be used to denote an XOR operation in algebraic expressions.
C-like languages use the caret symbol ^ to denote bitwise XOR. (Note that the caret does not denote logical conjunction (AND) in these languages, despite the similarity of symbol.)
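As an illustration of these uses of the operator, the following minimal C sketch (with arbitrarily chosen values) shows the caret acting as a bitwise XOR that is its own inverse, as a programmable inverter, and as addition modulo 2:
#include <assert.h>
#include <stdint.h>

int main(void) {
    uint8_t a = 0xCA, b = 0x53;

    /* XOR is its own inverse: applying the same mask twice restores a. */
    assert(((a ^ b) ^ b) == a);

    /* "Programmable inverter": control bits of 1 invert, 0 passes through. */
    assert((uint8_t)(a ^ 0xFF) == (uint8_t)~a);   /* all control bits 1: invert */
    assert((a ^ 0x00) == a);                      /* all control bits 0: pass   */

    /* Bit by bit, XOR is addition modulo 2 (a sum with no carry). */
    assert((1 ^ 1) == 0 && (1 ^ 0) == 1 && (0 ^ 0) == 0);
    return 0;
}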
Implementation
The XOR gate is most commonly implemented using MOSFET circuits. Some of those implementations include:
AND-OR-Invert
XOR gates can be implemented using AND-OR-Invert (AOI) or OR-AND-Invert (OAI) logic.
CMOS
The complementary metal–oxide–semiconductor (CMOS) implementations of the XOR gate corresponding to the AOI logic above are shown below.
On the left, the nMOS and pMOS transistors are arranged so that the input combinations in which A and B differ activate either the 2 pMOS transistors of the top left or the 2 pMOS transistors of the top right respectively, connecting Vdd to the output for a logic high. The remaining input combinations, in which A and B are equal, each activate one of the two nMOS paths in the bottom, connecting the output to Vss for a logic low.
If inverted inputs (for example from a flip-flop) are available, this gate can be used directly. Otherwise, two additional inverters with two transistors each are needed to generate the complemented inputs, bringing the total number of transistors to twelve.
The AOI implementation without inverted input has been used, for example, in the Intel 386 CPU.
Transmission gates
The XOR gate can also be implemented through the use of transmission gates with pass transistor logic.
This implementation uses two transmission gates and two inverters (not shown in the diagram) to generate the complemented signals, for a total of eight transistors, four fewer than in the previous design.
The XOR function is implemented by passing the inverted value of A through to the output when B is high, and passing the value of A through when B is at a logic low. So when both inputs are low, the transmission gate at the bottom is off and the one at the top is on and lets A through; since A is low, the output is low. When both are high, only the one at the bottom is active and lets the inverted value of A through, and since A is high the output will again be low. Similarly, if B stays high but A is low, the output is the inverted value of A, which is high, as expected; and if B is low but A is high, the value of A passes through and the output is high, completing the truth table for the XOR gate.
The trade-off with the previous implementation is that since transmission gates are not ideal switches, there is resistance associated with them, so depending on the signal strength of the input, cascading them may degrade the output levels.
Optimized pass-gate-logic wiring
The previous transmission gate implementation can be further optimized from eight to six transistors by implementing the functionality of the inverter that generates and the bottom pass-gate with just two transistors arranged like an inverter but with the source of the pMOS connected to instead of Vdd and the source of the nMOS connected to instead of GND.
The two leftmost transistors mentioned above, perform an optimized conditional inversion of A when B is at a logic high using pass transistor logic to reduce the transistor count and when B is at a logic low, their output is at a high impedance state. The two in the middle are a transmission gate that drives the output to the value of A when B is at a logic low and the two rightmost transistors form an inverter needed to generate used by the transmission gate and the pass transistor logic circuit.
As with the previous implementation, the direct connection of the inputs to the outputs through the pass-gate transistors or through the two leftmost transistors should be taken into account, especially when cascading them.
XOR with AND and NOR
Replacing the second NOR with a normal OR gate will create an XNOR gate.
Alternatives
If a specific type of gate is not available, a circuit that implements the same function can be constructed from other available gates. A circuit implementing an XOR function can be trivially constructed from an XNOR gate followed by a NOT gate. If we consider the expression A·B′ + A′·B, we can construct an XOR gate circuit directly using AND, OR and NOT gates. However, this approach requires five gates of three different kinds.
As an alternative, if different gates are available, we can apply Boolean algebra to transform the expression as stated above and apply De Morgan's law to the last term, giving a form which can be implemented using only four gates as shown on the right. Intuitively, XOR is equivalent to OR except when both A and B are high, so ANDing the OR of A and B with a NAND of A and B (which gives a low only when both inputs are high) is equivalent to the XOR.
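This identity can be checked exhaustively; the short C sketch below (purely illustrative) confirms that XOR agrees with ANDing the OR of the inputs with the NAND of the inputs for all four input combinations:
#include <assert.h>

int main(void) {
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            int direct  = a ^ b;              /* XOR of the two bits            */
            int rebuilt = (a | b) & !(a & b); /* OR, then AND with NOT(A AND B) */
            assert(direct == rebuilt);
        }
    }
    return 0;
}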
An XOR gate circuit can be made from four NAND gates. In fact, both NAND and NOR gates are so-called "universal gates" and any logical function can be constructed from either NAND logic or NOR logic alone. If the four NAND gates are replaced by NOR gates, this results in an XNOR gate, which can be converted to an XOR gate by inverting the output or one of the inputs (e.g. with a fifth NOR gate).
An alternative arrangement uses five NOR gates in a topology that emphasizes the construction of the function, noting from De Morgan's law that a NOR gate is an inverted-input AND gate. Another alternative arrangement uses five NAND gates in a topology that emphasizes the construction of the function, noting from De Morgan's law that a NAND gate is an inverted-input OR gate.
For the NAND constructions, the upper arrangement requires fewer gates. For the NOR constructions, the lower arrangement offers the advantage of a shorter propagation delay (the time delay between an input changing and the output changing).
Standard chip packages
XOR chips are readily available. The most common standard chip codes are:
4070: CMOS quad dual input XOR gates.
4030: CMOS quad dual input XOR gates.
7486: TTL quad dual input XOR gates.
More than two inputs
Literal interpretation of the name "exclusive or", or observation of the IEC rectangular symbol, raises the question of correct behaviour with additional inputs. If a logic gate were to accept three or more inputs and produce a true output if exactly one of those inputs were true, then it would in effect be a one-hot detector (and indeed this is the case for only two inputs). However, it is rarely implemented this way in practice.
It is most common to regard subsequent inputs as being applied through a cascade of binary exclusive-or operations: the first two signals are fed into an XOR gate, then the output of that gate is fed into a second XOR gate together with the third signal, and so on for any remaining signals. The result is a circuit that outputs a 1 when the number of 1s at its inputs is odd, and a 0 when the number of incoming 1s is even. This makes it practically useful as a parity generator or a modulo-2 adder.
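In software terms, this cascade amounts to folding XOR over the bits of a word, which yields the word's parity; the following C sketch is illustrative only:
#include <assert.h>
#include <stdint.h>

/* Cascaded two-input XORs over every bit: returns 1 when the number of set
   bits is odd, 0 when it is even (an odd-parity detector / modulo-2 adder). */
static unsigned parity(uint32_t word) {
    unsigned p = 0;
    while (word) {
        p ^= word & 1u;   /* feed the next bit into the XOR chain */
        word >>= 1;
    }
    return p;
}

int main(void) {
    assert(parity(0x0) == 0);      /* no ones: even    */
    assert(parity(0x7) == 1);      /* three ones: odd  */
    assert(parity(0xF0F0) == 0);   /* eight ones: even */
    return 0;
}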
For example, the 74LVC1G386 microchip is advertised as a three-input logic gate, and implements a parity generator.
Applications
XOR gates and AND gates are the two most-used structures in VLSI applications.
Addition
The XOR logic gate can be used as a one-bit adder that adds any two bits together to output one bit. For example, if we add 1 plus 1 in binary, we expect a two-bit answer, 10 (i.e. 2 in decimal). The trailing sum bit in this output is achieved with XOR, while the preceding carry bit is calculated with an AND gate. This is the main principle in half adders. Full adder circuits may be chained together in order to add longer binary numbers.
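A behavioural sketch of these adders in C might look as follows; the function names are illustrative rather than standard:
#include <assert.h>

/* One-bit half adder: XOR gives the sum bit, AND gives the carry bit. */
static void half_add(int a, int b, int *sum, int *carry) {
    *sum   = a ^ b;
    *carry = a & b;
}

/* One-bit full adder built from two half adders plus an OR for the carry-out;
   chaining full adders bit by bit adds longer binary numbers. */
static void full_add(int a, int b, int cin, int *sum, int *cout) {
    int s1, c1, c2;
    half_add(a, b, &s1, &c1);
    half_add(s1, cin, sum, &c2);
    *cout = c1 | c2;
}

int main(void) {
    int s, c;
    half_add(1, 1, &s, &c);      /* 1 + 1 = binary 10 */
    assert(s == 0 && c == 1);
    full_add(1, 1, 1, &s, &c);   /* 1 + 1 + 1 = binary 11 */
    assert(s == 1 && c == 1);
    return 0;
}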
In certain situations, the inputs to an OR gate (for example, in a full-adder) or to an XOR gate can never be both 1's. As this is the only combination for which the OR and XOR gate outputs differ, an OR gate may be replaced by an XOR gate (or vice versa) without altering the resulting logic. This is convenient if the circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip.
Pseudo-random number generator
Pseudo-random number (PRN) generators, specifically linear-feedback shift registers (LFSR), are defined in terms of the exclusive-or operation. Hence, a suitable setup of XOR gates can model a linear-feedback shift register, in order to generate random numbers.
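As a rough software illustration, the C sketch below steps a 16-bit Fibonacci LFSR whose feedback bit is the XOR of four tap positions; the particular taps and seed are only one commonly cited maximal-length example, not a requirement:
#include <stdint.h>
#include <stdio.h>

/* One step of a 16-bit Fibonacci LFSR with taps at bits 16, 14, 13 and 11:
   the new bit shifted in is the XOR of the tapped bits. */
static uint16_t lfsr_step(uint16_t state) {
    uint16_t bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1u;
    return (uint16_t)((state >> 1) | (bit << 15));
}

int main(void) {
    uint16_t state = 0xACE1u;                 /* any non-zero seed works */
    for (int i = 0; i < 8; i++) {
        state = lfsr_step(state);
        printf("%04X\n", (unsigned)state);    /* a pseudo-random-looking stream */
    }
    return 0;
}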
Phase detectors
XOR gates may be used in the simplest phase detectors.
Buffer or invert a signal
An XOR gate may be used to easily change between buffering or inverting a signal. For example, XOR gates can be added to the output of a seven-segment display decoder circuit to allow a user to choose between active-low or active-high output.
Correlation and sequence detection
XOR gates produce a 0 when both inputs match. When searching for a specific bit pattern or PRN sequence in a very long data sequence, a series of XOR gates can be used to compare a string of bits from the data sequence against the target sequence in parallel. The number of 0 outputs can then be counted to determine how well the data sequence matches the target sequence. Correlators are used in many communications devices such as CDMA receivers and decoders for error correction and channel codes. In a CDMA receiver, correlators are used to extract the polarity of a specific PRN sequence out of a combined collection of PRN sequences.
A correlator looking for 11010 in the data sequence 1110100101 would compare the incoming data bits against the target sequence at every possible offset while counting the number of matches (zeros):
offset 0: 11101 XOR 11010 = 00111, 2 zero bits
offset 1: 11010 XOR 11010 = 00000, 5 zero bits
offset 2: 10100 XOR 11010 = 01110, 2 zero bits
offset 3: 01001 XOR 11010 = 10011, 2 zero bits
offset 4: 10010 XOR 11010 = 01000, 4 zero bits
offset 5: 00101 XOR 11010 = 11111, 0 zero bits
Matches (zero bits) by offset: 2, 5, 2, 2, 4, 0.
In this example, the best match occurs when the target sequence is offset by 1 bit and all five bits match. When offset by 5 bits, the sequence exactly matches its inverse. By looking at the difference between the number of ones and zeros that come out of the bank of XOR gates, it is easy to see where the sequence occurs and whether or not it is inverted. Longer sequences are easier to detect than short sequences.
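The same sliding comparison can be expressed in a few lines of C; the sketch below (illustrative only) counts the matching bits at each offset and reproduces the counts above:
#include <stdio.h>

int main(void) {
    const char *data   = "1110100101";   /* incoming data bits    */
    const char *target = "11010";        /* sequence searched for */

    for (int offset = 0; offset <= 5; offset++) {
        int matches = 0;
        for (int i = 0; i < 5; i++)
            if (data[offset + i] == target[i])   /* XOR of the bits would be 0 */
                matches++;
        printf("offset %d: %d matching bits\n", offset, matches);
    }
    return 0;
}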
Analytical representation
f(a, b) = a + b - 2ab is an analytical representation of the XOR gate: for inputs restricted to 0 and 1, it evaluates to 1 exactly when a and b differ.
f(a, b) = |a - b| is an alternative analytical representation.
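Both expressions can be checked exhaustively against the XOR operator; the short C verification below is purely illustrative:
#include <assert.h>
#include <stdlib.h>

int main(void) {
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            assert((a + b - 2 * a * b) == (a ^ b));  /* polynomial form     */
            assert(abs(a - b) == (a ^ b));           /* absolute difference */
        }
    }
    return 0;
}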
| Technology | Digital logic | null |
3521055 | https://en.wikipedia.org/wiki/Ranatra | Ranatra | Ranatra is a genus of slender predatory insects of the family Nepidae, known as water scorpions or water stick-insects. There are more than 140 Ranatra species found in freshwater habitats around the world, both in warm and temperate regions, with the highest diversity in South America (almost 50 species) and Asia (about 30 species, reviewed in 1972). Fewer are found elsewhere, but include several African, some in North America, three from Australia and three from the Palearctic, notably the relatively well-known European R. linearis. Since Ranatra belongs to the family Nepidae which in turn belongs to the order Hemiptera, Ranatra are considered "true bugs".
These brown insects are primarily found in stagnant or slow-moving water like ponds, marshes and canals, but can also be seen in streams. Exceptionally they have been recorded from hypersaline lakes and brackish lagoons.
Biology
The front legs of bugs in Ranatra are strong and used to grasp prey. They typically eat other insects, tadpoles and small fish, which they pierce with their proboscis and inject a saliva which both sedates and begins to digest their prey. They are sit-and-wait predators that reside among water plants and position themselves head-down with their grasping legs extended out to surprise passing prey. At least one species will also swim in open water at night to catch zooplanktonic organisms. Like other members in the family they have a long tail-like siphon, or breathing tube, on the rear end of their body. The adult body length generally depends on the exact species, and females average larger than males of the same species. The siphon is typically almost the same size, but varies from less than half the body length to somewhat longer. Two of the largest species are the East Asian R. chinensis and South American R. magna. Ranatra do have wings and they can fly.
The adults are active year-round, except in extreme cold. Their eggs are positioned on plants just below the water surface, but in some species they can be placed in mud. The eggs typically take two to four weeks to hatch and the young take about two months to mature.
Among the four genera in the subfamily Ranatrinae, Austronepa and Goondnomdanepa are restricted to Australia. Cercotmetus from Asia to New Guinea resembles Ranatra, although the former has a distinctly shorter siphon.
Species
The Global Biodiversity Information Facility lists:
Ranatra absona Drake & De Carlo, 1953
Ranatra acapulcana Drake & De Carlo, 1953
Ranatra adelmorpha Nieser, 1975
Ranatra aethiopica Montandon, 1903
Ranatra akoitachta Nieser, 1996
Ranatra ameghinoi De Carlo, 1970
Ranatra annulipes Stål, 1854
Ranatra attenuata Kuitert, 1949
Ranatra australis Hungerford, 1922 i c g b (southern water scorpion)
Ranatra bachmanni De Carlo, 1954
Ranatra bilobata Tran & Nguyen, 2016
Ranatra biroi Lundblad, 1933
Ranatra bottegoi Montandon, 1903
Ranatra brasiliensis De Carlo, 1946
Ranatra brevicauda Montandon, 1905
Ranatra brevicollis Montandon, 1910 i c g b
Ranatra buenoi Hungerford, 1922 i c g b
Ranatra camposi Montandon, 1907
Ranatra capensis Germar, 1837
Ranatra cardamomensis Zettel, Phauk, Kheam & Freitag, 2017
Ranatra chagasi De Carlo, 1946
Ranatra chariensis Poisson, 1949
Ranatra chinensis Mayr, 1865
Ranatra cinnamomea Distant, 1904
Ranatra compressicollis Montandon, 1898
Ranatra costalimai De Carlo, 1954
Ranatra cruzi De Carlo, 1950
Ranatra curtafemorata Kuitert, 1949
Ranatra denticulipes Montandon, 1907
Ranatra digitata Hafiz & Pradhan, 1949
Ranatra diminuta Montandon, 1907
Ranatra dispar Montandon, 1903
Ranatra distanti Montandon, 1910
Ranatra doesburgi De Carlo, 1963
Ranatra dolichodentata Kuitert, 1949
Ranatra dormientis Zhang et al., 1994
Ranatra drakei Hungerford, 1922
Ranatra ecuadoriensis De Carlo, 1950
Ranatra elongata Fabricius, 1790
Ranatra emaciata Montandon, 1907
Ranatra fabricii Guérin-Méneville, 1857
Ranatra falloui Montandon, 1907
Ranatra feana Montandon, 1903
Ranatra fianarantsoana Poisson, 1963
Ranatra filiformis Fabricius, 1790
Ranatra flagellata Lansbury, 1972
Ranatra flokata Nieser & Burmeister, 1998
Ranatra fusca Palisot, 1820 i c g b (brown waterscorpion)
Ranatra fuscoannulata Distant, 1904
Ranatra galantae Nieser, 1969
Ranatra gracilis Dallas, 1850
Ranatra grandicollis Montandon, 1907
Ranatra grandocula Bergroth, 1893
Ranatra hechti De Carlo, 1967
Ranatra heoki Tran & Poggi, 2019
Ranatra heydeni Montandon, 1909
Ranatra horvathi Montandon, 1910
Ranatra hungerfordi Kuitert, 1949
Ranatra incisa Chen, Nieser & Ho, 2004
Ranatra instaurata Montandon, 1914
Ranatra insulata Barber, 1939
Ranatra jamaicana Drake & De Carlo, 1953
Ranatra katsara Nieser, 1997
Ranatra kirkaldyi Torre-bueno, 1905 i c g b
Ranatra lanei De Carlo, 1946
Ranatra lansburyi Chen, Nieser & Ho, 2004
Ranatra lenti De Carlo, 1950
Ranatra lethierryi Montandon, 1907
Ranatra libera Zettel, 1999
Ranatra linearis (Linnaeus, 1758) i c g
Ranatra longipes Stål, 1861
Ranatra lualalai Poisson, 1964
Ranatra lubwae Poisson, 1965
Ranatra machrisi Nieser & Burmeister, 1998
Ranatra macrophthalma Herrich-Schäffer, 1849
Ranatra maculosa Kuitert, 1949
Ranatra magna Kuitert, 1949
Ranatra malayana Lundblad, 1933
Ranatra mediana Montandon, 1910
Ranatra megalops Lansbury, 1972
Ranatra mixta Montandon, 1907
Ranatra moderata Kuitert, 1949
Ranatra montei De Carlo, 1946
Ranatra montezuma Polhemus, 1976
Ranatra natalensis Distant, 1904
Ranatra natunaensis Lansbury, 1972
Ranatra neivai De Carlo, 1946
Ranatra nieseri Tran & Nguyen, 2016
Ranatra nigra Herrich-Schaeffer, 1849
Ranatra nodiceps Gerstaecker, 1873
Ranatra nodioeps Gerstaecker, 1873
Ranatra obscura Montandon, 1907
Ranatra occidentalis Lansbury, 1972
Ranatra odontomeros Nieser, 1996
Ranatra oliveiracesari De Carlo, 1946
Ranatra operculata Kuitert, 1949
Ranatra ornitheia Nieser, 1975
Ranatra parmata Mayr, 1865
Ranatra parvipes Signoret, 1861
Ranatra parvula Kuitert, 1949
Ranatra pittieri Montandon, 1910
Ranatra protense Montandon
Ranatra quadridentata Stål, 1862 i c g b
Ranatra rabida Buchanan White, 1879
Ranatra rafflesi Tran & D.Polhemus, 2012
Ranatra rapax Stål, 1865
Ranatra recta Chen, Nieser & Ho, 2004
Ranatra robusta Montandon, 1905
Ranatra sagrai Drake & De Carlo, 1953
Ranatra sarmientoi De Carlo, 1967
Ranatra sattleri De Carlo, 1967
Ranatra schuhi D.Polhemus & J.Polhemus, 2012
Ranatra segrega Montandon, 1913
Ranatra signoreti Montandon, 1905
Ranatra similis Drake & De Carlo, 1953
Ranatra siolii De Carlo, 1970
Ranatra sjostedti Montandon, 1911
Ranatra spatulata Kuitert, 1949
Ranatra spinifrons Montandon, 1905
Ranatra spoliata Montandon, 1912
Ranatra stali Montandon, 1905
Ranatra sterea Chen, Nieser & Ho, 2004
Ranatra subinermis Montandon, 1907
Ranatra sulawesii Nieser & Chen, 1991
Ranatra surinamensis De Carlo, 1963
Ranatra texana Hungerford, 1930
Ranatra thai Lansbury, 1972
Ranatra titilaensis Hafiz & Pradhan, 1949
Ranatra travassosi De Carlo, 1950
Ranatra tridentata Poisson, 1965
Ranatra tuberculifrons Montandon, 1907
Ranatra unicolor Scott, 1874
Ranatra unidentata Stål, 1861
Ranatra usingeri De Carlo, 1970
Ranatra varicolor Distant, 1904
Ranatra varipes Stål, 1861
Ranatra vitshumbii Poisson, 1949
Ranatra wagneri Hungerford, 1929
Ranatra weberi De Carlo, 1970
Ranatra williamsi Kuitert, 1949
Ranatra zeteki Drake & De Carlo, 1953
Data sources: i = ITIS, c = Catalogue of Life, g = GBIF, b = Bugguide.net
| Biology and health sciences | Hemiptera (true bugs) | Animals |
3521816 | https://en.wikipedia.org/wiki/XNOR%20gate | XNOR gate | The XNOR gate (sometimes ENOR, EXNOR, NXOR, XAND and pronounced as Exclusive NOR) is a digital logic gate whose function is the logical complement of the Exclusive OR (XOR) gate. It is equivalent to the logical connective (↔) from mathematical logic, also known as the material biconditional. The two-input version implements logical equality, behaving according to the truth table to the right, and hence the gate is sometimes called an "equivalence gate". A high output (1) results if both of the inputs to the gate are the same. If one but not both inputs are high (1), a low output (0) results.
The algebraic notation commonly used to represent the XNOR operation is A ⊙ B. The algebraic expressions A·B + A′·B′ and (A′ + B)·(A + B′) both represent the XNOR gate with inputs A and B.
Symbols
There are two symbols for XNOR gates: one with distinctive shape and one with rectangular shape and label. Both symbols for the XNOR gate are that of the XOR gate with an added inversion bubble.
Hardware description
XNOR gates are represented in most TTL and CMOS IC families. The standard 4000 series CMOS IC is the 4077, and the TTL IC is the 74266 (although an open-collector implementation). Both include four independent, two-input, XNOR gates. The (now obsolete) 74S135 implemented four two-input XOR/XNOR gates or two three-input XNOR gates.
Both the TTL 74LS implementation, the 74LS266, as well as the CMOS gates (CD4077, 74HC4077 and 74HC266 and so on) are available from most semiconductor manufacturers such as Texas Instruments or NXP, etc. They are usually available in both through-hole DIP and SOIC formats (SOIC-14, SOC-14 or TSSOP-14).
Datasheets are readily available in most datasheet databases and suppliers.
Implementation
AND-OR-Invert logic
An XNOR gate can be implemented using a NAND gate and an OR-AND-Invert gate, as shown in the following picture.
This is based on the identity
An alternative, which is useful when inverted inputs are also available (for example from a flip-flop), uses a 2-2 AND-OR-Invert gate, shown below on the right.
CMOS
CMOS implementations based on the OAI logic above can be realized with 10 transistors, as shown below. The implementation which uses both normal and inverted inputs uses 8 transistors, or 12 if inverters have to be used.
Pinout
Both the 4077 and 74x266 devices (SN74LS266, 74HC266, 74266, etc.) have the same pinout diagram, as follows:
Pinout diagram of the 74HC266N, 74LS266 and CD4077 quad XNOR plastic dual in-line package 14-pin package (PDIP-14) ICs.
Alternatives
If a specific type of gate is not available, a circuit that implements the same function can be constructed from other available gates. A circuit implementing an XNOR function can be trivially constructed from an XOR gate followed by a NOT gate. If we consider the expression A·B + A′·B′, we can construct an XNOR gate circuit directly using AND, OR and NOT gates. However, this approach requires five gates of three different kinds.
As an alternative, if different gates are available, we can apply Boolean algebra to transform the expression as stated above and apply De Morgan's law to the last term to get A·B + (A + B)′, which can be implemented using only three gates as shown on the right.
An XNOR gate circuit can be made from four NOR gates. In fact, both NAND and NOR gates are so-called "universal gates" and any logical function can be constructed from either NAND logic or NOR logic alone. If the four NOR gates are replaced by NAND gates, this results in an XOR gate, which can be converted to an XNOR gate by inverting the output or one of the inputs (e.g. with a fifth NAND gate).
An alternative arrangement is of five NAND gates in a topology that emphasizes the construction of the function from , noting from de Morgan's Law that a NAND gate is an inverted-input OR gate. Another alternative arrangement is of five NOR gates in a topology that emphasizes the construction of the function from , noting from de Morgan's Law that a NOR gate is an inverted-input AND gate.
For the NAND constructions, the lower arrangement offers the advantage of a shorter propagation delay (the time delay between an input changing and the output changing). For the NOR constructions, the upper arrangement requires fewer gates.
From the opposite perspective, constructing other gates using only XNOR gates is possible though XNOR is not a fully universal logic gate. NOT and XOR gates can be constructed this way.
More than two inputs
Although other gates (OR, NOR, AND, NAND) are available from manufacturers with three or more inputs per gate, this is not strictly true with XOR and XNOR gates. However, extending the concept of the binary logical operation to three inputs, the SN74S135 with two shared "C" and four independent "A" and "B" inputs for its four outputs, was a device that followed the truth table:
This is effectively Q = NOT ((A XOR B) XOR C). Another way to interpret this is that the output is true if an even number of inputs are true. It does not implement a logical "equivalence" function, unlike two-input XNOR gates.
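A behavioural C model of this three-input device (the function name is illustrative only) makes the even-parity property explicit:
#include <assert.h>

/* Q = NOT((A XOR B) XOR C): true exactly when an even number of inputs are true. */
static int xnor3(int a, int b, int c) {
    return !((a ^ b) ^ c);
}

int main(void) {
    assert(xnor3(0, 0, 0) == 1);   /* zero ones: even  */
    assert(xnor3(1, 0, 0) == 0);   /* one one: odd     */
    assert(xnor3(1, 1, 0) == 1);   /* two ones: even   */
    assert(xnor3(1, 1, 1) == 0);   /* three ones: odd  */
    return 0;
}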
| Technology | Digital logic | null |
1853027 | https://en.wikipedia.org/wiki/Intel%20High%20Definition%20Audio | Intel High Definition Audio | Intel High Definition Audio (IHDA) (also called HD Audio or development codename Azalia) is a specification for the audio sub-system of personal computers. It was released by Intel in 2004 as the successor to their AC'97 PC audio standard.
Features
The Intel High Definition Audio specification includes the following features:
Up to 15 input and 15 output streams
Up to 16 PCM audio channels per stream
Sample resolutions of 8–32 bits
Sample rates of 6–192 kHz
Support for audio codecs (e.g., ADC, DAC), modem codecs, and vendor-defined codecs
Discoverable codec architecture
Fine-grained codec power-control
Audio jack detection, sensing, and retasking
Motherboards typically do not have any more than eight built-in output channels (7.1 surround sound) and four input channels (back and front panel microphone inputs, and a back-panel stereo line-in). Users requiring more audio I/Os will typically opt for a sound card or an external audio interface, as these provide additional features that are more oriented towards professional audio applications.
Operating system support
The Service Pack 3 update to Windows XP and all later versions of Windows (from Vista onwards) included the Universal Audio Architecture (UAA) class driver, which supported audio devices built to HD Audio's specifications. Retrospective UAA drivers were also built for Windows 2000, Server 2003 and XP Service Pack 1/2. macOS provides support for Intel HD Audio with its AppleHDA driver. Several Linux operating systems also support HD Audio, as well as OpenSolaris, FreeBSD, and OpenBSD.
Host controller
Like AC'97, HD Audio is a specification that defines the architecture, link frame format, and programming interfaces used by the host controller on the PCI bus and by the codec it links to. Host controller configurations (chipsets) are available from third-party suppliers, including Nvidia, VIA and AMD, while codecs have also been provided by third-party suppliers including Realtek, Conexant, IDT, VIA, SigmaTel, Analog Devices, C-Media and Cirrus Logic. AMD's TRX40 chipset, introduced in 2019 for use with Ryzen "Threadripper" CPUs, provided the Realtek ALC1220 chip instead of the HD Audio interface; as a result, a separate USB or PCIe audio device was required to integrate HD audio codecs on TRX40 motherboards. Intel has also decoupled the audio controller from its chipsets in favor of Intel Smart Sound Technology (SST) or I²S instead of the more traditional HD Audio Bus.
Limitations
As with the previous AC'97 standard, HD Audio does not specify handlers for the media buttons attached to headphone jacks (i.e., Play/Pause, Next, Previous, Volume up, Volume down).
Front panel connector
Computer motherboards often provide a connector to bring microphone and headphone signals to the computer's front panel. Intel provides a general specification for this process, but the signal assignments differ between the AC'97 and HD Audio headers.
The HD Audio 3.5 mm subminiature audio jack differed from connectors used in the AC'97 specification and in general audio equipment. The AC'97 used a regular 3.5 mm audio jack, which typically has 5 pins: one pin for ground, two pins for stereo signal, and two pins for the return signal. When no plug is connected, the two stereo signals are connected to their return pins. When a plug is inserted, the stereo signals contact the respective channels on the plug and are disconnected from the jack's return pins. The HD Audio 3.5 mm jack does not have the two return audio signals; instead, it has an isolated switch that senses the presence of a plug in the jack.
In the AC'97 design, the audio output is sent to the jack by default. When a headphone is detected, the return signal pins for the speakers are disconnected, directing the audio to the headphone. The jack redirects the audio to the speakers if no headphone connection is detected. Similarly, the return pins ground the microphone jack connection if no microphone is detected. As a result, most motherboards with AC'97 audio require two jumpers to short these pins if no front panel audio module is connected, so audio passes to the speakers.
In the HD Audio design, the codec sends the audio directly to the speakers if a plug is not inserted. When a plug is inserted, the isolated switch inside the jack informs the motherboard, and the codec sends audio to the headphones. A similar isolated switch is used to detect when a microphone has been plugged in. HD Audio can also sense the presence of an audio dongle. A 10 kΩ pull-up resistor is attached to pin 4. When an HDA dongle is plugged in, it pulls pin 4 to ground with a 1 kΩ resistor. The motherboard can determine whether a dongle is connected by examining the logic level on pin 4. If the motherboard does not detect an HDA dongle, it should ignore the signals on pin 6 and pin 10.
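The pin-4 presence sensing can be thought of as reading a voltage divider: the motherboard's pull-up keeps the pin high until a dongle's 1 kΩ pull-down drags it low. The C sketch below is only a conceptual model of that decision; the names, voltages and threshold are illustrative and not taken from the specification:
#include <stdbool.h>
#include <stdio.h>

/* Illustrative threshold: treat anything below about 0.8 V as a logic low,
   i.e. an HDA dongle pulling pin 4 toward ground through its resistor. */
static bool hda_dongle_present(double pin4_voltage) {
    return pin4_voltage < 0.8;
}

int main(void) {
    double open_header = 4.7;     /* example reading: pull-up only         */
    double with_dongle = 0.4;     /* example reading: divided toward 0 V   */

    printf("open header: sense signals %s\n",
           hda_dongle_present(open_header) ? "honoured" : "ignored");
    printf("with dongle: sense signals %s\n",
           hda_dongle_present(with_dongle) ? "honoured" : "ignored");
    return 0;
}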
Intel warns that HDA dongles should be used with HDA motherboards:
The different signal assignments can cause trouble when AC'97 front-panel dongles are used with HDA motherboards and vice versa. An AC'97 dongle returns audio on pins 6 and 10 rather than digital plug sensing signals. Consequently, a loud audio passage may cause an HDA motherboard with an AC'97 dongle to believe that headphones and microphones are being plugged and unplugged hundreds of times per second. An AC'97 motherboard with an HDA dongle will route the AC'97 5 V audio supply (pin 7; silence) to the speakers instead of the desired left and right audio signals. To avoid this, some motherboards allow choosing between HDA and AC'97 front panels in the BIOS. Even though the actual audio hardware is HD Audio, the BIOS can be manipulated to allow the use of an AC'97 front panel. Likewise, some modern enclosures have both an "AC'97" and an "HDA" plug at the end of the front-panel audio cable.
| Technology | Computer hardware | null |
1853763 | https://en.wikipedia.org/wiki/Post-nasal%20drip | Post-nasal drip | Post-nasal drip (PND), also known as upper airway cough syndrome (UACS), occurs when excessive mucus is produced by the nasal mucosa. The excess mucus accumulates in the back of the nose, and eventually in the throat once it drips down the back of the throat. It can be caused by rhinitis, sinusitis, gastroesophageal reflux disease (GERD), or by a disorder of swallowing (such as an esophageal motility disorder). Other causes can be allergy, cold, flu, and side effects from medications.
However, some researchers argue that the flow of mucus down the back of the throat from the nasal cavity is a normal physiologic process that occurs in all healthy individuals. Some researchers challenge post-nasal drip as a syndrome and instead view it as a symptom, also taking into account variation across different societies. Furthermore, this rebuttal is reinforced because of the lack of an accepted definition, pathologic tissue changes, and available biochemical tests.
Signs and symptoms
PND may present itself through the constant presence of discomfort in the upper airways. It is classically described as the sensation of a substance "dripping down the throat" and may also present with rhinorrhea, constant throat clearing, and cough, although its symptoms can be very nonspecific. PND is one of the most common etiologies for chronic cough, defined as a cough persisting beyond 8 weeks.
Post-nasal drip can be a cause of laryngeal inflammation and hyperresponsiveness, leading to symptoms of vocal cord dysfunction.
Causes
There are multiple causes of PND, which can be acute or chronic.
GERD
Gastroesophageal reflux disease (GERD) is often associated with a high prevalence of upper-respiratory symptoms similar to those of PND, such as coughing, throat clearing, hoarseness and change in voice. Reflux causes throat irritation, leading to a sensation of increased mucus in the throat, which is believed to aggravate and, in some cases, cause post-nasal drip.
Allergic rhinitis
Allergic rhinitis (AR) is a common condition where exposure to allergens results in the release of inflammatory mediators, such as histamine, that causes sneezing, rhinorrhea, itchy eyes, and nasal congestion. The increased rhinorrhea and mucus production can result in PND.
Non-allergic rhinitis
Non-allergic rhinitis (NAR) is a condition in which there are symptoms of rhinitis, including rhinorrhea and nasal obstruction, but with negative skin and serum allergy testing results. It can be further categorized into:
Non-allergic rhinitis with eosinophilia (NARES)
Hormonal rhinitis (such as during pregnancy)
Medication-induced rhinitis
Atrophic rhinitis
Irritant and occupational rhinitis (including tobacco smoke, cleaning supplies, etc.)
Idiopathic nonallergic rhinitis
Rhinosinusitis
Rhinosinusitis is inflammation or infection of the sinus cavities. Acute rhinosinusitis has symptoms lasting less than four weeks, while chronic rhinosinusitis lasts greater than 12 weeks. This persistent irritation can lead to increased mucus production as a result of pro-inflammatory pathways, producing symptoms of PND.
Mechanism
The exact mechanism of PND depends on its etiology, but usually involves increased production of mucus from the nasal mucosa. In addition to providing sense of smell, the nasal cavity serves to filter and regulate the temperature and humidity of inspired air. The nasal mucosa can produce secretions, or mucus, that provides lubrication and protection for the nasal cavity. This mucus production is activated by the autonomic nervous system; specifically, cholinergic neuropeptides are responsible for increasing mucus production. Excess mucus can drain posteriorly into the upper and lower airways, which, along with other physical and chemical irritants, can activate receptors in the respiratory tract that results in a protective physiological cough.
Diagnosis
Diagnosis of PND depends on both a detailed history and clinical examination to help determine its etiology. The history may begin with feelings of obstructed nasal breathing or "stuffy nose" with or without nasal discharge. If allergic rhinitis is suspected, a family history of allergic conditions as well as a personal history of other associated conditions such as food allergy, asthma, and atopic dermatitis can be evaluated. Allergic rhinitis classically has more symptoms of sneezing attacks, itchy eyes, and respiratory problems, although it is difficult to distinguish the different types of rhinitis by symptomology alone. Visual inspection can reveal mouth breathing, which is suggestive of nasal obstruction, or a horizontal crease across the nose (caused by the "allergic salute").
In the absence of any specific diagnostic tests, it may be difficult to diagnose PND from history of symptoms alone, as the etiology is broad and the symptoms may be very general. As such, suggestive procedures that highlight rhinitis and mucopurulent secretions, such as nasoendoscopy, may instead be utilized because of the vague nature of information available to directly attribute specific symptoms to the syndrome.
Treatment
Treatment options depend on the nature of an individual's post-nasal drip and its cause. Antibiotics may be prescribed if the PND is the result of bacterial sinusitis. In cases where PND is caused by allergic rhinitis or irritant rhinitis, avoidance of allergens or irritating factors such as dander, cigarette smoke, and cleaning supplies may be beneficial. Antihistamines are particularly useful for allergic rhinitis and may be beneficial in some cases of non-allergic rhinitis. First-generation antihistamines such as chlorpheniramine and clemastine are more potent but have greater sedative effects; later-generation antihistamines may be used to reduce these effects. Azelastine, a topical antihistamine, is approved for both allergic and non-allergic rhinitis due to its unique anti-inflammatory effects separate from its histamine receptor antagonism.
Intranasal steroids may also be beneficial in patients who do not respond to antihistamines. In one meta-analysis, intranasal steroids were shown to improve symptoms of non-allergic rhinitis at up to four weeks better than a placebo. Decongestants such as pseudoephedrine can tighten blood vessels of the nasal mucosa and result in a decrease in mucus production. Anticholinergics such as ipratropium bromide can help reduce secretions by blocking parasympathetic effects on the nasal mucosa.
One study has found that symptoms of postnasal drainage improved after 8 to 16 weeks of lansoprazole 30 mg taken twice daily regardless of the presence or absence of typical symptoms of GERD.
Other methods, such as drinking warm fluids and using saline nasal irrigation, may be useful for managing symptoms of PND but their exact efficacy is unclear in medical literature.
Epidemiology
Because PND is often characterized as a "symptom" rather than a separate condition, the exact incidence is unknown and varies by its etiology. Chronic rhinitis, which includes allergic and non-allergic rhinitis, can affect 30-40% of the population. Non-allergic rhinitis is more common in females than in males.
| Biology and health sciences | Symptoms and signs | Health |
1853838 | https://en.wikipedia.org/wiki/Web%20content%20management%20system | Web content management system | A web content management system (WCM or WCMS) is a software content management system (CMS) specifically for web content. It provides website authoring, collaboration, and administration tools that help users with little knowledge of web programming languages or markup languages create and manage website content. A WCMS provides the foundation for collaboration, providing users the ability to manage documents and output for multiple author editing and participation. Most systems use a content repository or a database to store page content, metadata, and other information assets the system needs.
A presentation layer (template engine) displays the content to website visitors based on a set of templates, which are sometimes XSLT files.
Most systems use server side caching to improve performance. This works best when the WCMS is not changed often but visits happen frequently. Administration is also typically done through browser-based interfaces, but some systems require the use of a fat client.
Capabilities
A web content management system controls a dynamic collection of web material, including HTML documents, images, and other forms of media. A WCMS facilitates document control, auditing, editing, and timeline management. A WCMS typically has the following features:
Automated templates Create standard templates (usually HTML and XML) that users can apply to new and existing content, changing the appearance of all content from one central place.
Access control Some WCMS systems support user groups, which control how registered users interact with the site. A page on the site can be restricted to one or more groups. This means an anonymous user (someone not logged on), or a logged-on user who is not a member of the group a page is restricted to, is denied access.
Scalable expansion Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains, depending on the server's settings. WCMS sites may be able to create microsites/web portals within a main site as well.
Easily editable content Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical users to create and edit content.
Scalable feature sets Most WCMS software includes plug-ins or modules that can be easily installed to extend an existing site's functionality.
Web standards upgrades Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards.
Workflow management Workflow management is the process of creating cycles of sequential and parallel tasks that must be accomplished in the WCMS. For example, one or many content creators can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it; a minimal sketch of such a workflow appears after this feature list.
Collaboration WCMS software may act as a collaboration platform where many users retrieve and work on content. Changes can be tracked and authorized for publication, or ignored to revert to old versions. Other advanced forms of collaboration allow multiple users to modify (or comment on) a page at the same time in a collaboration session.
Delegation Some WCMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management.
Document management WCMS software may provide a means of collaboratively managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction.
Content virtualization WCMS software may provide a means of allowing each user to work within a virtual copy of the entire website, document set, and/or code base. This enables viewing changes to multiple interdependent resources in context before submission.
Content syndication WCMS software often helps distribute content by generating RSS and Atom data feeds to other systems. They may also e-mail users when updates become available.
Multilingual Many WCMSs can display content in multiple languages.
Versioning Like document management systems, WCMS software may implement version control, by which users check pages in and out of the WCMS. Authorized editors can retrieve previous versions and work from a selected point. Versioning is useful for content that changes and requires updating, but it may be necessary to start from or reference a previous version.
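The editorial workflow mentioned in the feature list above can be modeled as a small state machine. The following Python sketch is illustrative only: the state names, roles, and transition table are assumptions made for the example, not the data model of any particular WCMS.

```python
# Minimal sketch of an editorial workflow as a state machine (illustrative only;
# state names, roles, and transitions are assumptions, not from a specific WCMS).
ALLOWED_TRANSITIONS = {
    ("draft", "copy_editor"): "review",         # creator submits, copy editor cleans up
    ("review", "editor_in_chief"): "approved",  # editor-in-chief signs off
    ("approved", "publisher"): "published",     # content goes live
}

class Story:
    def __init__(self, title):
        self.title = title
        self.state = "draft"

    def advance(self, role):
        """Move the story to its next state if the acting role is allowed to do so."""
        next_state = ALLOWED_TRANSITIONS.get((self.state, role))
        if next_state is None:
            raise PermissionError(f"{role} cannot advance a story in state '{self.state}'")
        self.state = next_state
        return self.state

story = Story("New homepage banner")
story.advance("copy_editor")      # draft -> review
story.advance("editor_in_chief")  # review -> approved
story.advance("publisher")        # approved -> published
print(story.state)                # published
```

Real systems add parallel tasks, notifications, and per-group permissions on top of this kind of transition table, but the underlying idea is the same: content only reaches the published state by passing through the required steps.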
Types
A WCMS can use one of three approaches: offline processing, online processing, and hybrid processing. These terms describe the deployment pattern for the WCMS in terms of when it applies presentation templates to render web pages from structured content.
Offline processing
These systems, sometimes referred to as "static site generators", pre-process all content, applying templates before publication to generate web pages. Since pre-processing systems do not require a server to apply the templates at request time, they may also exist purely as design-time tools.
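As a rough illustration of offline processing, the sketch below applies a template to structured content at build time and writes out static HTML files that a plain web server can then serve. The template, content items, and output layout are assumptions made for the example, not the conventions of any particular system.

```python
# Minimal sketch of offline (pre-processing) template application, in the spirit
# of a static site generator. Content and template are illustrative assumptions.
from pathlib import Path
from string import Template

PAGE_TEMPLATE = Template(
    "<html><head><title>$title</title></head>"
    "<body><h1>$title</h1><p>$body</p></body></html>"
)

content = [
    {"slug": "about", "title": "About us", "body": "We publish things."},
    {"slug": "contact", "title": "Contact", "body": "Write to us any time."},
]

out_dir = Path("public")
out_dir.mkdir(exist_ok=True)

# All templating happens before publication; at request time the web server
# only has to serve the pre-generated static files.
for item in content:
    html = PAGE_TEMPLATE.substitute(title=item["title"], body=item["body"])
    (out_dir / f"{item['slug']}.html").write_text(html, encoding="utf-8")
```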
Online processing
These systems apply templates on-demand. They may generate HTML when a user visits the page, or the user might receive pre-generated HTML from a web cache. Most open source WCMSs support add-ons that extend the system's capabilities. These include features like forums, blogs, wikis, web stores, photo galleries, and contact management. These are variously called modules, nodes, widgets, add-ons, or extensions.
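The on-demand rendering and server-side caching described above can be sketched as follows; the render function, content store, and cache policy are deliberately naive assumptions made only for illustration.

```python
# Minimal sketch of online (on-demand) rendering with a simple server-side cache.
# Content, template, and cache policy are illustrative assumptions only.
from string import Template

PAGE_TEMPLATE = Template("<html><body><h1>$title</h1><p>$body</p></body></html>")
CONTENT = {"about": {"title": "About us", "body": "We publish things."}}
_cache = {}  # works best when content changes rarely but pages are visited often

def handle_request(slug):
    """Return HTML for a page, rendering on the first request and caching afterwards."""
    if slug in _cache:
        return _cache[slug]
    item = CONTENT[slug]
    html = PAGE_TEMPLATE.substitute(title=item["title"], body=item["body"])
    _cache[slug] = html
    return html

def invalidate(slug):
    """Drop a cached page when its content is edited in the WCMS."""
    _cache.pop(slug, None)
```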
Hybrid processing
Some systems combine the offline and online approaches. Some systems write out executable code (e.g., JSP, ASP, PHP, ColdFusion, or Perl pages) rather than just static HTML. That way, personnel don't have to deploy the WCMS itself on every web server. Other hybrids operate in either an online or offline mode.
Advantages
Low cost Some content management systems are free, such as Drupal, eZ Publish, TYPO3, Joomla, Zesty.io, and WordPress. Others are offered as subscriptions priced by site size. Although subscriptions can be expensive, avoiding the need to hire full-time developers can lower the total cost, and for many WCMSs the software can be bought according to need.
Easy customization A universal layout is created, making pages have a similar theme and design without much code. Many WCMS tools use a drag and drop AJAX system for their design modes. It makes it easy for beginner users to create custom front-ends.
Easy to use WCMSs accommodate non-technical people. Simplicity in design of the admin UI lets website content managers and other users update content without much training in coding or system maintenance.
Workflow management WCMSs provide the facility to control how content is published, when it is published, and who publishes it. Some WCMSs allow administrators to set up rules for workflow management, guiding content managers through a series of steps required for each of their tasks.
Good for SEO WCMS websites also accommodate search engine optimization (SEO). Content freshness helps, as some search engines prefer websites with newer content. Social media plugins help build a community around content. RSS feeds automatically generated by blogs or WCMS websites can increase the number of subscribers and readers to a site. URL rewriting can be implemented easily, producing clean URLs without parameters that further help with SEO. Some plugins specifically help with website SEO.
Disadvantages
Cost of implementations Larger scale implementations may require training, planning, and certifications. Certain WCMSs may require hardware installation. Bigger investments require commitment to the software. Commitment to training, development, and upkeep are costs incurred in any enterprise system.
Cost of maintenance Maintaining WCMSs may require license updates, upgrades, and hardware maintenance.
Latency issues Larger WCMSs can experience latency if hardware infrastructure is not up-to-date, if databases are used incorrectly, or if web cache files that must be reloaded every time data is updated grow too large. Load balancing issues may also impair caching files.
Tool mixing Because the URLs of many WCMSs are dynamically generated with internal parameters and reference information, they are often not stable enough for static pages and other web tools, particularly search engines, to rely on them.
Security WCMSs are often overlooked when hardware, software, and operating systems are patched for security threats. Due to a lack of patching by the user, a hacker can exploit vulnerabilities in unpatched WCMS software to enter an otherwise secure environment. WCMSs should be part of an overall, holistic security patch management program to maintain the highest possible security standards.
| Technology | Computer software | null |
1854949 | https://en.wikipedia.org/wiki/Stibine | Stibine | Stibine (IUPAC name: stibane) is a chemical compound with the formula SbH3. A pnictogen hydride, this colourless, highly toxic gas is the principal covalent hydride of antimony, and a heavy analogue of ammonia. The molecule is pyramidal with H–Sb–H angles of 91.7° and Sb–H distances of 170.7 pm (1.707 Å). The smell of this compound from usual sources (like from reduction of antimony compounds) is reminiscent of arsine, i.e. garlic-like.
Preparation
SbH3 is generally prepared by the reaction of Sb3+ sources with H− equivalents:
2 Sb2O3 + 3 LiAlH4 → 4 SbH3 + 1.5 Li2O + 1.5 Al2O3
4 SbCl3 + 3 NaBH4 → 4 SbH3 + 3 NaCl + 3 BCl3
Alternatively, sources of Sb3− react with protonic reagents (even water) to also produce this unstable gas:
Na3Sb + 3 H2O → SbH3 + 3 NaOH
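As a quick arithmetic check, the element balance of the preparation equations above can be verified programmatically. The sketch below hard-codes the element counts of each species for the NaBH4 route; it is purely illustrative.

```python
# Minimal sketch verifying the element balance of
# 4 SbCl3 + 3 NaBH4 -> 4 SbH3 + 3 NaCl + 3 BCl3 (counts hard-coded for illustration).
from collections import Counter

SPECIES = {
    "SbCl3": {"Sb": 1, "Cl": 3},
    "NaBH4": {"Na": 1, "B": 1, "H": 4},
    "SbH3":  {"Sb": 1, "H": 3},
    "NaCl":  {"Na": 1, "Cl": 1},
    "BCl3":  {"B": 1, "Cl": 3},
}

def element_totals(side):
    """Sum element counts over (coefficient, species) pairs on one side of the equation."""
    totals = Counter()
    for coeff, species in side:
        for element, n in SPECIES[species].items():
            totals[element] += coeff * n
    return totals

reactants = [(4, "SbCl3"), (3, "NaBH4")]
products = [(4, "SbH3"), (3, "NaCl"), (3, "BCl3")]
assert element_totals(reactants) == element_totals(products)  # the equation is balanced
```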
Properties
The chemical properties of SbH3 resemble those of AsH3. Typical for a heavy hydride (e.g. AsH3, H2Te, SnH4), SbH3 is unstable with respect to its elements. The gas decomposes slowly at room temperature but rapidly at 200 °C:
2 SbH3 → 3 H2 + 2 Sb
The decomposition is autocatalytic and can be explosive.
SbH3 is readily oxidized by O2 or even air:
2 SbH3 + 3 O2 → Sb2O3 + 3 H2O
SbH3 exhibits no basicity, but it can be deprotonated:
SbH3 + NaNH2 → NaSbH2 + NH3
The salt is called sodium stibinide, and contains the stibinide anion, SbH2−.
Uses
Stibine is used in the semiconductor industry to dope silicon with small quantities of antimony via the process of chemical vapour deposition (CVD). It has also been used as a silicon dopant in epitaxial layers. Reports claim the use of SbH3 as a fumigant but its instability and awkward preparation contrast with the more conventional fumigant phosphine.
History
As stibine (SbH3) is similar to arsine (AsH3), it is also detected by the Marsh test. This sensitive test detects arsine generated in the presence of arsenic. This procedure, developed circa 1836 by James Marsh, treats a sample with arsenic-free zinc and dilute sulfuric acid: if the sample contains arsenic, gaseous arsine will form. The gas is swept into a glass tube and decomposed by heating to around 250–300 °C. The presence of arsenic is indicated by the formation of a deposit in the heated part of the equipment. The formation of a black mirror deposit in the cool part of the equipment indicates the presence of antimony.
In 1837 Lewis Thomson and Pfaff independently discovered stibine. It took some time before the properties of the toxic gas could be determined, partly because a suitable synthesis was not available. In 1876 Francis Jones tested several synthesis methods, but it was not until 1901 that Alfred Stock determined most of the properties of stibine.
Safety
SbH3 is an unstable flammable gas. It is highly toxic, with an LC50 of 100 ppm in mice.
Toxicology
The toxicity of stibine is distinct from that of other antimony compounds, but similar to that of arsine. Stibine binds to the haemoglobin of red blood cells, causing them to be destroyed by the body. Most cases of stibine poisoning have been accompanied by arsine poisoning, although animal studies indicate that their toxicities are equivalent. The first signs of exposure, which can take several hours to become apparent, are headaches, vertigo, and nausea, followed by the symptoms of hemolytic anemia (high levels of unconjugated bilirubin), hemoglobinuria, and nephropathy.
| Physical sciences | Hydrogen compounds | Chemistry |
1855357 | https://en.wikipedia.org/wiki/Ecosystem%20service | Ecosystem service | Ecosystem services are the various benefits that humans derive from healthy ecosystems. These ecosystems, when functioning well, offer such things as provision of food, natural pollination of crops, clean air and water, decomposition of wastes, or flood control. Ecosystem services are grouped into four broad categories: provisioning services, such as the production of food and water; regulating services, such as the control of climate and disease; supporting services, such as nutrient cycles and oxygen production; and cultural services, such as spiritual and recreational benefits. Evaluations of ecosystem services may include assigning an economic value to them.
For example, estuarine and coastal ecosystems are marine ecosystems that perform the four categories of ecosystem services in several ways. Firstly, their provisioning services include marine resources and genetic resources. Secondly, their supporting services include nutrient cycling and primary production. Thirdly, their regulating services include carbon sequestration (which helps with climate change mitigation) and flood control. Lastly, their cultural services include recreation and tourism.
The Millennium Ecosystem Assessment (MA) in the early 2000s has made this concept better known.
Definition
Ecosystem services or eco-services are defined as the goods and services provided by ecosystems to humans. Per the 2006 Millennium Ecosystem Assessment (MA), ecosystem services are "the benefits people obtain from ecosystems". The MA also delineated the four categories of ecosystem services into provisioning, regulating, supporting, and cultural.
By 2010, there had evolved various working definitions and descriptions of ecosystem services in the literature. To prevent double-counting in ecosystem services audits, for instance, The Economics of Ecosystems and Biodiversity (TEEB) replaced "Supporting Services" in the MA with "Habitat Services" and "ecosystem functions", defined as "a subset of the interactions between ecosystem structure and processes that underpin the capacity of an ecosystem to provide goods and services".
While Gretchen Daily's original definition distinguished between ecosystem goods and ecosystem services, Robert Costanza and colleagues' later work and that of the Millennium Ecosystem Assessment lumped all of these together as ecosystem services.
Categories
Four different types of ecosystem services have been distinguished by the scientific body: regulating services, provisioning services, cultural services and supporting services. An ecosystem does not necessarily offer all four types of services simultaneously; but given the intricate nature of any ecosystem, it is usually assumed that humans benefit from a combination of these services. The services offered by diverse types of ecosystems (forests, seas, coral reefs, mangroves, etc.) differ in nature and in consequence. In fact, some services directly affect the livelihood of neighboring human populations (such as fresh water, food or aesthetic value, etc.) while other services affect general environmental conditions by which humans are indirectly impacted (such as climate change, erosion regulation or natural hazard regulation, etc.).
The Millennium Ecosystem Assessment report 2005 defined ecosystem services as benefits people obtain from ecosystems and distinguishes four categories of ecosystem services, where the so-called supporting services are regarded as the basis for the services of the other three categories.
Provisioning services
Provisioning services consist of all "the products obtained from ecosystems". The following services are also known as ecosystem goods:
food (including seafood and game), crops, wild foods, and spices
raw materials (including lumber, skins, fuelwood, organic matter, fodder, and fertilizer)
genetic resources (including crop improvement genes, and health care)
biogenic minerals
medicinal resources (including pharmaceuticals, chemical models, and test and assay organisms)
energy (hydropower, biomass fuels)
ornamental resources (including fashion, handicrafts, jewelry, pets, worship, decoration, and souvenirs like furs, feathers, ivory, orchids, butterflies, aquarium fish, shells, etc.)
Forests and forest management produce a wide variety of timber products, including roundwood, sawnwood, panels, and engineered wood, e.g., cross-laminated timber, as well as pulp and paper. Besides the production of timber, forestry activities may also result in products that undergo little processing, such as fire wood, charcoal, wood chips and roundwood used in an unprocessed form. Global production and trade of all major wood-based products recorded their highest ever values in 2018. Production, imports and exports of roundwood, sawnwood, wood-based panels, wood pulp, wood charcoal and pellets reached their maximum quantities since 1947, when FAO started reporting global forest product statistics. In 2018, growth in production of the main wood-based product groups ranged from 1 percent (wood-based panels) to 5 percent (industrial roundwood). The fastest growth occurred in the Asia-Pacific, Northern American and European regions, likely due to positive economic growth in these areas. Over 40% of the territory in the European Union is covered by forests, and this forest cover has grown through afforestation by roughly 0.4% per year in recent decades. In the European Union, just 60% of the yearly forest growth is harvested.
Forests also provide non-wood forest products, including fodder, aromatic and medicinal plants, and wild foods. Worldwide, around 1 billion people depend to some extent on wild foods such as wild meat, edible insects, edible plant products, mushrooms and fish, which often contain high levels of key micronutrients. The value of forest foods as a nutritional resource is not limited to low- and middle-income countries; more than 100 million people in the European Union (EU) regularly consume wild food. Some 2.4 billion people – in both urban and rural settings – use wood-based energy for cooking.
Regulating services
Regulating services are the "benefits obtained from the regulation of ecosystem processes". These include:
Purification of water and air
Carbon sequestration (this contributes to climate change mitigation)
Waste decomposition and detoxification
Predation regulates prey populations
Biological control (pest and disease control)
Pollination
Disturbance regulation, i.e. flood protection
Water purification
An example for water purification as an ecosystem service is as follows: In New York City, where the quality of drinking water had fallen below standards required by the U.S. Environmental Protection Agency (EPA), authorities opted to restore the polluted Catskill Watershed that had previously provided the city with the ecosystem service of water purification. Once the input of sewage and pesticides to the watershed area was reduced, natural abiotic processes such as soil absorption and filtration of chemicals, together with biotic recycling via root systems and soil microorganisms, water quality improved to levels that met government standards. The cost of this investment in natural capital was estimated at $1–1.5 billion, which contrasted dramatically with the estimated $6–8 billion cost of constructing a water filtration plant plus the $300 million annual running costs.
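Using the figures cited above, a rough comparison of total cost can be sketched as follows. The 20-year horizon, the choice of upper-end estimates, and the use of simple undiscounted sums are assumptions made only to illustrate the scale of the avoided cost.

```python
# Rough, undiscounted cost comparison for the Catskill example over an assumed
# 20-year horizon (figures from the text; horizon and lack of discounting are
# simplifying assumptions for illustration).
years = 20

watershed_restoration = 1.5e9     # upper-end one-off estimate, USD
plant_capital = 8e9               # upper-end construction estimate, USD
plant_running_per_year = 300e6    # annual running cost, USD

plant_total = plant_capital + plant_running_per_year * years
restoration_total = watershed_restoration

print(f"Filtration plant over {years} years: ${plant_total / 1e9:.1f} billion")
print(f"Watershed restoration:               ${restoration_total / 1e9:.1f} billion")
print(f"Avoided cost:                        ${(plant_total - restoration_total) / 1e9:.1f} billion")
```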
Pollination
Pollination of crops by bees is required for 15–30% of U.S. food production; most large-scale farmers import non-native honey bees to provide this service. A 2005 study of California's agricultural region found that wild bees alone could provide partial or complete pollination services or enhance the services provided by honey bees through behavioral interactions. However, intensified agricultural practices can quickly erode pollination services through the loss of species, and the remaining species are unable to compensate for this. The results of this study also indicate that the proportion of chaparral and oak-woodland habitat available for wild bees within 1–2 km of a farm can stabilize and enhance the provision of pollination services. The presence of such ecosystem elements functions almost like an insurance policy for farmers.
Buffer zones
Coastal and estuarine ecosystems act as buffer zones against natural hazards and environmental disturbances, such as floods, cyclones, tidal surges and storms. The role they play is to "[absorb] a portion of the impact and thus [lessen] its effect on the land". Wetlands (which include saltwater swamps, salt marshes, and similar habitats) and the vegetation they support – trees, root mats, etc. – retain large amounts of water (surface water, snowmelt, rain, groundwater) and then slowly release it back, decreasing the likelihood of floods. Mangrove forests protect coastal shorelines from tidal erosion or erosion by currents, a process that was studied after the 1999 cyclone that hit India. Villages that were surrounded by mangrove forests suffered less damage than villages that were not protected by mangroves.
Supporting services
Supporting services are the services that allow for the other ecosystem services to be present. They have indirect impacts on humans that last over a long period of time. Several services can be considered as being both supporting services and regulating/cultural/provisioning services.
Supporting services include for example nutrient cycling, primary production, soil formation, habitat provision. These services make it possible for the ecosystems to continue providing services such as food supply, flood regulation, and water purification.
Nutrient cycling
Nutrient cycling is the movement of nutrients through an ecosystem by biotic and abiotic processes. The ocean is a vast storage pool for these nutrients, such as carbon, nitrogen and phosphorus. The nutrients are absorbed by the basic organisms of the marine food web and are thus transferred from one organism to the other and from one ecosystem to the other. Nutrients are recycled through the life cycle of organisms as they die and decompose, releasing the nutrients into the neighboring environment. "The service of nutrient cycling eventually impacts all other ecosystem services as all living things require a constant supply of nutrients to survive".
Primary production
Primary production refers to the production of organic matter, i.e., chemically bound energy, through processes such as photosynthesis and chemosynthesis. The organic matter produced by primary producers forms the basis of all food webs. Further, it generates oxygen (O2), a molecule necessary to sustain animals and humans. On average, a human consumes about 550 liters of oxygen per day, whereas plants produce about 1.5 liters of oxygen per 10 grams of growth.
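Taking the figures above at face value, one can estimate how much daily plant growth would be needed to match one person's oxygen consumption; this back-of-the-envelope calculation is purely illustrative.

```python
# Back-of-the-envelope estimate using the figures quoted above (illustrative only).
human_o2_per_day_l = 550      # litres of O2 consumed by one person per day
plant_o2_per_10g_l = 1.5      # litres of O2 produced per 10 g of plant growth

growth_needed_g = human_o2_per_day_l / plant_o2_per_10g_l * 10
print(f"Roughly {growth_needed_g / 1000:.1f} kg of plant growth per person per day")
# prints: Roughly 3.7 kg of plant growth per person per day
```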
Cultural services
Cultural services relate to the non-material world, as they benefit recreational, aesthetic, cognitive and spiritual activities, which are not easily quantifiable in monetary terms. They include:
cultural (including use of nature as motif in books, film, painting, folklore, national symbols, advertising, etc.)
spiritual and historical (including use of nature for religious or heritage value)
recreational experiences (including ecotourism, outdoor sports, and recreation)
science and education (including use of natural systems for school excursions, and scientific discovery)
therapeutic (including eco-therapy, social forestry and animal assisted therapy)
As of 2012, there was discussion of how the concept of cultural ecosystem services could be operationalized and how landscape aesthetics, cultural heritage, outdoor recreation, and spiritual significance could be defined so as to fit into the ecosystem services approach. Some authors argue for models that explicitly link ecological structures and functions with cultural values and benefits. Likewise, there has been a fundamental critique of the concept of cultural ecosystem services that builds on three arguments:
Pivotal cultural values attaching to the natural/cultivated environment rely on an area's unique character that cannot be addressed by methods that use universal scientific parameters to determine ecological structures and functions.
If a natural/cultivated environment has symbolic meanings and cultural values the object of these values are not ecosystems but shaped phenomena like mountains, lakes, forests, and, mainly, symbolic landscapes.
Cultural values do not result from properties produced by ecosystems but are the product of a specific way of seeing within the given cultural framework of symbolic experience.
The Common International Classification of Ecosystem Services (CICES) is a classification scheme developed for accounting systems (such as national accounts), in order to avoid double-counting of supporting services with other provisioning and regulating services.
Recreation and tourism
Sea sports are very popular among coastal populations: surfing, snorkeling, whale watching, kayaking, recreational fishing and more. Many tourists also travel to resorts close to the sea, rivers or lakes to experience these activities and relax near the water. The United Nations Sustainable Development Goal 14 also has targets aimed at enhancing the use of ecosystem services for sustainable tourism, especially in Small Island Developing States.
Estuarine and coastal ecosystem services
Estuarine and marine coastal ecosystems are both marine ecosystems. Together, these ecosystems perform the four categories of ecosystem services in a variety of ways: The provisioning services include forest products, marine products, fresh water, raw materials, biochemical and genetic resources. Regulating services include carbon sequestration (contributing to climate change mitigation) as well as waste treatment and disease regulation and buffer zones. Supporting services of coastal ecosystems include nutrient cycling, biologically mediated habitats and primary production. Cultural services of coastal ecosystems include inspirational aspects, recreation and tourism, science and education.
Coasts and their adjacent areas on and offshore are an important part of a local ecosystem. The mixture of fresh water and salt water (brackish water) in estuaries provides many nutrients for marine life. Salt marshes, mangroves and beaches also support a diversity of plants, animals and insects crucial to the food chain. The high level of biodiversity creates a high level of biological activity, which has attracted human activity for thousands of years. Coasts also create essential material for organisms to live by, including estuaries, wetland, seagrass, coral reefs, and mangroves. Coasts provide habitats for migratory birds, sea turtles, marine mammals, and coral reefs.
Economics
There are questions regarding the environmental and economic values of ecosystem services. Some people may be unaware of the environment in general and humanity's interrelatedness with the natural environment, which may cause misconceptions. Although environmental awareness is rapidly improving in our contemporary world, ecosystem capital and its flow are still poorly understood, threats persist, and we suffer from the so-called 'tragedy of the commons'. Many efforts to inform decision-makers of current versus future costs and benefits now involve organizing and translating scientific knowledge to economics, which articulates the consequences of our choices in comparable units of impact on human well-being. An especially challenging aspect of this process is that interpreting ecological information collected at one spatial-temporal scale does not necessarily mean it can be applied at another; understanding the dynamics of ecological processes relative to ecosystem services is essential in aiding economic decisions. Weighting factors such as a service's irreplaceability or bundled services can also allocate economic value such that goal attainment becomes more efficient.
The economic valuation of ecosystem services also involves social communication and information, areas that remain particularly challenging and are the focus of many researchers. In general, the idea is that although individuals make decisions for any variety of reasons, trends reveal the aggregated preferences of a society, from which the economic value of services can be inferred and assigned. The six major methods for valuing ecosystem services in monetary terms are:
Avoided cost: Services allow society to avoid costs that would have been incurred in the absence of those services (e.g. waste treatment by wetland habitats avoids health costs)
Replacement cost: Services could be replaced with human-made systems (e.g. restoration of the Catskill Watershed cost less than the construction of a water purification plant)
Factor income: Services provide for the enhancement of incomes (e.g. improved water quality increases the commercial take of a fishery and improves the income of fishers)
Travel cost: Service demand may require travel, whose costs can reflect the implied value of the service (e.g. value of ecotourism experience is at least what a visitor is willing to pay to get there)
Hedonic pricing: Service demand may be reflected in the prices people will pay for associated goods (e.g. coastal housing prices exceed that of inland homes)
Contingent valuation: Service demand may be elicited by posing hypothetical scenarios that involve some valuation of alternatives (e.g. visitors willing to pay for increased access to national parks)
A peer-reviewed study published in 1997 estimated the value of the world's ecosystem services and natural capital to be between US$16 and $54 trillion per year, with an average of US$33 trillion per year. However, Salles (2011) indicated 'The total value of biodiversity is infinite, so having debate about what is the total value of nature is actually pointless because we can't live without it'.
As of 2012, many companies were not fully aware of the extent of their dependence and impact on ecosystems and the possible ramifications. Likewise, environmental management systems and environmental due diligence tools are more suited to handle "traditional" issues of pollution and natural resource consumption. Most focus on environmental impacts, not dependence. Several tools and methodologies can help the private sector value and assess ecosystem services, including Our Ecosystem, the 2008 Corporate Ecosystem Services Review, the Artificial Intelligence for Environment & Sustainability (ARIES) project from 2007, the Natural Value Initiative (2012) and InVEST (Integrated Valuation of Ecosystem Services & Tradeoffs, 2012)
To provide an example of a cost comparison: The land of the United States Department of Defense is said to provide substantial ecosystem services to local communities, including benefits to carbon storage, resiliency to climate, and endangered species habitat. As of 2020, the Eglin Air Force Base is said to provide about $110 million in ecosystem services per year, $40 million more than if no base was present.
Payments
Management and policy
Although monetary pricing continues with respect to the valuation of ecosystem services, the challenges in policy implementation and management are considerable. The administration of common pool resources has been a subject of extensive academic pursuit. From defining the problems to finding solutions that can be applied in practical and sustainable ways, there is much to overcome. Considering options must balance present and future human needs, and decision-makers must frequently work from valid but incomplete information. Existing legal policies are often considered insufficient since they typically pertain to human health-based standards that are mismatched with the means necessary to protect ecosystem health and services. In 2000, to improve the information available, the implementation of an Ecosystem Services Framework (ESF) was suggested, which integrates the biophysical and socio-economic dimensions of protecting the environment and is designed to guide institutions through multidisciplinary information and jargon, helping to direct strategic choices.
As of 2005, local to regional collective management efforts were considered appropriate for services like crop pollination or resources like water. Another approach that became increasingly popular during the 1990s is the marketing of ecosystem services protection. Payment and trading of services is an emerging worldwide small-scale solution where one can acquire credits for activities such as sponsoring the protection of carbon sequestration sources or the restoration of ecosystem service providers. In some cases, banks for handling such credits have been established and conservation companies have even gone public on stock exchanges, defining an ever more parallel link with economic endeavors and opportunities for tying into social perceptions. However, crucial for implementation are clearly defined land rights, which are often lacking in many developing countries. In particular, many forest-rich developing countries suffering deforestation experience conflict between different forest stakeholders. In addition, concerns for such global transactions include inconsistent compensation for services or resources sacrificed elsewhere and misconceived warrants for irresponsible use. As of 2001, another approach focused on protecting ecosystem service biodiversity hotspots. Recognition that the conservation of many ecosystem services aligns with more traditional conservation goals (i.e. biodiversity) has led to the suggested merging of objectives for maximizing their mutual success. This may be particularly strategic when employing networks that permit the flow of services across landscapes, and might also facilitate securing the financial means to protect services through a diversification of investors.
For example, as of 2013, there had been interest in the valuation of ecosystem services provided by shellfish production and restoration. A keystone species, low in the food chain, bivalve shellfish such as oysters support a complex community of species by performing a number of functions essential to the diverse array of species that surround them. There is also increasing recognition that some shellfish species may impact or control many ecological processes; so much so that they are included on the list of "ecosystem engineers"—organisms that physically, biologically or chemically modify the environment around them in ways that influence the health of other organisms. Many of the ecological functions and processes performed or affected by shellfish contribute to human well-being by providing a stream of valuable ecosystem services over time by filtering out particulate materials and potentially mitigating water quality issues by controlling excess nutrients in the water.
As of 2018, the concept of ecosystem services had not been properly implemented into international and regional legislation yet.
Notwithstanding, the United Nations Sustainable Development Goal 15 has a target to ensure the conservation, restoration, and sustainable use of ecosystem services.
An estimated $125 trillion to $140 trillion is added to the economy each year by all ecosystem services. However, many of these services are at risk due to climate and other anthropogenic impacts. Climate-driven shifts in biome ranges are expected to cause a 9% decline in ecosystem services on average at the global scale by 2100.
Ecosystem-based adaptation (EbA)
Land use change decisions
Ecosystem services decisions require making complex choices at the intersection of ecology, technology, society, and the economy. The process of making ecosystem services decisions must consider the interaction of many types of information, honor all stakeholder viewpoints (including regulatory agencies, proposal proponents, decision makers, residents, and NGOs), and measure the impacts on all four parts of the intersection. These decisions are usually spatial, always multi-objective, and based on uncertain data, models, and estimates. Often it is the combination of the best science with stakeholder values, estimates and opinions that drives the process.
One analytical study modeled the stakeholders as agents to support water resource management decisions in the Middle Rio Grande basin of New Mexico. This study focused on modeling the stakeholder inputs across a spatial decision, but ignored uncertainty. Another study used Monte Carlo methods to exercise econometric models of landowner decisions in a study of the effects of land-use change; here the stakeholder inputs were modeled as random effects to reflect the uncertainty. A third study used a Bayesian decision support system both to model the uncertainty in the scientific information (using Bayesian networks) and to assist in collecting and fusing the input from stakeholders. This study was about siting wave energy devices off the Oregon Coast, but presents a general method for managing uncertain spatial science and stakeholder information in a decision-making environment. Remote sensing data and analyses can be used to assess the health and extent of land cover classes that provide ecosystem services, which aids in planning, management, monitoring of stakeholders' actions, and communication between stakeholders.
In the Baltic countries, scientists, nature conservationists and local authorities are implementing an integrated planning approach for grassland ecosystems. They are developing an online integrated planning tool based on GIS (geographic information system) technology that will help planners choose the best management solution for a specific grassland. It will look holistically at the processes in the countryside and help to find the best grassland management solutions by taking into account both the natural and socioeconomic factors of the particular site.
History
While the notion of human dependence on Earth's ecosystems reaches back to the start of the existence of Homo sapiens, the term 'natural capital' was first coined by E. F. Schumacher in 1973 in his book Small is Beautiful. Recognition of how ecosystems could provide complex services to humankind dates back at least to Plato (c. 400 BC), who understood that deforestation could lead to soil erosion and the drying of springs. Modern ideas of ecosystem services probably began when Marsh challenged, in 1864, the idea that Earth's natural resources are unbounded by pointing out changes in soil fertility in the Mediterranean. It was not until the late 1940s that three key authors—Henry Fairfield Osborn, Jr, William Vogt, and Aldo Leopold—promoted recognition of human dependence on the environment.
In 1956, Paul Sears drew attention to the critical role of the ecosystem in processing wastes and recycling nutrients. In 1970, Paul Ehrlich and Rosa Weigert called attention to "ecological systems" in their environmental science textbook and "the most subtle and dangerous threat to man's existence ... the potential destruction, by man's own activities, of those ecological systems upon which the very existence of the human species depends".
The term environmental services was introduced in a 1970 report of the Study of Critical Environmental Problems, which listed services including insect pollination, fisheries, climate regulation and flood control. In following years, variations of the term were used, but eventually 'ecosystem services' became the standard in scientific literature.
The ecosystem services concept has continued to expand and includes socio-economic and conservation objectives.
| Biology and health sciences | Ecology | Biology |
20786042 | https://en.wikipedia.org/wiki/Cybernetics | Cybernetics | Cybernetics is the transdisciplinary study of circular causal processes such as feedback and recursion, where the effects of a system's actions (its outputs) return as inputs to that system, influencing subsequent action. It is concerned with general principles that are relevant across multiple contexts, including in ecological, technological, economic, biological, cognitive and social systems and also in practical activities such as designing, learning, and managing. Cybernetics' transdisciplinary character has meant that it intersects with a number of other fields, leading to it having both wide influence and diverse interpretations.
The field is named after an example of circular causal feedback—that of steering a ship (the ancient Greek κυβερνήτης (kybernḗtēs) refers to the person who steers a ship). In steering a ship, the position of the rudder is adjusted in continual response to the effect it is observed as having, forming a feedback loop through which a steady course can be maintained in a changing environment, responding to disturbances from cross winds and tide.
Cybernetics has its origins in exchanges between numerous disciplines during the 1940s. Initial developments were consolidated through meetings such as the Macy Conferences and the Ratio Club. Early focuses included purposeful behaviour, neural networks, heterarchy, information theory, and self-organising systems. As cybernetics developed, it became broader in scope to include work in design, family therapy, management and organisation, pedagogy, sociology, the creative arts and the counterculture.
Definitions
Cybernetics has been defined in a variety of ways, reflecting "the richness of its conceptual base." One of the best known definitions is that of the American scientist Norbert Wiener, who characterised cybernetics as concerned with "control and communication in the animal and the machine." Another early definition is that of the Macy cybernetics conferences, where cybernetics was understood as the study of "circular causal and feedback mechanisms in biological and social systems." Margaret Mead emphasised the role of cybernetics as "a form of cross-disciplinary thought which made it possible for members of many disciplines to communicate with each other easily in a language which all could understand."
Other definitions include: "the art of governing or the science of government" (André-Marie Ampère); "the art of steersmanship" (Ross Ashby); "the study of systems of any nature which are capable of receiving, storing, and processing information so as to use it for control" (Andrey Kolmogorov); and "a branch of mathematics dealing with problems of control, recursiveness, and information, focuses on forms and the patterns that connect" (Gregory Bateson).
Etymology
The Ancient Greek term κυβερνητικός (kubernētikos, '(good at) steering') appears in Plato's Republic and Alcibiades, where the metaphor of a steersman is used to signify the governance of people. The French word cybernétique was also used in 1834 by the physicist André-Marie Ampère to denote the sciences of government in his classification system of human knowledge.
According to Norbert Wiener, the word cybernetics was coined by a research group involving himself and Arturo Rosenblueth in the summer of 1947. It has been attested in print since at least 1948 through Wiener's book Cybernetics: Or Control and Communication in the Animal and the Machine. In the book, Wiener states:
Moreover, Wiener explains, the term was chosen to recognize James Clerk Maxwell's 1868 publication on feedback mechanisms involving governors, noting that the term governor is also derived from κυβερνήτης (kubernḗtēs) via a Latin corruption, gubernator. Finally, Wiener motivates the choice by noting that the steering engines of a ship are "one of the earliest and best-developed forms of feedback mechanisms".
History
First wave
The initial focus of cybernetics was on parallels between regulatory feedback processes in biological and technological systems. Two foundational articles were published in 1943: "Behavior, Purpose and Teleology" by Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, based on the research on living organisms that Rosenblueth did in Mexico, and the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" by Warren McCulloch and Walter Pitts. The foundations of cybernetics were then developed through a series of transdisciplinary conferences funded by the Josiah Macy, Jr. Foundation, between 1946 and 1953. The conferences were chaired by McCulloch, and participants included Ross Ashby, Gregory Bateson, Heinz von Foerster, Margaret Mead, John von Neumann, and Norbert Wiener. In the UK, similar focuses were explored by the Ratio Club, an informal dining club of young psychiatrists, psychologists, physiologists, mathematicians and engineers that met between 1949 and 1958. Wiener introduced the neologism cybernetics to denote the study of "teleological mechanisms" and popularized it through the book Cybernetics: Or Control and Communication in the Animal and the Machine.
During the 1950s, cybernetics was developed as a primarily technical discipline, such as in Qian Xuesen's 1954 "Engineering Cybernetics". In the Soviet Union, cybernetics was initially regarded with suspicion but became accepted from the mid to late 1950s.
By the 1960s and 1970s, however, cybernetics' transdisciplinarity fragmented, with its technical focuses splitting off into separate fields. Artificial intelligence (AI) was founded as a distinct discipline at the Dartmouth workshop in 1956, differentiating itself from the broader cybernetics field. After some uneasy coexistence, AI gained funding and prominence. Consequently, cybernetic sciences such as the study of artificial neural networks were downplayed. Similarly, computer science became defined as a distinct academic discipline in the 1950s and early 1960s.
Second wave
The second wave of cybernetics came to prominence from the 1960s onwards, with its focus inflecting away from technology toward social, ecological, and philosophical concerns. It was still grounded in biology, notably Maturana and Varela's autopoiesis, and built on earlier work on self-organising systems and the presence of anthropologists Mead and Bateson in the Macy meetings. The Biological Computer Laboratory, founded in 1958 and active until the mid-1970s under the direction of Heinz von Foerster at the University of Illinois at Urbana–Champaign, was a major incubator of this trend in cybernetics research.
Focuses of the second wave of cybernetics included management cybernetics, such as Stafford Beer's biologically inspired viable system model; work in family therapy, drawing on Bateson; social systems, such as in the work of Niklas Luhmann; epistemology and pedagogy, such as in the development of radical constructivism. Cybernetics' core theme of circular causality was developed beyond goal-oriented processes to concerns with reflexivity and recursion. This was especially so in the development of second-order cybernetics (or the cybernetics of cybernetics), developed and promoted by Heinz von Foerster, which focused on questions of observation, cognition, epistemology, and ethics.
The 1960s onwards also saw cybernetics begin to develop exchanges with the creative arts, design, and architecture, notably with the Cybernetic Serendipity exhibition (ICA, London, 1968), curated by Jasia Reichardt, and the unrealised Fun Palace project (London, unrealised, 1964 onwards), where Gordon Pask was consultant to architect Cedric Price and theatre director Joan Littlewood.
Third wave
From the 1990s onwards, there has been a renewed interest in cybernetics from a number of directions. Early cybernetic work on artificial neural networks has been returned to as a paradigm in machine learning and artificial intelligence. The entanglements of society with emerging technologies has led to exchanges with feminist technoscience and posthumanism. Re-examinations of cybernetics' history have seen science studies scholars emphasising cybernetics' unusual qualities as a science, such as its "performative ontology". Practical design disciplines have drawn on cybernetics for theoretical underpinning and transdisciplinary connections. Emerging topics include how cybernetics' engagements with social, human, and ecological contexts might come together with its earlier technological focus, whether as a critical discourse or a "new branch of engineering".
Key concepts and theories
The central theme in cybernetics is feedback. Feedback is a process where the observed outcomes of actions are taken as inputs for further action in ways that support the pursuit, maintenance, or disruption of particular conditions, forming a circular causal relationship. In steering a ship, the helmsperson maintains a steady course in a changing environment by adjusting their steering in continual response to the effect it is observed as having.
Other examples of circular causal feedback include: technological devices such as the thermostat, where the action of a heater responds to measured changes in temperature regulating the temperature of the room within a set range, and the centrifugal governor of a steam engine, which regulates the engine speed; biological examples such as the coordination of volitional movement through the nervous system and the homeostatic processes that regulate variables such as blood sugar; and processes of social interaction such as conversation.
Negative feedback processes are those that maintain particular conditions by reducing (hence 'negative') the difference from a desired state, such as where a thermostat turns on a heater when it is too cold and turns a heater off when it is too hot. Positive feedback processes increase (hence 'positive') the difference from a desired state. An example of positive feedback is when a microphone picks up the sound that it is producing through a speaker, which is then played through the speaker, and so on.
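A minimal simulation of the thermostat example can make the negative-feedback loop concrete. The numerical constants (setpoint, hysteresis, heating and cooling rates) are arbitrary assumptions chosen only for illustration.

```python
# Minimal simulation of negative feedback: a thermostat switching a heater on and off.
# All constants are arbitrary assumptions chosen for illustration.
setpoint = 20.0      # desired room temperature, degrees C
hysteresis = 0.5     # switching band around the setpoint
temperature = 15.0   # initial room temperature
heater_on = False

for minute in range(60):
    # The observed output (temperature) feeds back into the next control action.
    if temperature < setpoint - hysteresis:
        heater_on = True
    elif temperature > setpoint + hysteresis:
        heater_on = False
    # Very simple room model: the heater warms the room, the environment cools it.
    temperature += 0.4 if heater_on else -0.2

print(f"Temperature after 60 one-minute steps: {temperature:.1f} C")
```

Because the difference from the desired state is reduced on each pass around the loop, the temperature settles into a narrow band around the setpoint; reversing the sign of the correction would instead produce the runaway behaviour characteristic of positive feedback.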
In addition to feedback, cybernetics is concerned with other forms of circular processes including: feedforward, recursion, and reflexivity.
Other key concepts and theories in cybernetics include:
Autopoiesis
Black box
Conversation theory
Double bind theory: Double binds are patterns created in interaction between two or more parties in ongoing relationships where there is a contradiction between messages at different logical levels that creates a situation with emotional threat but no possibility of withdrawal from the situation and no way to articulate the problem. The theory was first described by Gregory Bateson and colleagues in the 1950s with regard to the origins of schizophrenia, but it is also characteristic of many other social contexts.
Experimental epistemology
Good regulator theorem
Heterarchy
Perceptual control theory: A model of behavior based on the properties of negative feedback (cybernetic) control loops. A key insight of PCT is that the controlled variable is not the output of the system (the behavioral actions), but its input, "perception". The theory came to be known as "perceptual control theory" to distinguish from those control theorists that assert or assume that it is the system's output that is controlled. Method of levels is an approach to psychotherapy based on perceptual control theory where the therapist aims to help the patient shift their awareness to higher levels of perception in order to resolve conflicts and allow reorganization to take place.
Radical constructivism
Second-order cybernetics: Also known as the cybernetics of cybernetics, second-order cybernetics is the recursive application of cybernetics to itself and the practice of cybernetics according to such a critique.
Schismogenesis
Self-organisation
Social systems theory
Syntegrity
Variety and Requisite Variety
Viable system model
Related fields and applications
Cybernetics' central concept of circular causality is of wide applicability, leading to diverse applications and relations with other fields. Many of the initial applications of cybernetics focused on engineering, biology, and exchanges between the two, such as medical cybernetics and robotics, and topics such as neural networks and heterarchy. In the social and behavioral sciences, cybernetics has included and influenced work in anthropology, sociology, economics, family therapy, cognitive science, and psychology.
As cybernetics has developed, it broadened in scope to include work in management, design, pedagogy, and the creative arts, while also developing exchanges with constructivist philosophies, counter-cultural movements, and media studies. The development of management cybernetics has led to a variety of applications, notably to the national economy of Chile under the Allende government in Project Cybersyn. In design, cybernetics has been influential on interactive architecture, human-computer interaction, design research, and the development of systemic design and metadesign practices.
Cybernetics is often understood within the context of systems science, systems theory, and systems thinking. Systems approaches influenced by cybernetics include critical systems thinking, which incorporates the viable system model; systemic design; and system dynamics, which is based on the concept of causal feedback loops.
Many fields trace their origins in whole or part to work carried out in cybernetics, or were partially absorbed into cybernetics when it was developed. These include artificial intelligence, bionics, cognitive science, control theory, complexity science, computer science, information theory and robotics. Some aspects of modern artificial intelligence, particularly the social machine, are often described in cybernetic terms.
Journals and societies
Academic journals with focuses in cybernetics include:
Constructivist Foundations
Cybernetics and Human Knowing
Cybernetics and Systems
Enacting Cybernetics. An open access journal published by the Cybernetics Society and hosted by Ubiquity Press.
Biological Cybernetics
IEEE Transactions on Systems, Man, and Cybernetics: Systems
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Cybernetics
IEEE Transactions on Computational Social Systems
Kybernetes
Academic societies primarily concerned with cybernetics or aspects of it include:
American Society for Cybernetics (ASC), founded in 1964
British Cybernetics Society (CybSoc)
Metaphorum: The Metaphorum group was set up in 2003 to develop Stafford Beer's legacy in organizational cybernetics. The group was born in a syntegration in 2003 and has in every year since held a conference on issues related to organizational cybernetics theory and practice.
IEEE Systems, Man, and Cybernetics Society
RC51 Sociocybernetics: RC51 is a research committee of the International Sociological Association promoting the development of (socio)cybernetic theory and research within the social sciences.
SCiO (Systems and Complexity in Organisation) is a community of systems practitioners who believe that traditional approaches to running organisations are no longer capable of dealing with the complexity and turbulence faced by organisations today, and are responsible for many of the problems we see. SCiO delivers a masters-level apprenticeship and a certification in systems practice.
| Technology | Biotechnology | null |
2571938 | https://en.wikipedia.org/wiki/Jacob%27s%20staff | Jacob's staff | The term Jacob's staff is used to refer to several things, also known as cross-staff, a ballastella, a fore-staff, a ballestilla, or a balestilha. In its most basic form, a Jacob's staff is a stick or pole with length markings; most staffs are much more complicated than that, and usually contain a number of measurement and stabilization features. The two most frequent uses are:
in astronomy and navigation for a simple device to measure angles, later replaced by the more precise sextants;
in surveying (and scientific fields that use surveying techniques, such as geology and ecology) for a vertical rod that penetrates or sits on the ground and supports a compass or other instrument.
The simplest use of a Jacob's staff is to make qualitative judgements of the height and angle of an object relative to the user of the staff.
In astronomy and navigation
In navigation the instrument is also called a cross-staff and was used to determine angles, for instance the angle between the horizon and Polaris or the sun to determine a vessel's latitude, or the angle between the top and bottom of an object to determine the distance to said object if its height is known, or the height of the object if its distance is known, or the horizontal angle between two visible locations to determine one's point on a map.
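As an illustration of these uses, the underlying geometry is plain right-triangle trigonometry. The following sketch assumes a vertical angle measured from the observer's position and a hypothetical 30 m object; it is illustrative only, not a description of historical practice.

```python
import math

def distance_from_angle(object_height_m, angle_deg):
    """Distance to an object of known height that subtends the given vertical angle."""
    return object_height_m / math.tan(math.radians(angle_deg))

def height_from_angle(distance_m, angle_deg):
    """Height of an object at a known distance that subtends the given vertical angle."""
    return distance_m * math.tan(math.radians(angle_deg))

# Hypothetical example: a 30 m tower subtending 2 degrees lies roughly 859 m away.
print(round(distance_from_angle(30.0, 2.0)))
```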
The Jacob's staff, when used for astronomical observations, was also referred to as a radius astronomicus. With the demise of the cross-staff, in the modern era the name "Jacob's staff" is applied primarily to the device used to provide support for surveyor's instruments.
Etymology
The origin of the name of the instrument is not certain. Some refer to the Biblical patriarch Jacob, specifically in the Book of Genesis (). It may also take its name after its resemblance to Orion, referred to by the name of Jacob on some medieval star charts. Another possible source is the Pilgrim's staff, the symbol of St James (Jacobus in Latin). The name cross staff simply comes from its cruciform shape.
History
The original Jacob's staff was developed in the 14th century as a single-pole device used for making astronomical measurements. It was first described by the French-Jewish mathematician Levi ben Gerson of Provence in his "Book of the Wars of the Lord", written in Hebrew and translated into Latin. He used a Hebrew name for the staff that translates to "Revealer of Profundities", while the term "Jacob's staff" was used by his Christian contemporaries. Its invention was likely due to his fellow French-Jewish astronomer Jacob ben Makir, who also lived in Provence in the same period. Attribution to the 15th-century Austrian astronomer Georg Purbach is less likely, because Purbach was not born until 1423. (Such attributions may refer to a different instrument with the same name.) Its origins may be traced to the Chaldeans around 400 BCE.
Although it has become quite accepted that ben Gerson first described Jacob's staff, the British Sinologist Joseph Needham theorizes that the Song dynasty Chinese scientist Shen Kuo (1031–1095), in his Dream Pool Essays of 1088, described a Jacob's staff. Shen was an antiquarian interested in ancient objects; after he unearthed an ancient crossbow-like device from a home's garden in Jiangsu, he realized it had a sight with a graduated scale that could be used to measure the heights of distant mountains, likening it to how mathematicians measure heights by using right-angle triangles. He wrote that when one viewed the whole breadth of a mountain with it, the distance on the instrument was long; when viewing a small part of the mountainside, the distance was short; this, he wrote, was due to the cross piece that had to be pushed further away from the eye, while the graduation started from the further end. Needham does not mention any practical application of this observation.
During the European Renaissance, the Dutch mathematician and surveyor Adriaan Metius developed his own Jacob's staff; the Dutch mathematician Gemma Frisius made improvements to this instrument. In the 15th century, the German mathematician Johannes Müller (called Regiomontanus) made the instrument popular in geodesic and astronomical measurements.
Construction
In the original form of the cross-staff, the pole or main staff was marked with graduations for length. The cross-piece (BC in the drawing to the right), also called the transom or transversal, slides up and down on the main staff. On older instruments, the ends of the transom were cut straight across. Newer instruments had brass fittings on the ends, with holes in the brass for observation. (In marine archaeology, these fittings are often the only components of a cross-staff that survive.)
It was common to provide several transoms, each covering a different range of angles; three transoms were typical. In later instruments, separate transoms were abandoned in favour of just one with pegs to indicate the ends. These pegs were mounted in one of several pairs of holes symmetrically located on either side of the transom. This provided the same capability with fewer parts. Frisius' version had a sliding vane on the transom as an end point.
Usage
The user places one end of the main staff against their cheek, just below the eye. By sighting the horizon at the end of the lower part of the transom (or through the hole in the brass fitting) [B], then adjusting the cross arm on the main arm until the sun is at the other end of the transom [C], the altitude can be determined by reading the position of the cross arm on the scale on the main staff. This value was converted to an angular measurement by looking up the value in a table.
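The geometry behind that table can be sketched as follows: for a transom of length L centred on the staff and set a distance d from the eye, the sighted angle is 2·arctan(L / 2d). The figures below are made-up values, not taken from any historical instrument.

```python
import math

def cross_staff_angle_deg(transom_length_m, distance_from_eye_m):
    """Angle between the sight lines to the two ends of a transom centred on the staff."""
    return math.degrees(2 * math.atan(transom_length_m / (2 * distance_from_eye_m)))

# Hypothetical reading: a 0.30 m transom positioned 0.60 m from the eye end of the staff.
print(round(cross_staff_angle_deg(0.30, 0.60), 1))  # about 28.1 degrees
```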
Cross-staff for navigation
The original version was not reported to be used at sea until the Age of Discoveries. Its use was reported by João de Lisboa in his Treatise on the Nautical Needle of 1514. Johannes Werner suggested the cross-staff be used at sea in 1514 and improved instruments were introduced for use in navigation. John Dee introduced it to England in the 1550s. In the improved versions, the rod was graduated directly in degrees. This variant of the instrument is not correctly termed a Jacob's staff but is a cross-staff.
The cross-staff was difficult to use. In order to get consistent results, the observer had to position the end of the pole precisely against his cheek. He had to observe the horizon and a star in two different directions while not moving the instrument as he shifted his gaze from one to the other. In addition, observations of the sun required the navigator to look directly at the sun. This could be an uncomfortable exercise and made it difficult to obtain an accurate altitude for the sun. Mariners took to mounting smoked glass on the ends of the transoms to reduce the glare of the sun.
As a navigational tool, this instrument was eventually replaced, first by the backstaff or quadrant, neither of which required the user to stare directly into the sun, and later by the octant and the sextant. Perhaps influenced by the backstaff, some navigators modified the cross-staff to operate more like a backstaff. Vanes were added to the ends of the longest cross-piece and another to the end of the main staff. The instrument was reversed so that the shadow of the upper vane on the cross piece fell on the vane at the end of the staff. The navigator held the instrument so that he viewed the horizon lined up with the lower vane and the vane at the end of the staff. By aligning the horizon with the shadow of the sun on the vane at the end of the staff, the elevation of the sun could be determined. This actually increased the accuracy of the instrument, as the navigator no longer had to position the end of the staff precisely on his cheek.
Another variant of the cross-staff was a spiegelboog, invented in 1660 by the Dutchman, Joost van Breen.
Ultimately, the cross-staff could not compete with the backstaff in many countries. In terms of handling, the backstaff was found to be easier to use. However, several authors have shown that in terms of accuracy, the cross-staff was superior to the backstaff. Backstaves were no longer allowed on board Dutch East India Company vessels as of 1731, with octants not permitted until 1748.
In surveying
In surveying, the term jacob staff refers to a monopod, a single straight rod or staff made of nonferrous material, pointed and metal-clad at the bottom for penetrating the ground. It also has a screw base and occasionally a ball joint on the mount, and is used for supporting a compass, transit, or other instrument.
The term cross-staff may also have a different meaning in the history of surveying. While the astronomical cross-staff was used in surveying for measuring angles, two other devices referred to as a cross-staff were also employed.
Cross-head, cross-sight, surveyor's cross or cross - a drum- or box-shaped device mounted on a pole. It had two sets of mutually perpendicular sights. This device was used by surveyors to measure offsets. Sophisticated versions had a compass and spirit levels on the top. The French versions were frequently eight-sided rather than round.
Optical square - an improved version of the cross-head, the optical square used two silvered mirrors at 45° to each other. This permitted the surveyor to see along both axes of the instrument at once.
In the past, many surveyor's instruments were used on a Jacob's staff. These include:
Cross-head, cross-sight, surveyor's cross or cross
Graphometer
Circumferentor
Holland circle
Miner's dial
Optical square
Surveyor's sextant
Surveyor's target
Abney level
Some devices, such as the modern optical targets for laser-based surveying, are still in common use on a Jacob's staff.
In geology
In geology, the Jacob's staff is mainly used to measure stratigraphic thicknesses in the field, especially when bedding is not visible or unclear (e.g., covered outcrop) and when, due to the configuration of an outcrop, the apparent and true thicknesses of beds diverge, making the use of a tape measure difficult. There is a certain level of error to be expected when using this tool, owing to the lack of an exact reference for measuring stratigraphic thickness. High-precision designs include a laser able to slide vertically along the staff and to rotate in a plane parallel to bedding.
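For comparison, the trigonometric correction that the staff is designed to sidestep can be sketched as follows. This assumes a traverse measured straight up a planar slope, perpendicular to strike, with the beds dipping into the slope; the values and the function name are hypothetical.

```python
import math

def true_thickness_m(traverse_length_m, slope_deg, dip_deg):
    """True stratigraphic thickness from a slope-parallel traverse, assuming the
    traverse runs up-slope perpendicular to strike and the beds dip into the slope."""
    return traverse_length_m * math.sin(math.radians(slope_deg + dip_deg))

# Hypothetical example: a 40 m traverse up a 20-degree slope across beds dipping 25 degrees.
print(round(true_thickness_m(40.0, 20.0, 25.0), 1))  # about 28.3 m
```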
| Technology | Navigation | null |
2575985 | https://en.wikipedia.org/wiki/Herpetic%20whitlow | Herpetic whitlow | A herpetic whitlow is a herpes lesion (whitlow), typically on a finger or thumb, caused by the herpes simplex virus (HSV). Occasionally infection occurs on the toes or on the nail cuticle. Herpetic whitlow can be caused by infection with HSV-1 or HSV-2. HSV-1 whitlow is often contracted by health care workers who come in contact with the virus; it is most commonly contracted by dental workers and medical workers exposed to oral secretions. It is also often observed in thumb-sucking children with primary HSV-1 oral infection (autoinoculation) prior to seroconversion, and in adults aged 20 to 30 following contact with HSV-2-infected genitals.
Symptoms and signs
Symptoms of herpetic whitlow include swelling, reddening, and tenderness of the infected part. This may be accompanied by fever and swollen lymph nodes. Small, clear vesicles initially form individually, then merge and become cloudy, unlike in bacterial whitlow, where there is pus. Associated pain often seems large relative to the physical symptoms. The herpetic whitlow lesion usually heals in two to three weeks. The virus may reside in axillary sensory ganglia, causing recurrent herpetic lesions on that arm or digits. Blistering can occur in severe cases.
Causes
In children the primary source of infection is the orofacial area, and it is commonly inferred that the virus (in this case commonly HSV-1) is transferred by the cutting, chewing or sucking of fingernail or thumbnail.
In adults, it is more common for the primary source to be the genital region, with a corresponding preponderance of HSV-2. It is also seen in adult health care workers such as dentists because of increased exposure to the herpes virus.
Contact sports are also a potential source of infection with herpetic whitlows.
Treatment
Although it is a self-limited illness, oral or intravenous antiviral treatments, particularly acyclovir, have been used in the management of immunocompromised or severely infected patients. It is usually given when the condition fails to improve on its own. Topical acyclovir has not been shown to be effective in management of herpetic whitlow. Famciclovir has been demonstrated to effectively treat and prevent recurrent episodes. Lancing or surgically debriding the lesion may make it worse by causing a superinfection or encephalitis.
Prognosis
Even though the disease is self-limiting, as with many herpes infections, the virus lies dormant in the peripheral nervous system. The disease recurs in about 20–50% of people. The most severe infection is usually the first one, with recurrences subsequently getting milder. The lesions the disease makes will either dry out, or burst, followed by healing. If the infected area is not touched, scars usually do not occur. The immunocompromised may have a hard time recovering, and have more frequent recurrences.
| Biology and health sciences | Viral diseases | Health |
2576307 | https://en.wikipedia.org/wiki/Oxide%20mineral | Oxide mineral | The oxide mineral class includes those minerals in which the oxide anion (O2−) is bonded to one or more metal ions. The hydroxide-bearing minerals are typically included in the oxide class. Minerals with complex anion groups such as the silicates, sulfates, carbonates and phosphates are classed separately.
Simple oxides
XO form
Periclase group
Periclase
Manganosite
Zincite group
Zincite
Bromellite
Tenorite
Litharge
X2O form
Cuprite
Ice
X2O3 form
Hematite group
Corundum
Hematite
Ilmenite
XO2 form
Rutile group
Rutile
Pyrolusite
Cassiterite
Baddeleyite
Uraninite
Thorianite
XY2O4 form
Spinel group
Spinel
Gahnite
Magnetite
Franklinite
Chromite
Chrysoberyl
Columbite
Hydroxide subgroup:
Brucite
Manganite
Romanèchite
Goethite group:
Diaspore
Goethite
Nickel–Strunz class 4: oxides
IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses it to modify the Nickel–Strunz classification (mindat.org, 10 ed, pending publication).
Abbreviations:
"*": discredited (IMA/CNMNC status)
"?": questionable/doubtful (IMA/CNMNC status)
"REE": Rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu)
"PGE": Platinum-group element (Ru, Rh, Pd, Os, Ir, Pt)
03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates:
Neso: insular (from )
Soro: grouping (from ; heap, mound (especially of corn))
Cyclo: ring
Ino: chain (from [genitive: ], fibre)
Phyllo: sheet (from )
Tekto: three-dimensional framework
Nickel–Strunz code scheme: NN.XY.##x (a small parsing sketch follows this list)
NN: Nickel–Strunz mineral class number
X: Nickel–Strunz mineral division letter
Y: Nickel–Strunz mineral family letter
##x: Nickel–Strunz mineral/group number, x add-on letter
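A minimal sketch of how such a code string could be split into its parts; the function and the regular expression are illustrative only and not part of any published Nickel–Strunz tooling.

```python
import re

def parse_nickel_strunz(code):
    """Split a code of the form NN.XY.##x into class, division, family, group and add-on."""
    match = re.fullmatch(r"(\d{2})\.([A-Z])([A-Z])\.(\d{2})([a-z]?)", code)
    if match is None:
        raise ValueError(f"not a NN.XY.##x code: {code!r}")
    return {
        "class": match.group(1),     # Nickel-Strunz mineral class number
        "division": match.group(2),  # division letter
        "family": match.group(3),    # family letter
        "group": match.group(4),     # mineral/group number
        "addon": match.group(5),     # optional add-on letter
    }

# Example: rutile appears above under 04.DB.05.
print(parse_nickel_strunz("04.DB.05"))
```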
Class: oxides
04.A Metal:Oxygen = 2.1 and 1:1
04.AA Cation:Anion (M:O) = 2:1 (and 1.8:1): 05 Ice, 10 Cuprite, 15 Paramelaconite
04.AB M:O = 1:1 (and up to 1:1.25); with small to medium-sized cations only: 05 Crednerite, 10 Tenorite; 15 Delafossite, 15 Mcconnellite; 20 Bromellite, 20 Zincite; 25 Lime, 25 Bunsenite, 25 Monteponite, 25 Manganosite, 25 Periclase, 25 Wüstite
04.AC M:O = 1:1 (and up to 1:1.25); with large cations (± smaller ones): 05 Swedenborgite; 10 Brownmillerite, 10 Srebrodolskite; 15 Montroydite, 20 Litharge, 20 Romarchite, 25 Massicot
04.B Metal:Oxygen = 3:4 and similar
04.BA With small and medium-sized cations: 05 Chrysoberyl, 10 Manganostibite
04.BB With only medium-sized cations: 05 Filipstadite, 05 Donathite?, 05 Gahnite, 05 Galaxite, 05 Hercynite, 05 Spinel, 05 Cochromite, 05 Chromite, 05 Magnesiochromite, 05 Manganochromite, 05 Nichromite, 05 Zincochromite, 05 Magnetite, 05 Cuprospinel, 05 Franklinite, 05 Jacobsite, 05 Magnesioferrite, 05 Trevorite, 05 Brunogeierite, 05 Coulsonite, 05 Magnesiocoulsonite, 05 Qandilite, 05 Ulvospinel, 05 Vuorelainenite; 10 Hydrohetaerolite, 10 Hausmannite, 10 Iwakiite, 10 Hetaerolite; 15 Maghemite, 20 Tegengrenite, 25 Xieite
04.BC With medium-sized and large cations: 05 Marokite, 10 Dmitryivanovite
04.BD With only large cations: 05 Minium
04.C Metal:Oxygen = 2:3, 3:5, and Similar
04.CB With medium-sized cations: 05 Tistarite, 05 Auroantimonate*, 05 Brizziite-VII, 05 Brizziite-III, 05 Corundum, 05 Eskolaite, 05 Hematite, 05 Karelianite, 05 Geikielite, 05 Ecandrewsite, 05 Ilmenite, 05 Pyrophanite, 05 Melanostibite, 05 Romanite*; 10 Bixbyite, 10 Avicennite; 15 Armalcolite, 15 Mongshanite*, 15 Pseudobrookite; 20 Magnesiohogbomite-6N6S, 20 Magnesiohogbomite-2N3S, 20 Magnesiohogbomite-2N2S, 20 Zincohogbomite-2N2S, 20 Ferrohogbomite-2N2S; 25 Pseudorutile, 25 Ilmenorutile; 30 Oxyvanite, 30 Berdesinskiite; 35 Olkhonskite, 35 Schreyerite; 40 Kamiokite, 40 Nolanite, 40 Rinmanite; 45 Stibioclaudetite, 45 Claudetite; 50 Arsenolite, 50 Senarmontite; 55 Valentinite, 60 Bismite, 65 Sphaerobismoite, 70 Sillenite, 75 Kyzylkumite
04.CC With large and medium-sized cations: 05 Chrombismite, 10 Freudenbergite, 15 Grossite, 20 Mayenite, 25 Yafsoanite; 30 Barioperovskite, 30 Lakargiite, 30 Natroniobite, 30 Latrappite, 30 Lueshite, 30 Perovskite; 35 Macedonite, 35 Isolueshite, 35 Loparite-(Ce), 35 Tausonite; 40 Crichtonite, 40 Dessauite, 40 Davidite-(Ce), 40 Davidite-(La), 40 Mathiasite, 40 Lindsleyite, 40 Landauite, 40 Loveringite, 40 Loveringite, 40 Cleusonite, 40 Gramaccioliite-(Y); 45 Hawthorneite, 45 Magnetoplumbite, 45 Haggertyite, 45 Batiferrite, 45 Hibonite, 45 Nezilovite, 45 Yimengite, 45 Diaoyudaoite, 45 Lindqvistite, 45 Plumboferrite; 50 Jeppeite, 55 Zenzenite, 60 Mengxianminite*
04.D Metal:Oxygen = 1:2 and similar
04.DA With small cations
(moved to -09- Subclass: tektosilicates)
04.DB With medium-sized cations; chains of edge-sharing octahedra: 05 Tripuhyite, 05 Tugarinovite, 05 Varlamoffite*, 05 Argutite, 05 Cassiterite, 05 Rutile, 05 Pyrolusite, 05 Plattnerite, 05 Squawcreekite?; 10 Bystromite, 10 Ordonezite, 10 Tapiolite-(Fe), 10 Tapiolite-(Mn), 10 Tapiolite*, 15a Paramontroseite, 15a Ramsdellite, 15b Akhtenskite, 15c Nsutite; 20 Scrutinyite; 25 Ixiolite, 25 Ishikawaite, 25 Srilankite, 25 Samarskite-(Y), 25 Samarskite-(Yb), 25 Yttrocolumbite-(Y); 30 Heftetjernite, 30 Wolframoixiolite*, 30 Krasnoselskite*, 30 Ferberite, 30 Hubnerite, 30 Sanmartinite, 30 Wolframite*; 35 Tantalite-(Mg), 35 Tantalite-(Fe), 35 Tantalite-(Mn), 35 Columbite-(Mg), 35 Columbite-(Fe), 35 Columbite-(Mn), 35 Qitianlingite; 40 Ferrowodginite, 40 Lithiotantite, 40 Lithiowodginite, 40 Tantalowodginite*, 40 Titanowodginite, 40 Wodginite, 40 Ferrotitanowodginite; 45 Tivanite, 50 Carmichaelite, 55 Alumotantite, 60 Biehlite
04.DC With medium-sized cations; sheets of edge-sharing octahedra: 05 Bahianite, 10 Simpsonite
04.DD With medium-sized cations; frameworks of edge-sharing octahedra: 05 Anatase, 10 Brookite
04.DE With medium-sized cations; with various polyhedra: 05 Downeyite, 10 Koragoite; 15 Koechlinite, 15 Russellite, 15 Tungstibite; 20 Tellurite, 25 Paratellurite; 30 Cervantite, 30 Bismutotantalite, 30 Bismutocolumbite, 30 Clinocervantite, 30 Stibiocolumbite, 30 Stibiotantalite; 35 IMA2007-058, 35 Baddeleyite
04.DF With large (± medium-sized) cations; dimers and trimers of edge-sharing octahedra: 05 Nioboaeschynite-(Y), 05 Aeschynite-(Ce), 05 Aeschynite-(Nd), 05 Aeschynite-(Y), 05 Nioboaeschynite-(Ce), 05 Nioboaeschynite-(Nd), 05 Tantalaeschynite-(Y), 05 Rynersonite, 05 Vigezzite, 10 Changbaiite, 15 Murataite
04.DG With large (± medium-sized) cations; chains of edge-sharing octahedra: 05 Euxenite-(Y), 05 Loranskite-(Y), 05 Polycrase-(Y), 05 Uranopolycrase, 05 Fersmite, 05 Kobeite-(Y), 05 Tanteuxenite-(Y), 05 Yttrocrasite-(Y); 10 Fergusonite-beta-(Nd), 10 Fergusonite-beta-(Y), 10 Fergusonite-beta-(Ce), 10 Yttrotantalite-(Y); 15 Foordite, 15 Thoreaulite; 20 Raspite
04.DH With large (± medium-sized) cations; sheets of edge-sharing octahedra:
IMA/CNMNC revised the Pyrochlore supergroup 2010 (04.DH.15 and 04.DH.20)
05 Brannerite, 05 Orthobrannerite, 05 Thorutite; 10 Kassite, 10 Lucasite-(Ce)
Pyrochlore group: Fluorcalciopyrochlore, Fluorkenopyrochlore, Fluornatropyrochlore, Fluorstrontiopyrochlore, Hydropyrochlore, Hydroxycalciopyrochlore, Kenoplumbopyrochlore, Oxycalciopyrochlore, Oxynatropyrochlore, Oxyplumbopyrochlore, Oxyyttropyrochlore-(Y)
Microlite group: Fluorcalciomicrolite, Fluornatromicrolite, Hydrokenomicrolite, Hydromicrolite, Hydroxykenomicrolite, Kenoplumbomicrolite, Oxycalciomicrolite, Oxystannomicrolite, Oxystibiomicrolite
Romeite group: Cuproromeite, Fluorcalcioromeite, Fluornatroromeite, Hydroxycalcioromeite, Oxycalcioromeite, Oxyplumboromeite, Stibiconite
Betafite group: Calciobetafite, Oxyuranobetafite
Elsmoreite group: Hydrokenoelsmoreite
25 Rosiaite; 30 Zirconolite-3O, 30 Zirconolite-3T, 30 Zirconolite-2M, 30 Zirconolite; 35 Liandratite, 35 Petscheckite; 40 Ingersonite, 45 Pittongite
Discredited minerals 04.DH.15: Bariomicrolite (of Hogarth 1977), Bariopyrochlore (of Hogarth 1977), Betafite (of Hogarth 1977), Bismutomicrolite (of Hogarth 1977), Ceriopyrochlore (of Hogarth 1977), Jixianite, Natrobistantite, Plumbomicrolite (of Hogarth 1977), Plumbobetafite (of Hogarth 1977), Stannomicrolite (of Hogarth 1977), Stibiobetafite (of Černý et al.), Yttrobetafite (of Hogarth 1977), Yttropyrochlore (of Hogarth 1977), Bismutopyrochlore (of Chukanov et al.) and Bismutostibiconite 04.DH.20
04.DJ With large (± medium-sized) cations; polyhedral frameworks: 05 Calciotantite, 05 Irtyshite, 05 Natrotantite
04.DK With large (± medium-sized) cations; tunnel structures: 05 Ankangite, 05 Coronadite, 05 Hollandite, 05 Manjiroite, 05 Mannardite, 05 Redledgeite, 05 Priderite, 05 Henrymeyerite, 05 Akaganeite, 10 Cryptomelane, 10 Romanechite, 10 Strontiomelane, 10 Todorokite
04.DL With large (± medium-sized) cations; fluorite-type structures: 05 Cerianite-(Ce), 05 Zirkelite, 05 Thorianite, 05 Uraninite; 10 Calzirtite, 10 Hiarneite, 10 Tazheranite
04.DM With large (± medium-sized) cations; unclassified: 05 Sosedkoite, 05 Rankamaite; 15 Cesplumtantite, 20 Eyselite, 25 Kuranakhite
04.E Metal:Oxygen = < 1:2
04.E: IMA2008-040
04.EA Oxides with metal : oxygen < 1:2 (M2O5, MO3): 05 Tantite, 10 Krasnogorite*, 10 Molybdite
04.X Unclassified Strunz Oxides
04.XX Unknown: 00 Allendeite, 00 Ashanite?, 00 Hongquiite*, 00 Psilomelane?, 00 Uhligite?, 00 Clinobirnessite*, 00 Kleberite*, 00 Chubutite*, 00 Struverite?, 00 IMA2000-016, 00 IMA2000-026
Class: hydroxides
04.F Hydroxides (without V or U)
04.FA Hydroxides with OH, without H2O; corner-sharing tetrahedra: 05a Behoite, 05b Clinobehoite; 10 Sweetite, 10 Wulfingite, 10 Ashoverite
04.FB Hydroxides with OH, without H2O; insular octahedra: 05 Shakhovite; 10 Cualstibite, 10 Zincalstibite
04.FC Hydroxides with OH, without H2O; corner-sharing octahedra: 05 Dzhalindite, 05 Sohngeite, 05 Bernalite; 10 Burtite, 10 Mushistonite, 10 Natanite, 10 Vismirnovite, 10 Schoenfliesite, 10 Wickmanite; 15 Jeanbandyite, 15 Mopungite, 15 Stottite; 15 Tetrawickmanite; 20 Ferronigerite-6N6S, 20 Ferronigerite-2N1S, 20 Magnesionigerite-6N6S, 20 Magnesionigerite-2N1S; 25 Magnesiotaaffeite-6N3S, 25 Magnesiotaaffeite-2N2S, 25 Ferrotaaffeite-6N3S
04.FD Hydroxides with OH, without H2O; chains of edge-sharing octahedra: 05 Spertiniite; 10 Bracewellite, 10 Diaspore, 10 Guyanaite, 10 Groutite, 10 Goethite, 10 Montroseite, 10 Tsumgallite; 15 Manganite; 20 Cerotungstite-(Ce), 20 Yttrotungstite-(Y), 20 Yttrotungstite-(Ce); 25 Frankhawthorneite; 30 Khinite, 30 Parakhinite
04.FE Hydroxides with OH, without H2O; sheets of edge-sharing octahedra: 05 Amakinite, 05 Brucite, 05 Portlandite, 05 Pyrochroite, 05 Theophrastite, 05 Fougerite; 10 Bayerite, 10 Doyleite, 10 Gibbsite, 10 Nordstrandite; 15 Boehmite, 15 Lepidocrocite; 20 Grimaldiite, 20 Heterogenite-2H, 20 Heterogenite-3R; 25 Feitknechtite, 25 Lithiophorite; 30 Quenselite, 35 Ferrihydrite; 40 Feroxyhyte, 40 Vernadite; 45 Quetzalcoatlite
04.FF Hydroxides with OH, without H2O; various polyhedra: 05 Hydroromarchite
04.FG Hydroxides with OH, without H2O; unclassified: 05 Janggunite, 10 Cesarolite, 15 Kimrobinsonite
04.FH Hydroxides with H2O ± (OH); insular octahedra: 05 Bottinoite, 05 Brandholzite
04.FJ Hydroxides with H2O ± (OH); corner-sharing octahedra: 05 Sidwillite, 05 Meymacite; 10 Tungstite; 15 Ilsemannite, 15 Hydrotungstite; 20 Parabariomicrolite
04.FK Hydroxides with H2O ± (OH); chains of edge-sharing octahedra: 05 Bamfordite
04.FL Hydroxides with H2O ± (OH); sheets of edge-sharing octahedra: 05 Meixnerite, 05 Jamborite, 05 Iowaite, 05 Woodallite, 05 Akdalaite, 05 Muskoxite; 10 Hydrocalumite, 15 Kuzelite; 20 Aurorite, 20 Chalcophanite, 20 Ernienickelite, 20 Jianshuiite; 25 Woodruffite, 30 Asbolane; 40 Takanelite, 40 Rancieite; 45 Birnessite, 55 Cianciulliite, 60 Jensenite, 65 Leisingite, 75 Cafetite, 80 Mourite, 85 Deloryite
04.FM Hydroxides with H2O ± (OH); Unclassified: 15 Franconite, 15 Hochelagaite, 15 Ternovite; 25 Belyankinite, 25 Gerasimovskite, 25 Manganbelyankinite; 30 Silhydrite, 35 Cuzticite, 40 Cyanophyllite
04.FN: 05 Menezesite
04.G Uranyl Hydroxides
04.GA Without additional cations: IMA2008-022; 05 Metaschoepite, 05 Paraschoepite, 05 Schoepite; 10 Ianthinite; 15 Metastudtite, 15 Studtite
04.GB With additional cations (K, Ca, Ba, Pb, etc.); with mainly UO2(O,OH)5 pentagonal polyhedra: 05 Compreignacite, 05 Agrinierite, 05 Rameauite; 10 Billietite, 10 Becquerelite, 10 Protasite; 15 Richetite; 20 Calciouranoite, 20 Bauranoite, 20 Metacalciouranoite; 25 Fourmarierite, 30 Wolsendorfite, 35 Masuyite; 40 Metavandendriesscheite, 40 Vandendriesscheite; 45 Vandenbrandeite, 50 Sayrite, 55 Curite, 60 Iriginite, 65 Uranosphaerite, 70 Holfertite
04.GC With additional cations; with UO2(O,OH)6 hexagonal polyhedra: 05 Clarkeite, 10 Umohoite, 15 Spriggite
04.H V[5,6] Vanadates
(moved to -08- Class: vanadates)
04.I Ice group
04.X Unclassified Strunz Oxides (Hydroxides)
04.XX Unknown: 00 Ungursaite*, 00 Scheteligite?
| Physical sciences | Minerals | Earth science |
2577153 | https://en.wikipedia.org/wiki/Juniperus%20californica | Juniperus californica | Juniperus californica, the California juniper, is a species of juniper native to southwestern North America.
Description
Juniperus californica is a shrub or small tree reaching , but rarely up to tall. The bark is ashy gray, typically thin, and appears to be "shredded". The shoots are fairly thick compared to most junipers, between in diameter.
The foliage is bluish-gray and scale-like. The juvenile leaves (on the seedlings) are needle-like and long. Arranged in opposite decussate pairs or whorls of three, the adult leaves are scale-like, long on lead shoots and broad.
The cones are berrylike, in diameter, blue-brown with a whitish waxy bloom, turning reddish-brown, and contain a single seed (rarely two or three). The seeds are mature in about 8 or 9 months. The male cones are long and shed their pollen in early spring. This juniper is largely dioecious, producing cones of only one sex, but around 2% of plants are monoecious, with both sexes on the same plant.
The California juniper is closely related to the Utah juniper (J. osteosperma) from further east, which shares the stout shoots and relatively large cones, but differs in that Utah juniper is largely monoecious. Its cones take longer to mature (two growing seasons), and it is also markedly more cold-tolerant.
Distribution and habitat
As the name implies, it is found mainly in numerous California habitats, although its range also extends through most of Baja California, a short distance into the Great Basin in southern Nevada, and into northwestern Arizona. In California it is found in the Peninsular Ranges, Transverse Ranges, California Coast Ranges, Sacramento Valley foothills, Sierra Nevada, and at higher-elevation sky islands in the Mojave Desert ranges. It is also found off the North American continental shelf, on Guadalupe Island in the Pacific Ocean, where there are fewer than 10 individuals.
It grows at moderate altitudes of . Habitats include: pinyon–juniper woodland with single-leaf pinyon (Pinus monophylla); Joshua tree woodland; and foothill woodlands, in the montane chaparral and woodlands and interior chaparral and woodlands sub-ecoregions.
Conservation
The species is listed by the International Union for Conservation of Nature as least concern, and not considered globally threatened. However, one of the southernmost populations, formerly on Guadalupe Island off the Baja California Peninsula coast, was almost destroyed by feral goats in the late 19th century, with only a few plants remaining.
Ecology
J. californica provides food and shelter for a variety of native species, such as turkeys, deer, and many others. However, as the species matures, it becomes too tall to provide adequate food and shelter for deer and other ground animals of similar size. The tree is a larval host for the native sequoia sphinx moth (Sphinx sequoiae).
Uses
The plant was used as a traditional Native American medicinal plant, and as a food source, by the indigenous peoples of California, including the Cahuilla people, Kumeyaay people (Diegueno), Serrano, and Ohlone people. They gathered the berries to eat fresh and to grind into meal for baking. The wood was also used for sinew-backed bows.
J. californica is cultivated as an ornamental plant, as a dense shrub (and eventual tree) for use in habitat gardens, heat and drought-tolerant gardens, and in natural landscaping design. It is very tolerant of alkali soil, and can provide erosion control on dry slopes. It is also a popular species for bonsai.
| Biology and health sciences | Cupressaceae | Plants |
772405 | https://en.wikipedia.org/wiki/Harvest | Harvest | Harvesting is the process of collecting plants, animals, or fish (as well as fungi) as food, especially the process of gathering mature crops, and "the harvest" also refers to the collected crops. Reaping is the cutting of grain or pulses for harvest, typically using a scythe, sickle, or reaper. On smaller farms with minimal mechanization, harvesting is the most labor-intensive activity of the growing season. On large mechanized farms, harvesting uses farm machinery, such as the combine harvester. Automation has increased the efficiency of both the seeding and harvesting processes. Specialized harvesting equipment, using conveyor belts for gentle gripping and mass transport, replaces the manual task of removing each seedling by hand. The term "harvesting" in general usage may include immediate postharvest handling, including cleaning, sorting, packing, and cooling.
The completion of harvesting marks the end of the growing season, or the growing cycle for a particular crop, and the social importance of this event makes it the focus of seasonal celebrations such as harvest festivals, found in many cultures and religions.
Etymology
"Harvest", a noun, came from the Old English word (coined before the Angles moved from Angeln to Britain) meaning "autumn" (the season), "harvest-time", or "August". (It continues to mean "autumn" in British dialect, and "season of gathering crops" generally.) "The harvest" came to also mean the activity of reaping, gathering, and storing grain and other grown products during the autumn season, and also the grain and other grown products themselves. "Harvest" was also verbified: "To harvest" means to reap, gather, and store the harvest (or the crop). People who harvest and equipment that harvests are harvesters; while they do it, they are harvesting.
Crop failure
Crop failure (also known as harvest failure) is an absent or greatly diminished crop yield relative to expectation, caused by the plants being damaged, killed, or destroyed, or affected in some way that they fail to form edible fruit, seeds, or leaves in their expected abundance.
Crop failures can be caused by catastrophic events such as plant disease outbreaks (such as the Great Famine in Ireland), volcanic eruptions (such as the Year Without a Summer), heavy rainfall, storms, floods, or drought, or by slow, cumulative effects of soil degradation, too-high soil salinity, erosion, desertification, usually as results of drainage, overdrafting (for irrigation), overfertilization, or overexploitation.
In history, crop failures and subsequent famines have triggered human migration, rural exodus, etc.
The proliferation of industrial monocultures, with their reduction in crop diversity and dependence on heavy use of artificial fertilizers and pesticides, has led to overexploited soils that are nearly incapable of regeneration. Over years, unsustainable farming of land degrades soil fertility and diminishes crop yield. With a steadily-increasing world population and local overpopulation, even slightly diminishing yields are already the equivalent to a partial harvest failure. Fertilizers obviate the need for soil regeneration in the first place, and international trade prevents local crop failures from developing into famines.
Other uses
Harvesting commonly refers to grain and produce, but also has other uses: fishing and logging are also referred to as harvesting. The term harvest is also used in reference to harvesting grapes for wine. Wild harvesting refers to the collection of plants and other edible supplies which have not been cultivated. Within the context of irrigation, water harvesting refers to the collection and run-off of rainwater for agricultural or domestic uses. Instead of harvest, the term exploit is also used, as in exploiting fisheries or water resources. Energy harvesting is the process of capturing and storing energy (such as solar power, thermal energy, wind energy, salinity gradients, and kinetic energy) that would otherwise go unexploited. Body harvesting, or cadaver harvesting, is the process of collecting and preparing cadavers for anatomical study. In a similar sense, organ harvesting is the removal of tissues or organs from a donor for purposes of transplanting.
In a non-agricultural sense, the word "harvesting" refers to an economic principle known as an exit event or liquidity event. For example, when a person or business cashes out of an ownership position in a company or eliminates their investment in a product, this is known as a harvest strategy.
Canada
Harvesting or Domestic Harvesting in Canada refers to hunting, fishing, and plant gathering by First Nations, Métis, and Inuit in discussions of aboriginal or treaty rights. For example, in the Gwich'in Comprehensive Land Claim Agreement, "Harvesting means gathering, hunting, trapping or fishing...". Similarly, in the Tlicho Land Claim and Self Government Agreement, "'Harvesting' means, in relation to wildlife, hunting, trapping or fishing and, in relation to plants or trees, gathering or cutting."
Gallery
| Technology | Horticultural techniques | null |
773153 | https://en.wikipedia.org/wiki/Illuminance | Illuminance | In photometry, illuminance is the total luminous flux incident on a surface, per unit area. It is a measure of how much the incident light illuminates the surface, wavelength-weighted by the luminosity function to correlate with human brightness perception. Similarly, luminous emittance is the luminous flux per unit area emitted from a surface. Luminous emittance is also known as luminous exitance.
In SI units illuminance is measured in lux (lx), or equivalently in lumens per square metre (lm·m⁻²). Luminous exitance is measured in lm·m⁻² only, not lux. In the CGS system, the unit of illuminance is the phot, which is equal to . The foot-candle is a non-metric unit of illuminance that is used in photography.
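A small sketch of the unit relationships, assuming the usual definitions of 1 phot = 1 lm/cm² (10,000 lux) and 1 foot-candle = 1 lm/ft² (about 10.764 lux); these constants should be checked against an authoritative source.

```python
LUX_PER_PHOT = 10_000.0        # assumes 1 phot = 1 lm/cm^2
LUX_PER_FOOT_CANDLE = 10.7639  # assumes 1 foot-candle = 1 lm/ft^2

def phot_to_lux(phot):
    """Convert an illuminance in phots to lux."""
    return phot * LUX_PER_PHOT

def foot_candles_to_lux(foot_candles):
    """Convert an illuminance in foot-candles to lux."""
    return foot_candles * LUX_PER_FOOT_CANDLE

# Hypothetical example: 50 foot-candles is roughly 538 lux.
print(round(foot_candles_to_lux(50)))
```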
Illuminance was formerly often called brightness, but this leads to confusion with other uses of the word, such as to mean luminance. "Brightness" should never be used for quantitative description, but only for nonquantitative references to physiological sensations and perceptions of light.
The human eye is capable of seeing somewhat more than a 2 trillion-fold range. The presence of white objects is somewhat discernible under starlight, at (50 μlx), while at the bright end, it is possible to read large text at 10⁸ lux (100 Mlx), or about 1000 times that of direct sunlight, although this can be very uncomfortable and cause long-lasting afterimages.
Common illuminance levels
Astronomy
In astronomy, the illuminance stars cast on the Earth's atmosphere is used as a measure of their brightness. The usual units are apparent magnitudes in the visible band. V-magnitudes can be converted to lux using the formula
where E_v is the illuminance in lux, and m_v is the apparent magnitude. The reverse conversion is
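A hedged sketch of the conversion, assuming the commonly quoted zero-point of −14.18 magnitudes for the V band; the constant and function names are this sketch's own and should be checked against the formula the article intends.

```python
import math

ZERO_POINT_MAG = -14.18  # assumed V-band zero-point, in magnitudes

def magnitude_to_lux(m_v):
    """Illuminance in lux produced by a star of apparent magnitude m_v."""
    return 10 ** ((ZERO_POINT_MAG - m_v) / 2.5)

def lux_to_magnitude(e_v):
    """Apparent magnitude corresponding to an illuminance of e_v lux."""
    return ZERO_POINT_MAG - 2.5 * math.log10(e_v)

# A magnitude 0 star gives roughly 2.1 microlux under this assumption.
print(magnitude_to_lux(0.0))
```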
Relation to luminance
The luminance of a reflecting surface is related to the illuminance it receives:
where the integral covers all the directions of emission, and
M_v is the surface's luminous exitance
E_v is the received illuminance, and
R is the reflectance.
In the case of a perfectly diffuse reflector (also called a Lambertian reflector), the luminance is isotropic, per Lambert's cosine law. Then the relationship is simply
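The two relations referred to above can be written out as follows; this is a hedged reconstruction using the conventional symbols (M_v for luminous exitance, E_v for illuminance, R for reflectance, L_v for luminance), and the factor of π in the Lambertian case comes from integrating Lambert's cosine law over the hemisphere.

```latex
% General relation between luminance, exitance, illuminance and reflectance
\int_{\Omega} L_{\mathrm{v}} \cos\theta \,\mathrm{d}\Omega = M_{\mathrm{v}} = E_{\mathrm{v}} R
% Perfectly diffuse (Lambertian) reflector
L_{\mathrm{v}} = \frac{E_{\mathrm{v}} R}{\pi}
```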
| Physical sciences | Optics | Physics |
773271 | https://en.wikipedia.org/wiki/AC%20power%20plugs%20and%20sockets | AC power plugs and sockets | AC power plugs and sockets connect devices to mains electricity to supply them with electrical power. A plug is the connector attached to an electrically-operated device, often via a cable. A socket (also known as a receptacle or outlet) is fixed in place, often on the internal walls of buildings, and is connected to an AC electrical circuit. Inserting ("plugging in") the plug into the socket allows the device to draw power from this circuit.
Plugs and wall-mounted sockets for portable appliances became available in the 1880s, to replace connections to light sockets. A proliferation of types were subsequently developed for both convenience and protection from electrical injury. Electrical plugs and sockets differ from one another in voltage and current rating, shape, size, and connector type. Different standard systems of plugs and sockets are used around the world, and many obsolete socket types are still found in older buildings.
Coordination of technical standards has allowed some types of plug to be used across large regions to facilitate the production and import of electrical appliances and for the convenience of travellers. Some multi-standard sockets allow use of several types of plug. Incompatible sockets and plugs may be used with the help of adaptors, though these may not always provide full safety and performance.
Overview of connections
Single-phase sockets have two current-carrying connections to the power supply circuit, and may also have a third pin for a safety connection to earth ground. The plug is a male connector, usually with protruding pins that match the openings and female contacts in a socket. Some plugs also have a female contact, used only for the earth ground connection. Typically no energy is supplied to any exposed pins or terminals on the socket. In addition to the recessed contacts of the energised socket, plug and socket systems often have other safety features to reduce the risk of electric shock or damage to appliances.
History
When commercial electric power was first introduced in the 1880s, it was used primarily for lighting. Other portable appliances (such as vacuum cleaners, electric fans, smoothing irons, and curling-tong heaters) were connected to light-bulb sockets.
As early as 1885 a two-pin plug and wall socket format was available on the British market. By about 1910 the first three-pin earthed (grounded) plugs appeared. Over time other safety improvements were gradually introduced to the market. The earliest national standard for plug and wall socket forms was set in 1915.
Safety features
Protection from accidental contact
Designs of plugs and sockets have gradually developed to reduce the risk of electric shock and fire. Plugs are shaped to prevent bodily contact with live parts. Sockets may be recessed and plugs designed to fit closely within the recess to reduce risk of a user contacting the live pins. Contact pins may be sheathed with insulation over part of their length, so as to reduce exposure of energized metal during insertion or removal of the plug. Sockets may have automatic shutters to stop foreign objects from being inserted into energized contacts. Sockets are often set into a surround which prevents accidental contact with the live wires in the wall behind it. Some also have an integrated cover (e.g. a hinged flap) covering the socket itself when not in use, or a switch to turn off the socket.
Overcurrent protection
Some plugs have a built-in fuse which breaks the circuit if too much current is passed.
Earthing (grounding)
A third contact for a connection to earth is intended to protect against insulation failure of the connected device. Some early unearthed plug and socket types were revised to include an earthing pin or phased out in favour of earthed types. The plug is often designed so that the earth ground contact connects before the energized circuit contacts.
The assigned IEC appliance class is governed by the requirement for earthing or equivalent protection. Class I equipment requires an earth contact in the plug and socket, while Class II equipment is unearthed and protects the user with double insulation.
Polarisation
Where a "neutral" conductor exists in supply wiring, polarisation of the plug can improve safety by preserving the distinction in the equipment. For example, appliances may ensure that switches interrupt the line side of the circuit, or can connect the shell of a screw-base lampholder to neutral to reduce electric shock hazard. In some designs, polarised plugs cannot be mated with non-polarised sockets. In NEMA 1 plugs, for example, the neutral blade is slightly wider than the hot blade, so it can only be inserted one way. Wiring systems where both circuit conductors have a significant potential with respect to earth do not benefit from polarised plugs.
Voltage rating of plugs and power cords
Plugs and power cords have a rated voltage and current assigned to them by the manufacturer. Using a plug or power cord that is inappropriate for the load may be a safety hazard. For example, high-current equipment can cause a fire when plugged into an extension cord with a current rating lower than necessary. Sometimes the cords used to plug in dual voltage 120 V / 240 V equipment are rated only for 125 V, so care must be taken by travellers to use only cords with an appropriate voltage rating.
Extension
Various methods can be used to increase the number or reach of sockets.
Extension cords
Extension cords (extension leads) are used for temporary connections when a socket is not within convenient reach of an appliance's power lead. This may be in the form of a single socket on a flexible cable or a power strip with multiple sockets. A power strip may also have switches, surge voltage protection, or overcurrent protection.
Multisocket adaptors
Multisocket adaptors (or "splitters") allow the connection of two or more plugs to a single socket. They are manufactured in various configurations, depending on the country and the region in which they are used, with various ratings. This allows connecting more than one electrical consumer item to one single socket and is mainly used for low power devices (TV sets, table lamps, computers, etc.). They are usually rated at 6 A 250 V, 10 A 250 V, or 16 A 250 V. This is the general rating of the adaptor, and indicates the maximum total load in amps, regardless of the number of sockets used (for example, if a 16 A 250 V adaptor has four sockets, it would be fine to plug four different devices into it that each consume 2 A as this represents a total load of only 8 A, whereas if only two devices were plugged into it that each consumed 10 A, the combined 20 A load would overload the circuit). In some countries these adaptors are banned and are not available in shops, as they may lead to fires due to overloading them or can cause excessive mechanical stress to wall-mounted sockets. Adaptors can be made with ceramic, Bakelite, or other plastic bodies.
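The rating arithmetic described above can be illustrated with a short check; the adaptor rating and device currents are the made-up figures from the example.

```python
def adaptor_overloaded(adaptor_rating_a, device_currents_a):
    """True if the summed device currents exceed the adaptor's overall rating in amps."""
    return sum(device_currents_a) > adaptor_rating_a

# Four 2 A devices on a 16 A adaptor: 8 A total, within the rating.
print(adaptor_overloaded(16, [2, 2, 2, 2]))  # False
# Two 10 A devices on the same adaptor: 20 A total, an overload.
print(adaptor_overloaded(16, [10, 10]))      # True
```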
Cross-compatibility
Universal sockets
"Universal" or "multi-standard" sockets are intended to accommodate plugs of various types. In some jurisdictions, they violate safety standards for sockets.
Safety advocates, the United States Army, and a manufacturer of sockets point out a number of safety issues with universal socket and adaptors, including voltage mismatch, exposure of live pins, lack of proper earth ground connection, or lack of protection from overload or short circuit. Universal sockets may not meet technical standards for durability, plug retention force, temperature rise of components, or other performance requirements, as they are outside the scope of national and international technical standards.
A technical standard may include compatibility of a socket with more than one form of plug. The Thai dual socket is specified in figure 4 of TIS 166-2549 and is designed to accept Thai plugs, and also Type A, B, and C plugs. Chinese dual sockets have both an unearthed socket complying with figure 5 of GB 1002-2008 (both flat pin and 4.8 mm round pin), and an earthed socket complying with figure 4 of GB 1002-2008. Both Thai and Chinese dual sockets also physically accept plugs normally fitted to 120 V appliances (e.g. 120 V rated NEMA 1-15 ungrounded plugs). This can cause an electrical incompatibility, since both countries normally supply residential power only at 220 V.
Swappable cables and plugs
Commonly, manufacturers provide an IEC 60320 inlet on an appliance, with a detachable power cord (mains flex lead) and appropriate plug in order to avoid manufacturing whole appliances, with the only difference being the type of plug. Alternatively, the plug itself can often be swappable using standard or proprietary connectors.
Travel adaptors
Adaptors between standards are not included in most standards, and as a result they have no formal quality criteria defined. Physical compatibility does not ensure that the appliance and socket match in frequency or voltage. Adaptors allow travellers to connect devices to foreign sockets, but do not change voltage or frequency. A voltage converter is required for electrical compatibility in places with a different voltage than the device is designed for. Mismatch in frequency between supply and appliances may still cause problems even at the correct voltage. Some appliances have a switch for the selection of voltage.
Standard types in present use
The plugs and sockets used in a given area are regulated by local governments.
The International Electrotechnical Commission (IEC) maintains a guide with letter designations for generally compatible types of plugs, which expands on earlier guides published by the United States Department of Commerce. This is a de facto naming standard and guide to travellers. Some letter types correspond to several current ratings or different technical standards, so the letter does not uniquely identify a plug and socket within the type family, nor guarantee compatibility. Physical compatibility of the plug and socket does not ensure correct voltage, frequency, or current capacity. Not all plug and socket families have letters in the IEC guide, but those that have are noted in this article, as are some additional letters commonly used by retail vendors.
In Europe, CENELEC publishes a list of approved plug and socket technical standards used in the member countries.
Argentina IRAM 2073 and 2071 (Type I)
The plug and socket system used in Class 1 applications in Argentina is defined by IRAM standards. These two standards are: IRAM 2073 "Two pole plugs with earthing contact for domestic and similar purposes, rated 10 A and 20 A, 250 V AC" and IRAM 2071 "Two pole socket – outlets with earthing contact for 10 A and 20 A, 250 V AC., for fixed installations." The plug and socket system is similar in appearance to the Australian and Chinese plugs. It has an earthing pin and two flat current-carrying pins forming an inverted V-shape (120°). The flat pins for the 10 A version measure and for the 20 A version, and are set at 30° to the vertical at a nominal pitch of . The pin length is the same as in the Chinese version. The earthing pin length is for the 10 A version and for the 20 A version. On the plugs, the pole length is for the 10 A version and for the 20 A version.
The most important difference from the Australian plug is that the Argentine plug is wired with the live and neutral contacts reversed.
In Brazil, similar plugs and sockets are still commonly used in old installations for high-power appliances like air conditioners, dishwashers, and household ovens. Although often called the "Argentine plug", it is actually based on the American NEMA 10-20 standard and is incompatible with Argentine IRAM plugs. Since Brazil adopted the NBR 14136 standard, which includes a 20 A version, the original motivation to use the NEMA 10-20 plug has ceased to exist.
Australian/New Zealand standard AS/NZS 3112 (Type I), used in Australasia
This Australian/New Zealand standard is used in Australia, New Zealand, Fiji, Tonga, Solomon Islands, and Papua New Guinea. It defines a plug with an earthing pin, and two flat current-carrying pins which form an inverted V-shape. The flat pins measure and are set at 30° to the vertical at a nominal pitch of . Australian and New Zealand wall sockets (locally often referred to as power points) almost always have switches on them for extra safety, as in the UK. An unearthed version of this plug with two angled power pins but no earthing pin is used with double-insulated appliances, but the sockets always include an earth contact.
There are several AS/NZS 3112 plug variants, including ones with larger or differently shaped pins used for devices drawing 15, 20, 25 and 32 A. These sockets accept plugs of equal or lower current rating, but not higher. For example, a 10 A plug will fit all sockets but a 20 A plug will fit only 20, 25 and 32 A sockets. In New Zealand, PDL 940 "tap-on" or "piggy-back" plugs are available which allow a second 10 A plug to be fitted to the rear of the plug. In Australia these piggy-back plugs are now available only on pre-made extension leads.
Australia's standard plug/socket system was originally codified as standard C112 (floated provisionally in 1937, and adopted as a formal standard in 1938), which was based on a design patented by Harvey Hubbell and was superseded by AS 3112 in 1990. The requirement for insulated pins was introduced in the 2004 revision. The current version is AS/NZS 3112:2011, Approval and test specification – Plugs and socket-outlets.
Brazilian standard NBR 14136 (Type N)
Brazil, which had been using mostly Europlugs, and NEMA 1-15 and NEMA 5-15 standards, adopted a (non-compliant) variant of IEC 60906-1 as the national standard in 1998 under specification NBR 14136 (revised in 2002). These are used for both 220-volt and 127-volt regions of the country, despite the IEC 60906-2 recommendation that NEMA 5-15 be used for 120 V connections. There are two types of sockets and plugs in NBR 14136: one for 10 A, with a 4.0 mm pin diameter, and another for 20 A, with a 4.8 mm pin diameter. This differs from IEC 60906-1 which specifies a pin diameter of 4.5 mm and a rating of 16 A. NBR 14136 does not require shutters on the apertures, a further aspect of non-compliance with IEC 60906-1. NBR 14136 was not enforced in that country until 2007, when its adoption was made optional for manufacturers. It became compulsory on 1 January 2010.
Few private houses in Brazil have an earthed supply, so even if a three-pin socket is present it is not safe to assume that all three terminals are actually connected. Most large domestic appliances were sold with the option to fit a flying earth tail to be locally earthed, but many consumers were unsure how to use this and so did not connect it. The new standard has an earth pin, which in theory eliminates the need for the flying earth tail.
British and compatible standards
BS 546 and related types (Type D and M)
BS 546, "Two-pole and earthing-pin plugs, socket-outlets and socket-outlet adaptors for AC (50-60 Hz) circuits up to 250 V" describes four sizes of plug rated at 2 A, 5 A (Type D), 15 A (Type M) and 30 A. The plugs have three round pins arranged in a triangle, with the larger top pin being the earthing pin. The plugs are polarised and unfused. Plugs are non-interchangeable between current ratings. Introduced in 1934, the BS 546 type has mostly been displaced in the UK by the BS 1363 standard. According to the IEC, some 40 countries use Type D and 15 countries use Type M. Some, such as India and South Africa, use standards based on BS 546.
BS 1363 (Type G)
BS 1363 "13 A plugs, socket-outlets, adaptors and connection units" is the main plug and socket type used in the United Kingdom. According to the IEC it is also used in over 50 countries worldwide. Some of these countries have national standards based on BS 1363, including: Bahrain, Hong Kong, Ireland, Cyprus, Malaysia, Malta, Saudi Arabia, Singapore, Sri Lanka, and UAE.
This plug has three rectangular pins forming an isosceles triangle. The BS 1363 plug has a fuse rated to protect its flexible cord from overload and consequent fire risk. Modern appliances may only be sold with a fuse of the appropriate size pre-installed.
BS 4573 (UK shaver)
The United Kingdom, Ireland, and Malta use the BS 4573 two-pin plug and socket for electric shavers and toothbrushes. The plug has insulated sleeves on the pins. Although similar to the Europlug Type C, the diameter and spacing of the pins are slightly different and hence it will not fit into a Schuko socket. There are, however, two-pin sockets and adaptors which will accept both BS 4573 and Europlugs.
CEE 7 standard
The International Commission on the Rules for the Approval of Electrical Equipment (IECEE) was a standards body which published Specification for plugs and socket-outlets for domestic and similar purposes as CEE Publication 7 in 1951. It was last updated by Modification 4 in March 1983. CEE 7 consists of general specifications and standard sheets for specific connectors.
Standard plugs and sockets based on two round pins with centres spaced at 19 mm are in use in Europe, most of which are listed in IEC/TR 60083 "Plugs and socket-outlets for domestic and similar general use standardized in member countries of IEC." EU countries each have their own regulations and national standards; for example, some require child-resistant shutters, while others do not. CE marking is neither applicable nor permitted on plugs and sockets.
CEE 7/1 unearthed socket and CEE 7/2 unearthed plug
CEE 7/1 unearthed sockets accept CEE 7/2 round plugs with pins. Because they have no earth connections they have been or are being phased out in most countries. Some countries still permit their use in dry areas, while others allow their sale for replacements only. Older sockets are so shallow that it is possible to accidentally touch the live pins of a plug. CEE 7/1 sockets also accept CEE 7/4, CEE 7/6 and CEE 7/7 plugs without providing an earth connection. The earthed CEE 7/3 and CEE 7/5 sockets do not allow insertion of CEE 7/2 unearthed round plugs.
CEE 7/3 socket and CEE 7/4 plug (German "Schuko"; Type F)
The CEE 7/3 socket and CEE 7/4 plug are commonly called Schuko, an abbreviation for Schutzkontakt, Protective contact to earth ("Schuko" itself is a registered trademark of a German association established to own the term). The socket has a circular recess with two round holes and two earthing clips that engage before live pin contact is made. The pins are . The Schuko system is unpolarised, allowing live and neutral to be reversed. The socket accepts Europlugs and CEE 7/17 plugs and also includes CEE 7/7. It is rated at 16 A. The current German standards are DIN 49441 and DIN 49440. The standard is used in Germany and several other European countries and on other continents. Some countries require child-proof socket shutters; the DIN 49440 standard does not have this requirement.
The plug is used in most or many countries of Europe, Asia, and Africa, as well as in South Korea, Peru, Chile and Uruguay. The few European countries not using it at all are Belgium, the Czech Republic, Cyprus, Ireland, Liechtenstein, Switzerland, and the UK; the countries not using it predominantly are Denmark, the Faroe Islands, France, Italy, Monaco, San Marino, and Slovakia.
CEE 7/5 socket and CEE 7/6 plug (French; Type E)
French standard NF C 61-314 defines the CEE 7/5 socket and CEE 7/6 plug, (and also includes CEE 7/7, 7/16 and 7/17 plugs). The socket has a circular recess with two round holes. The round earth pin projecting from the socket connects before the energized contacts touch. The earth pin is centred between the apertures, offset by . The plug has two round pins measuring , spaced apart and with an aperture for the socket's projecting earth pin. This standard is also used in Belgium, Poland, the Czech Republic, Slovakia and some other countries.
Although the plug is polarised, CEE 7 does not define the placement of the live and neutral, and different countries have conflicting standards for that. For example, the French standard NF C 15-100 requires live to be on the right side, while Czech standard ČSN 33 2180 requires live to be on the left side of a socket. Thus, a French plug when plugged into a Czech socket (or a Czech plug when plugged into a French socket) will always have its polarity reversed, with no way for the user to remedy this situation apart from rewiring the plug. One approach for resolving this situation is taken in Poland, where CEE 7/5 sockets are typically installed in pairs, the upper (upside-down) one having the "French" polarity and the lower one having the "Czech" polarity, so that the user can choose what to plug where.
CEE 7/4 (Schuko) plugs are not compatible with the CEE 7/5 socket because of the round earthing pin permanently mounted in the socket; CEE 7/6 plugs are not compatible with Schuko sockets due to the presence of indentations on the side of the recess, as well as the earth clips. CEE 7/7 plugs have been designed to solve this incompatibility by being able to fit in either type of socket.
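The physical-fit statements in this section can be collected into a single lookup table. The following Python sketch is purely illustrative (the table and function names are my own, not part of any standard), and it only restates what is said above; earth continuity is ignored, so for example a Schuko plug "fits" a CEE 7/1 socket even though it receives no earth connection there.

```python
# A minimal sketch collecting the physical-compatibility statements above into one table.
# Names and structure are illustrative only; earth continuity is not considered.
CEE7_SOCKET_ACCEPTS = {
    "CEE 7/1 (unearthed)":      {"CEE 7/2", "CEE 7/4", "CEE 7/6", "CEE 7/7", "CEE 7/16", "CEE 7/17"},
    "CEE 7/3 (Schuko, Type F)": {"CEE 7/4", "CEE 7/7", "CEE 7/16", "CEE 7/17"},
    "CEE 7/5 (French, Type E)": {"CEE 7/6", "CEE 7/7", "CEE 7/16", "CEE 7/17"},
}

def fits(socket_type: str, plug_type: str) -> bool:
    """Return True if the plug type physically fits the socket type, per the text above."""
    return plug_type in CEE7_SOCKET_ACCEPTS.get(socket_type, set())

assert fits("CEE 7/5 (French, Type E)", "CEE 7/7")      # hybrid plug fits both earthed systems
assert not fits("CEE 7/3 (Schuko, Type F)", "CEE 7/6")  # French plug blocked by the Schuko recess
assert not fits("CEE 7/5 (French, Type E)", "CEE 7/2")  # round unearthed plug blocked by earth pin
```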
Sales and installations of CEE 7/5 sockets have been legally permitted in Denmark since 2008, but the sockets are hard to find in physical stores and are very rarely installed.
CEE 7/7 plug (compatible with E and F)
The CEE 7/7 plug fits both French and Schuko sockets. It is rated at 16 A and looks similar to the CEE 7/4 plug, but has earth contacts that fit both CEE 7/5 and CEE 7/3 sockets. It is polarised when used with a French-style CEE 7/5 socket, but can be inserted in two ways into a CEE 7/3 socket. However, even with the French socket it is not specified whether the live connection is on the left or the right, as this can vary between countries.
Earthed appliances are typically sold fitted with non-rewireable CEE 7/7 plugs attached, though rewireable versions are also available. This plug can be inserted into a Danish Type K socket, but the earth contact will not connect.
CEE 7/16 plugs
The CEE 7/16 unearthed plug is used for unearthed appliances. It has two round 4 by 19 mm (0.157 by 0.748 in) pins, rated at 2.5 A. There are two variants.
CEE 7/16 Alternative I
Alternative I is a round plug with cutouts to make it compatible with CEE 7/3 and CEE 7/5 sockets. (The similar-appearing CEE 7/17 has larger pins and a higher current rating.) This alternative is seldom used.
CEE 7/16 Alternative II "Europlug" (Type C)
Alternative II, popularly known as the Europlug, is a flat 2.5 A-rated plug defined by Cenelec standard EN 50075 and national equivalents. The Europlug is not rewirable and must be supplied with a flexible cord. It can be inserted in either direction, so line and neutral are connected arbitrarily. To improve contact with socket parts the Europlug has slightly flexible pins which converge toward their free ends.
There is no socket defined to accept only the Europlug. Instead, the Europlug fits a range of sockets in common use in Europe. These sockets include CEE 7/1, CEE 7/3 (German/"Schuko") and CEE 7/5 (French). Most Israeli, Swiss, Danish and Italian sockets were designed to accept pins of various diameters, mainly 4.8 mm, but also 4.0 mm and 4.5 mm, and are usually fed by final circuits with either 10 A or 16 A overcurrent protection devices.
Although the standard does not permit extension cables and does not define any socket-outlets, unauthorized extension cables and sockets are manufactured.
UK shaver sockets are designed to accept BS 4573 shaver plugs while also accepting Europlugs. In this configuration the supply is rated at only 200 mA, and UK regulations do not permit a shaver socket to be fitted or used for a higher current draw than this 200 mA maximum.
The Europlug is also used in parts of the Middle East, Africa, South America, and Asia.
CEE 7/17 unearthed plug
This is a round plug compatible with CEE 7/1, CEE 7/3, and CEE 7/5 sockets. It has two round pins measuring . The pins are not sheathed, in contrast to e.g. CEE 7/16 Europlugs. It may be rated at either 10 A or 16 A. A typical use is for appliances that exceed the 2.5 A rating of CEE 7/16 Europlugs. It may be used for unearthed Class II appliances (and in South Korea for all domestic non-earthed appliances). It is also defined as the Class II plug in Italian standard CEI 23-50.
It is sometimes called a contour plug, because its collar contour follows that of the socket's recess. The collar prevents accidental contact with the non-sheathed pins when the plug is inserted into or removed from a recessed socket.
It can be inserted into Israeli SI 32 outlets with some difficulty, as well as Danish (type K) ones. The Soviet GOST 7396 standard includes both the CEE 7/17 and the CEE 7/16 variant II plug.
China GB 2099.1-2008 and GB 1002-2008 (Type A & I)
The standard for Mainland Chinese plugs and sockets (excluding Hong Kong and Macau) is set out in GB 2099.1-2008 and GB 1002-2008. As part of China's commitment for entry into the WTO, the new CPCS (Compulsory Product Certification System) has been introduced, and compliant Chinese plugs are awarded the CCC Mark under this system. The plug is three-wire, earthed, rated at 10 A, 250 V and used for Class 1 applications; a slightly larger 16 A version also exists. The nominal pin dimensions of the 10 A version are 1.5 mm thick by 6.4 mm wide; the active and neutral pins are 18 mm long, and the earth pin is 21 mm long. It is similar to the Australian plug. Many three-pin sockets in China include a physical lockout preventing access to the active and neutral terminals unless the earth pin (which is slightly longer than the other two pins) is inserted first. China also uses American/Japanese NEMA 1-15 sockets and plugs for Class II appliances (however, polarised plugs with one prong wider than the other are not accepted); a common socket type that also accepts the Europlug (Type C) is also defined in GB 1002. The voltage at a Chinese socket of any type is 220 V.
Type I plugs and sockets from different countries have different pin lengths. This means that the uninsulated pins of a Chinese plug may become live while there is still a large enough gap between the faces of the plug and socket to allow a finger to touch the pin.
Danish Section 107-2-D1 earthed (Type K)
This Danish standard plug is described in the Danish Plug Equipment Section 107-2-D1 Standard sheet (SRAF1962/DB 16/87 DN10A-R). The Danish standard provides for sockets to have child-resistant shutters.
The Danish socket will also accept the CEE 7/16 Europlug or CEE 7/17 Schuko-French hybrid plug. CEE 7/4 (Schuko), CEE 7/7 (Schuko-French hybrid), and earthed CEE 7/6 French plugs will also fit into the socket but will not provide an earth connection and may be attached to appliances requiring more than the 13 A maximum rating of the socket.
A variation (standard DK 2-5a) of the Danish plug is for use only on surge protected computer sockets. It fits into the corresponding computer socket and the normal socket, but normal plugs deliberately do not fit into the special computer socket. The plug is often used in companies, but rarely in private homes.
There is a variation for hospital equipment with a rectangular left pin, which is used for life support equipment.
Traditionally, all Danish sockets were equipped with a switch to prevent touching live pins when connecting or disconnecting the plug. Today, sockets without switches are allowed, but they are required to have a recess that prevents touching the live pins. The shape of the plugs generally makes it difficult to touch the pins while connecting or disconnecting them.
Since the early 1990s, earthed sockets have been required in all new electrical installations in Denmark. Older sockets need not be earthed, but all sockets, including those in old installations, had to be protected by earth-fault interrupters (HFI or HPFI in Danish) by 1 July 2008.
As of 1 July 2008, wall sockets of the French CEE 7/5 type are permitted for installation in Denmark. This was done because little electrical equipment sold to private users is fitted with a Danish plug. In Europe, devices are usually sold with the CEE 7/16 Europlug or the hybrid CEE 7/7 plug, as these fit sockets in most countries. In Denmark, however, this often means that the protective earth is not connected.
CEE 7/3 sockets were not permitted until 15 November 2011. Many international travel adaptor sets sold outside Denmark match CEE 7/16 (Europlug) and CEE 7/7 (Schuko-French hybrid) plugs which can readily be used in Denmark.
Though Type K remains by far the most common socket in Danish homes as of January 2024, news sites and industry magazines have warned that plugging a Schuko plug directly into a Type K socket can give electric shocks ranging from painful, to dangerous enough to require hospitalisation, to life-threatening.
IEC 60906-1 (Type N)
In 1986, the International Electrotechnical Commission published IEC 60906-1, a specification for a plug and socket that look similar, but are not identical, to the Swiss plug and socket. This standard was intended to one day become common for all of Europe and other regions with 230 V mains, but the effort to adopt it as a European Union standard was put on hold in the mid-1990s.
The plug and socket are rated 16 A 250 V AC and are intended for use only on systems having nominal voltages between 200 V and 250 V AC. The plug pins are 4.5 mm in diameter; line and neutral are on centres 19 mm apart. The earth pin is offset by 3.0 mm. The line pin is on the right when looking at a socket with the earth pin offset upwards. Shutters over the line and neutral apertures are mandatory.
The only country to have officially adopted the standard is South Africa as SANS 164-2.
Brazil developed a plug resembling IEC 60906-1 as the national standard under specification NBR 14136. The NBR 14136 standard has two versions, neither of which has pin dimensions or ratings complying with IEC 60906-1. Use at 127 V is permitted by NBR 14136, which is against the intention of IEC 60906-1.
Israel SI32 (Type H)
The plug defined in SI 32 (IS16A-R) is used only in Israel, including the Gaza Strip and the West Bank. There are two versions: an older one with flat pins, and a newer one with round pins.
The pre-1989 system has three flat pins in a Y-shape, with line and neutral apart. The plug is rated at 16 A. In 1989 the standard was revised, with three round pins in the same locations designed to allow the socket to accept both older and newer Israeli plugs, and also non-grounded Europlugs (often used in Israel for equipment which does not need to be grounded and does not use more current than the Europlug is rated for). Pre-1989 sockets which accept only old-style plugs have become very rare in Israel.
Sockets have a defined polarity; looking at the front, neutral is to the left, ground at the bottom, and line to the right.
Italy (Type L)
Italian plugs and sockets are defined by the standard CEI 23-50 which superseded CEI 23-16. This includes models rated at 10 A and 16 A that differ in contact diameter and spacing (see below for details). Both are symmetrical, allowing the line and neutral contacts to be inserted in either direction. This plug is also commonly used in Chile and Uruguay.
10 A plugs and sockets: Pins are 4 mm in diameter, with centres spaced 19 mm apart. The 10 A three-pin earthed rear-entry plug is designated CEI 23-50 S 11 (there are also two side-entry versions, SPA 11 and SPB 11). The 10 A two-pin unearthed plug is designated CEI 23-50 S 10. The 10 A three-pin earthed socket is designated CEI 23-50 P 11, and the 10 A two-pin unearthed socket is designated CEI 23-50 P 10. Both 10 A sockets also accept CEE 7/16 plugs (Europlugs).
16 A plugs and sockets: Pins are 5 mm in diameter, with centres spaced 26 mm apart. The 16 A three-pin earthed rear-entry plug is designated CEI 23-50 S 17 (there are also two side-entry versions, SPA 17 and SPB 17). The 16 A two-pin unearthed plug is designated CEI 23-50 S 16. The 16 A three-pin earthed socket is designated CEI 23-50 P 17; there is no 16 A two-pin unearthed socket. The 16 A socket used to be referred to as per la forza motrice (for motive power, see above) or sometimes (inappropriately) industriale (industrial) or even calore (heat).
The two standards were initially adopted because up to the second half of the 20th century in many regions of Italy electricity was supplied by means of two separate consumer connections – one for powering illumination and one for other purposes – and these generally operated at different voltages, typically 127 V (a single phase from 220 V three-phase) and 220 V (a single phase from three-phase 380 V or two-phase from 220 V three-phase). The electricity on the two supplies was separately metered, was sold at different tariffs, was taxed differently and was supplied through separate and different sockets. Even though the two electric lines (and respective tariffs) were gradually unified beginning in the 1960s (the official, but purely theoretical, date was the summer of 1974) many houses had dual wiring and two electricity meters for years thereafter; in some zones of Lazio the 127 V network was provided for lighting until 1999. The two gauges for plugs and sockets thus became a de facto standard which is now formalized under CEI 23-50. Some older installations have sockets that are limited to either the 10 A or the 16 A style plug, requiring the use of an adaptor if the other gauge needs to be connected. Numerous cross adaptors were used.
Almost every appliance sold in Italy nowadays is fitted with a CEE 7/7 (German/French), CEE 7/16 or CEE 7/17 plug, but standard Italian sockets will not accept the first and the third, since the pins of CEE 7/7 and CEE 7/17 plugs are thicker (4.8 mm) than Italian ones (4 mm); moreover, their pins are not sheathed, and forcing them into a linear Italian socket may lead to electric shock. Adaptors, standardised in Italy under CEI 23-57, can be used to connect CEE 7/7 and CEE 7/17 plugs to linear CEI 23-50 sockets.
Europlugs are also in common use in Italy; they are standardized under CEI 23-34 S 1 for use with the 10 A socket and can be found fitted to Class II appliances with low current requirement (less than 2.5 A).
The current Italian standards provide for sockets to have child-resistant shutters ("Sicury" patent).
Italian multiple standard sockets
In modern installations in Italy (and in other countries where Type L plugs are used) it is usual to find sockets that can accept more than one standard.
The simplest type, designated CEI 23-50 P 17/11, has a central round hole flanked by two figure-8 shaped holes, allowing the insertion of CEI 23-50 S 10 (Italian 10 A unearthed plug), CEI 23-50 S 11 (Italian 10 A earthed plug), CEI 23-50 S 16 (Italian 16 A unearthed plug), CEI 23-50 S 17 (Italian 16 A earthed plug) and CEE 7/16 (Europlug) plugs. The advantage of this socket style is its small, compact face; its drawback is that it accepts neither CEE 7/7 nor CEE 7/17 plugs, which are very commonly found on new appliances sold in Italy. The Vimar brand claims to have patented this socket first in 1975 with its Bpresa model; however, other brands soon started selling similar products, mostly under the generic name presa bipasso (twin-gauge socket), which is now in common use.
A second, quite common type is called CEI 23-50 P 30 and looks like a Schuko socket, but adds a central earthing hole (optional according to CEI 23-50, but virtually always present). This design accepts CEE 7/4 (German), CEE 7/7 (German/French), CEE 7/16, CEE 7/17 (Konturenstecker, German/French unearthed), CEI 23-50 S 10 and CEI 23-50 S 11 plugs. Its drawbacks are that it is twice as large as a normal Italian socket, does not accept 16 A Italian plugs, and costs more; for those reasons such sockets were rarely installed in Italy until recent times.
Other types may push compatibility even further. The CEI 23-50 P 40 socket, which is quickly becoming the standard in Italy along with CEI 23-50 P 17/11, accepts CEE 7/4, CEE 7/7, CEE 7/16, CEE 7/17, CEI 23-50 S 10, CEI 23-50 S 11, CEI 23-50 S 16 and CEI 23-50 S 17 plugs; its drawback is that it does not accept SPA 11, SPB 11, SPA 17 and SPB 17 side-entry plugs; however almost no appliance is sold with these types, which are mainly used to replace existing plugs. The Vimar-brand universale (all purpose) socket accepts CEE 7/4, CEE 7/7, CEE 7/16, CEE 7/17, CEI 23-50 S 10, CEI 23-50 S 11, CEI 23-50 S 16, CEI 23-50 S 17 and also NEMA 1-15 (US/Japan) plugs (older versions also had extra holes to accept UK shaver plugs).
North America, Central America and IEC 60906-2
Most of North America and Central America, and some of South America, use connectors standardized by the National Electrical Manufacturers Association (NEMA). The devices are named using the format NEMA n-mmX, where n is an identifier for the configuration of pins and blades, mm is the maximum current rating, and X is either P for plug or R for receptacle. For example, NEMA 5-15R is a configuration type 5 receptacle supporting 15 A. Corresponding P and R versions are designed to be mated. Within the series, the arrangement and size of pins will differ, to prevent accidental mating of devices with a higher current draw than the receptacle can support.
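As a purely illustrative aid, the naming scheme can be decomposed mechanically. The following Python sketch splits a designation such as "5-15R" into its parts; the regular expression and field names are assumptions made for illustration and are not defined by NEMA.

```python
import re

# Illustrative decomposition of a NEMA-style designation "n-mmX" as described above.
# The pattern and field names are assumptions for illustration, not defined by NEMA.
NEMA_PATTERN = re.compile(r"^(?:NEMA\s+)?(?P<config>\d+)-(?P<amps>\d+)(?P<kind>[PR])$")

def parse_nema(designation: str) -> dict:
    """Split a NEMA designation into pin configuration, current rating and device kind."""
    match = NEMA_PATTERN.match(designation.strip().upper())
    if match is None:
        raise ValueError(f"not a NEMA-style designation: {designation!r}")
    return {
        "configuration": match.group("config"),     # pin/blade arrangement, e.g. "5"
        "max_current_A": int(match.group("amps")),  # maximum current rating
        "device": "plug" if match.group("kind") == "P" else "receptacle",
    }

print(parse_nema("NEMA 5-15R"))  # {'configuration': '5', 'max_current_A': 15, 'device': 'receptacle'}
print(parse_nema("14-50P"))      # {'configuration': '14', 'max_current_A': 50, 'device': 'plug'}
```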
NEMA 1-15 ungrounded (Type A)
NEMA-1 plugs have two parallel blades and are rated 15 A at 125 volts. They provide no ground connection but will fit a grounding NEMA 5-15 receptacle. Early versions were not polarised, but most plugs are polarised today via a wider neutral blade. (Unpolarised AC adaptors are a common exception.)
Harvey Hubbell patented a parallel-blade plug in 1913, in which the blades were of equal width (). In 1916 Hubbell received a patent for a polarised version in which one blade was both longer and wider than the other (). In the polarised version of NEMA 1-15, introduced in the 1950s, both blades are the same length and only the width varies.
Ungrounded NEMA-1 outlets are not permitted in new construction in the United States and Canada, but can still be found in older buildings.
NEMA 5-15 grounded (Type B)
The NEMA 5-15 plug has two flat parallel blades like NEMA 1-15, and a ground (earth) pin. It is rated 15 A at 125 volts. The ground pin is longer than the line and neutral blades, such that an inserted plug connects to ground before power. The ground hole is officially D-shaped, although some round holes exist. Both current-carrying blades on grounding plugs are normally narrow, since the ground pin enforces polarity. This socket is recommended in IEC standard 60906-2 for 120-volt 60 Hz installations.
The National Electrical Contractors Association's National Electrical Installation Standards (NECA 130-2010) recommends that sockets be mounted with the ground hole up, such that an object falling on a partially inserted connector contacts the ground pin first. However, the inverted orientation (with ground pin downwards) is more commonly used. The ground-down orientation has been called the "sad socket", "dismayed face", or "shocked face" by some.
Tamper-resistant sockets may be required in new residential construction, with shutters on the power blade sockets to prevent contact by objects inserted into the socket.
In stage lighting, this connector is sometimes known as PBG (for Parallel Blade with Ground), Edison, or Hubbell (the name of a common manufacturer).
NEMA 5-20
The NEMA 5-20 plug variant has its blades perpendicular to each other. The corresponding receptacle has a T-slot for the neutral blade, which accepts either 15 A parallel-blade plugs or 20 A plugs.
NEMA 14-50
NEMA 14-50 devices are frequently found in RV parks, since they are used for "shore power" connections of larger recreational vehicles. It was also formerly common to connect mobile homes to utility power via a 14-50 device. Newer applications include Tesla's Mobile Connector for vehicle charging, for which Tesla formerly recommended the installation of a 14-50 receptacle for home use.
Other NEMA types
30- and 50-amp rated sockets are often used for high-current appliances such as clothes dryers and electric stoves.
JIS C 8303, Class II unearthed
The Japanese Class II plug and socket appear physically identical to NEMA 1-15 and are likewise rated at 15 A. The relevant Japanese Industrial Standard, JIS C 8303, imposes stricter dimensional requirements on the plug housing, different marking requirements, and mandatory testing and type approval.
Older Japanese sockets and multi-plug adaptors are unpolarised—the slots in the sockets are the same size—and will accept only unpolarised plugs. Japanese plugs generally fit into most North American sockets without modification, but polarised North American plugs may require adaptors or replacement non-polarised plugs to connect to older Japanese sockets. In Japan the voltage is 100 V, and the frequency is either 50 Hz (Eastern Japan: Tokyo, Yokohama, Tohoku, Kawasaki, Sapporo, Sendai and Hokkaido) or 60 Hz (Western Japan: Osaka, Kyoto, Nagoya, Shikoku, Kyushu and Hiroshima) depending on whether the customer is located on the Osaka or Tokyo grid. Therefore, some North American devices which can be physically plugged into Japanese sockets may not function properly.
JIS C 8303, Class I earthed
Japan also uses a grounded plug similar to the North American NEMA 5-15, though it is less common than its NEMA 1-15 equivalent. Since 2005, new Japanese homes have been required to have Class I grounded sockets for connecting domestic appliances. This rule does not apply to sockets not intended for domestic appliances, but installing Class I sockets throughout the home is strongly advised.
Soviet standard GOST 7396 C 1 unearthed
This Soviet plug, still sometimes used in the region, has pin dimensions and spacing equal to those of the Europlug, but lacks the insulation sleeves. Unlike the Europlug, it is rated at 6 A. It has either a round body like the European CEE 7/2 or a flat body with a round base like CEE 7/17; the round base has no notches. The pins are parallel and do not converge. The body is made of fire-resistant thermoset plastic. The corresponding 6 A socket accepts the Europlug, but not other plugs, as its 4.5 mm holes are too small to accept the 4.8 mm pins of CEE 7/4, CEE 7/6 or CEE 7/7 plugs.
There were also moulded rubber plugs available for devices up to 16 A similar to CEE 7/17, but with a round base without any notches. They could be altered to fit a CEE 7/5 or CEE 7/3 socket by cutting notches with a sharp knife.
Swiss SN 441011 (Type J)
The Swiss standard, also used in Liechtenstein and Rwanda (and in other countries alongside other standards), is SN 441011 (until 2019 SN SEV 1011) Plugs and socket-outlets for household and similar purposes. The standard defines a hierarchical system of plugs and sockets with two, three and five pins, and 10 A or 16 A ratings. Sockets will accept plugs with the same or fewer pins and the same or lower ratings.
The standard also includes three-phase devices rated at 250 V (phase-to-neutral) / 440 V (phase-to-phase). It does not require the use of child protective shutters. The standard was first described in 1959.
10 A plugs and sockets (Type J)
SEV 1011 defines a "Type 1x" series of 10 A plugs and sockets. The type 11 plug is unearthed, with two 4 mm diameter round pins spaced 19 mm apart. The type 12 plug adds a central 4 mm diameter round earth pin, offset by 5 mm. The type 12 socket has no recess, while the type 13 socket is recessed. Both sockets will accept type 11 and type 12 plugs, and also the 2.5 A Europlug. Earlier type 11 and type 12 plugs had unsleeved line and neutral pins, which present a shock hazard when the plug is partially inserted into a non-recessed socket. The IEC Type J designation refers to SEV 1011's type 12 plugs and type 13 sockets.
Unique to Switzerland is a three-phase power socket compatible with single-phase plugs: The type 15 plug has three round pins, of the same dimensions as type 12, plus two smaller flat rectangular pins for two additional power phases. The type 15 socket is recessed, and has five openings (three round and two flat rectangular). It will accept plugs of types 11, 12, 15 and the Europlug.
16 A plugs and sockets
SEV 1011 also defines a "Type 2x" series of 16 A plugs and sockets. These are the same as their 10 A "Type 1x" counterparts, but replace the round pins with 4 mm × 5 mm rectangular pins. The sockets will accept "Type 1x" plugs. The unearthed type 21 plug has two rectangular pins, with centres 19 mm apart. The type 23 plug adds a central rectangular earth pin, offset by 5 mm. The recessed type 23 socket will accept plugs of types 11, 12, 21, 23 and the Europlug.
Again, the three-phase power socket is compatible with single-phase plugs, either of 10 A or 16 A ratings: The type 25 plug has three rectangular pins of the same dimensions as type 23, plus two rectangular pins of the same dimensions as type 15. The corresponding type 25 socket is recessed and will accept plugs of types 11, 12, 15, 21, 23, 25 and the Europlug.
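The acceptance hierarchy described above can be summarised in a small table. The Python sketch below simply transcribes the statements in this section; the labels "T11" to "T25" and "EU" (for the Europlug) are shorthand of my own, not formal SEV notation.

```python
# Socket-to-plug acceptance, transcribed from the SEV 1011 description above.
# "EU" denotes the 2.5 A Europlug; labels and structure are illustrative shorthand.
SWISS_SOCKET_ACCEPTS = {
    "T12": {"T11", "T12", "EU"},                              # unrecessed 10 A socket
    "T13": {"T11", "T12", "EU"},                              # recessed 10 A socket (IEC Type J)
    "T15": {"T11", "T12", "T15", "EU"},                       # recessed 10 A three-phase socket
    "T23": {"T11", "T12", "T21", "T23", "EU"},                # recessed 16 A socket
    "T25": {"T11", "T12", "T15", "T21", "T23", "T25", "EU"},  # recessed 16 A three-phase socket
}

def plug_accepted(socket_type: str, plug_type: str) -> bool:
    """Return True if the socket type is described above as accepting the plug type."""
    return plug_type in SWISS_SOCKET_ACCEPTS.get(socket_type, set())

assert plug_accepted("T25", "T12")      # a 10 A single-phase plug fits the 16 A three-phase socket
assert not plug_accepted("T13", "T23")  # a 16 A plug does not fit a 10 A socket
```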
Regulation of adaptors and extensions
A 2012 appendix to SEV 1011:2009, SN SEV 1011:2009/A1:2012 Plugs and socket-outlets for household and similar purposes – A1: Multiway and intermediate adaptors, cord sets, cord extension sets, travel adaptors and fixed adaptors, defines the requirements applicable to such adaptors and cord sets. It covers electrical safety and user requirements, including a prohibition on stacking (connecting one adaptor to another). Non-conforming products had to be withdrawn from the Swiss market by the end of 2018.
Thai three-pin plug TIS 166-2549 (Type O)
Thai Industrial Standard (TIS) 166-2547 and its subsequent update TIS 166-2549 replaced prior standards which were based on NEMA 1-15 and 5-15, as Thailand uses 220 V electricity. The plug has two round power pins 4.8 mm in diameter and 19 mm in length, insulated for 10 mm and spaced 19 mm apart, with an earthing pin of the same diameter and 21.4 mm in length, located 11.89 mm from the line connecting the two power pins. The earth pin spacing corresponds to that of NEMA 5 and provides compatibility with prior hybrid three-pin sockets, which accept NEMA 1-15, NEMA 5-15 and Europlugs, all of which have been variably used in Thailand. The hybrid socket is also defined in TIS 166-2547, in addition to a plain three-round-pin socket, with plans to replace the former and phase out support for NEMA-compatible plugs. Sockets are polarised (as in NEMA 5-15).
The plug is similar to, but not interchangeable with, the Israeli SI32 plug. The Thai plug is designated as "Type O" at IEC World Plugs.
Special purpose plugs and sockets
Special purpose sockets may be found in residential, industrial, commercial or institutional buildings. Examples of systems using special purpose sockets include:
"Clean" (low electrical noise) earth for use with computer systems,
Device for Connection of Luminaires (DCL) is a European standard for ceiling- and hanging light fixtures.
Emergency power supply,
Uninterruptible power supply for critical or life-support equipment,
Isolated power for medical instruments, tools used in wet conditions, or electric razors,
"Balanced" or "technical" power used in audio and video production studios,
Theatrical lighting,
CEE 17 "pin and sleeve" connectors, a series of industrial-grade (IP44) three-phase connectors used for industrial purposes and for carpentry and gardening appliances, and also used as weather-resistant connectors outdoors, for example for mains hook-ups for caravans, motorhomes, camper vans and tents at camp-sites.
Sockets for electric clothes dryers, electric ovens, and air conditioners with higher current rating.
Special-purpose sockets may be labelled or coloured to identify a reserved use of a system, or may have keys or specially shaped pins to prevent use of unintended equipment.
Single-phase electric stove plugs and sockets
The plugs and sockets used to power electric stoves from a single-phase line have to be rated for greater current values than those used with three-phase supply because all the power has to be transferred through two contacts, not three. If not hardwired to the supply, electric stoves may be connected to the mains with an appropriate high power connector. Some countries do not have wiring regulations for single-phase electric stoves. In Russia, an electric stove can often be seen connected with a 25 or 32 A connector.
In Norway, a 25 A grounded connector, rectangular with rounded corners, is used for single-phase stoves. The connector has three rectangular pins in a row, with the grounding pin longer than the other two. The corresponding socket is recessed to prevent shocks. The Norwegian standard is NEK 502:2005 – standard sheet X (socket) and sheet XI (plug). They are also known as the two-pole-and-earth variants of CEE 7/10 (socket) and CEE 7/11 (plug).
Shaver supply units
National wiring regulations sometimes prohibit the use of sockets adjacent to water taps, etc. A special socket, with an isolation transformer, may allow electric razors to be used near a sink. Because the isolation transformer is of low rating, such outlets are not suitable to operate higher-powered appliances such as hair dryers.
An IEC standard, 61558-2-5, adopted by CENELEC and as a national standard in some countries, describes one type of shaver supply unit. Shaver sockets may accept multiple two-pin plug types, including Australian (Type I) and BS 4573 plugs. The isolation transformer often includes a 115 V output accepting two-pin US plugs (Type A). Shaver supply units must also be current-limited; IEC 61558-2-5 specifies a minimum rating of 20 VA and a maximum of 50 VA. Sockets are marked with a shaver symbol, and may also say "shavers only".
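To illustrate the effect of the 20 to 50 VA limit, a rough back-of-the-envelope calculation (assuming a simple resistive load, which is only an approximation) shows the small currents such a unit can deliver and why it cannot run a hair dryer.

```python
# Rough illustration of why a shaver supply unit cannot run a high-power appliance:
# with at most 50 VA available, the deliverable current is a fraction of an ampere.
def approx_max_current(rating_va: float, output_voltage: float) -> float:
    """Approximate continuous current (A) for a given VA rating, assuming a resistive load."""
    return rating_va / output_voltage

print(round(approx_max_current(50, 230), 2))  # about 0.22 A from the 230 V output
print(round(approx_max_current(50, 115), 2))  # about 0.43 A from the 115 V output
# By contrast, a roughly 1500 W hair dryer would draw about 6.5 A at 230 V.
```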
Isolation transformers and dedicated NEMA 1-15 shaver receptacles were once standard installation practice in North America, but now a GFCI receptacle is used instead. This provides the full capacity of a standard receptacle but protects the user of a razor or other appliance from leakage current.
Image caption: differences between the BS 4573 shaver plug and the Europlug (Type C). The BS 4573 plug has round 5 mm pins spaced 16 mm apart; the Europlug has 4 mm pins spaced 19 mm apart, so an adaptor is required to plug a Europlug into a BS 4573 socket.
Comparison of standard types
Unusual types
Lampholder plug
A lampholder plug fits into a light socket in place of a light bulb, to connect appliances to lighting circuits. Where a lower tariff was applied to electric power used on lighting circuits, lampholder plugs enabled consumers to reduce their electricity costs.
Lampholder plugs are rarely fused. Edison screw lampholder adaptors (for NEMA 1-15 plugs) are still commonly used in the Americas.
Soviet adaptor plugs
Some appliances sold in the Soviet Union had a flat unearthed plug with an additional pass-through socket on the top, allowing a stacked arrangement of plugs. The usual Soviet apartment of the 1960s had very few sockets, so this design was very useful, but somewhat unsafe; the brass cylinders of the secondary socket were uncovered at the ends (to allow them to be unscrewed easily), recessed by only 3 mm, and provided bad contact because they relied on the secondary plug's bisected expanding pins. The pins of the secondary plug (which lacked insulation sleeves) could not be inserted into the cylindrical sockets completely, leaving a 5 mm gap between the primary and secondary plugs. The adaptors were mostly used for low power appliances (for example, connecting both a table lamp and a radio to a socket).
UK Walsall Gauge plug
Unlike the standard BS 1363 plug found in the UK, the Walsall Gauge plug has its earth pin on a horizontal axis and its live and neutral pins on a vertical axis. This style of plug and socket was used by university laboratories (from batteries) and by the BBC, and is still in use in parts of the London Underground for 110 V AC supplies. In the 1960s they were used for 240 V DC in the power laboratory of the Electrical Engineering department of what was then University College, Cardiff. Power was supplied by the public 240 V DC mains, which remained available alongside the 240 V AC mains until circa 1969, and thereafter from in-house rectifiers. They were also used in the Ministry of Defence Main Building, on circuits powered from the standby generators, to stop staff from plugging in unauthorised devices, and were known to be used in some British Rail offices for the same reason.
Italian BTicino brand Magic Security connector
In the 1960s, the Italian firm BTicino introduced an alternative to the Europlug or CEI 23-16 connectors then in use, called Magic Security. The socket is rectangular, with lateral key pins and indentations to maintain polarisation, and to prevent insertion of a plug with different current ratings. Three single-phase general purpose connectors were rated 10 A, 16 A and 20 A; and a three-phase industrial connector rated 10 A; all of them have different key-pin positioning so plugs and sockets cannot be mismatched. The socket is closed by a safety lid (bearing the word Magic on it) which can be opened only with an even pressure on its surface, thus preventing the insertion of objects (except the plug itself) inside the socket. The contacts are positioned on both sides of the plug; the plug is energised only when it is inserted fully into the socket.
The system is not compatible with Italian CEI plugs, nor with Europlugs. Appliances were never sold fitted with these security plugs, and the use of adaptors would defeat the safety features, so the supplied plugs had to be cut off and replaced with the security connector. Even so, the Magic security system had some success at first because its enhanced safety features appealed to customers; standard connectors of the day were not considered safe enough. The decline of the system occurred when safety lids similar to the Magic type were developed for standard sockets.
In Italy, the system was never definitively abandoned. Though very rarely seen today, it is still listed as available in BTicino's catalogue (except for the three-phase version, which stopped being produced in July 2011).
In Chile, 10 A Magic connectors are commonly used for computer/laboratory power networks, as well as for communications or data equipment. This allows delicate electronics equipment to be connected to an independent circuit breaker, usually including a surge protector or an uninterruptible power supply backup. The different style of plug makes it more difficult for office workers to connect computer equipment to a standard unprotected power line, or to overload the UPS by connecting other office appliances.
In Iceland, Magic plugs were widely used in homes and businesses alongside Europlug and Schuko installations. Their installation in new homes was still quite common even in the late 1980s.