Cirrostratus cloud
https://en.wikipedia.org/wiki/Cirrostratus%20cloud

Cirrostratus is a high-altitude, very thin, generally uniform stratiform genus-type of cloud, composed of ice crystals. It is difficult to detect and can produce halos when it takes the form of thin cirrostratus nebulosus. Thicker cirrostratus fibratus has a fibrous texture and produces no halos. On the approach of a frontal system, the cirrostratus often begins as nebulosus and turns to fibratus. If the cirrostratus begins as fragmented patches of cloud, it often means the front is weak. Cirrostratus is usually located above 5.5 km (18,000 ft). Its presence indicates a large amount of moisture in the upper troposphere. Clouds resembling cirrostratus occasionally form in polar regions of the lower stratosphere; polar stratospheric clouds can take on this appearance when composed of tiny supercooled droplets of water or nitric acid.
Cirrostratus clouds sometimes signal the approach of a warm front if they form after cirrus and spread from one area across the sky, and thus may be signs that precipitation might follow in the next 12 to 24 hours or as soon as 6–8 hours if the front is fast moving. If the cirrostratus is broken fibratus, it can mean that the front is weak and that stratus rather than nimbostratus will be the precipitating cloud (meaning drizzle or snow grains instead of moderate rain or snow). Cumulus humilis or stratocumulus clouds are often found below cirrostratus formations, due to the stable air associated with cirrostratus creating an inversion and restricting convection, causing cumuliform clouds to become flattened. Contrails also tend to spread out and can be visible for up to an hour in cirrostratus.
As well as referring to haze or light mist, the phrase "milky sunshine" is often used to describe the milky look of the sky when cirrostratus is present.
Species: Cirrostratus fibratus (Cs fib) is a high fibrous sheet similar to cirrus but with less detached semi-merged filaments. It is reported in the SYNOP code as CH8 or as CH5 or 6 (depending on the amount of sky covered) if increasing in amount. If the high cloud covers the entire sky and takes on the form of a featureless veil, it is classified as cirrostratus of the species nebulosus (Cs neb) and is coded CH7.
Varieties: Cirrostratus species have no opacity-based varieties as they are always translucent. Two pattern-based varieties are sometimes seen with the species fibratus. These are the closely spaced duplicatus and wavy undulatus types similar to those seen with cirrus fibratus. Pattern-based varieties are not commonly associated with the species nebulosus due to its lack of features.
Supplementary features: Cirrostratus produces no precipitation or virga, and is not accompanied by any accessory clouds.
Genitus mother clouds: Cirrostratus fibratus cirrocumulogenitus sometimes appears as the latter cloud flattens and loses some of its stratocumuliform structure. Cirrostratus fibratus cumulonimbogenitus may form if the cirriform top of a mature thundercloud spreads and flattens sufficiently to become a high stratiform cloud.
Mutatus mother clouds: Cirrostratus fibratus cirromutatus or cirrocumulomutatus are the result of a complete transformation from cirrus and cirrocumulus genus types. Cirrostratus nebulosus altostratomutatus results when a high grey nebulous altostratus layer thins out into a whitish layer of featureless high cloud.
Stratus cloud
https://en.wikipedia.org/wiki/Stratus%20cloud

Stratus clouds are low-level clouds characterized by horizontal layering with a uniform base, as opposed to convective or cumuliform clouds formed by rising thermals. The term stratus describes flat, hazy, featureless clouds at low altitudes varying in color from dark gray to nearly white. The word stratus comes from the Latin prefix strato-, meaning "layer". Stratus clouds may produce a light drizzle or a small amount of snow. These clouds are essentially above-ground fog, formed either through the lifting of morning fog or through cold air moving at low altitudes. Some call these clouds "high fog" for their fog-like form.
Formation
Stratus clouds form when weak vertical currents lift a layer of air off the ground and the air cools adiabatically as it depressurizes, following the lapse rate. This cooling raises the relative humidity until the layer saturates and condenses. This occurs in environments of high atmospheric stability.
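The relationship between lift, cooling, and saturation can be made concrete with a rough calculation. The sketch below is not from the article; it assumes the standard dry adiabatic lapse rate and Espy's well-known approximation for the lifting condensation level, and simply estimates the height at which a lifted layer saturates into stratus:

```python
# A minimal sketch of the adiabatic-cooling argument: a lifted parcel cools
# at the dry adiabatic lapse rate, and once it cools to its dew point the
# relative humidity reaches 100% and a stratus layer can condense.

DRY_ADIABATIC_LAPSE = 9.8   # parcel cooling in degC per km of lift
DEW_POINT_LAPSE = 1.8       # approximate dew-point drop in degC per km

def lifting_condensation_level_m(surface_temp_c: float, dew_point_c: float) -> float:
    """Height (m) at which a lifted parcel saturates (Espy's approximation)."""
    spread = surface_temp_c - dew_point_c
    # Parcel temperature and dew point converge at ~(9.8 - 1.8) degC per km,
    # which gives the familiar ~125 m of lift per degC of spread.
    return 1000.0 * spread / (DRY_ADIABATIC_LAPSE - DEW_POINT_LAPSE)

# Example: a cool, humid morning with a 2 degC temperature/dew-point spread
# puts the stratus base at roughly 250 m, essentially lifted fog.
print(f"{lifting_condensation_level_m(10.0, 8.0):.0f} m")
```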
Description
Stratus clouds look like featureless gray to white sheets of cloud. They can be composed of water droplets, supercooled water droplets, or ice crystals depending upon the ambient temperature.
Sub-forms
Species
Stratus nebulosus clouds appear as a featureless or nebulous veil or layer of stratus clouds with no distinctive structure. They are found at low altitudes and are a good sign of atmospheric stability, which indicates continued stable weather. Stratus nebulosus may produce light rain, drizzle, or flakes of snow. Stratus fractus clouds, on the other hand, have an irregular shape and form with a clearly fragmented or ragged appearance. They mostly appear under the precipitation of major rain-bearing clouds, namely nimbostratus and cumulonimbus clouds, and are classified as types of pannus clouds. Stratus fractus can also form beside mountain slopes without the presence of nimbus clouds (clouds that precipitate), and their color can range from dark grey to almost white.
Opacity-based varieties
Stratus fractus is not divided into varieties, but stratus nebulosus is divided into two. The stratus opacus variety appears as a nebulous or milky sheet of the nebulosus species that is opaque enough to block the sun from view. Stratus translucidus is the other variety of the nebulosus species. These clouds are thinner than the opacus variety: the cloud is translucent enough that the position of the Sun or Moon can be observed from Earth's surface.
Pattern-based variety
Stratus clouds have only one pattern-based variety, stratus undulatus. Mild undulations can be observed in this cloud, which is associated only with the nebulosus species. Though rare, this cloud formation is caused by disturbances in gentle wind shear. Stratus undulatus clouds are more common in stratus stratocumulomutatus clouds, where the wind grows stronger with height.
Genitus mother clouds
Stratus cumulogenitus clouds occur when the base of cumulus clouds spreads, creating a nebulous sheet of stratiform clouds. This can also occur on nimbostratus clouds (stratus nimbostratogenitus) and on cumulonimbus clouds (stratus cumulonimbogenitus). Stratus fractus clouds can also form under the base of precipitation-bearing clouds and are classified as pannus clouds. Stratus clouds may also form from formation mechanisms that are not typical for the cloud type, for example, Stratus homogenitus, which are stratus formed by human activity, Stratus cataractagenitus, which are formed from the spray of waterfalls, and Stratus silvagenitus, which are formed by evaporation or evapotranspiration occurring in a forest.
Mutatus mother cloud
Stratus only has one mutatus mother cloud. Stratus stratocumulomutatus clouds occur when stratocumulus opacus patches fuse to create a stratiform layer.
Accessory clouds and supplementary feature
Stratus clouds do not produce accessory clouds, but one supplementary feature, praecipitatio (from the Latin for "precipitation"), is associated with them. Stratus clouds are generally too low to produce virga, shafts of rain that evaporate before reaching the ground, although higher stratus clouds can produce it.
Forecast
A stratus cloud can form from stratocumulus spreading out under an inversion, indicating a continuation of prolonged cloudy weather with drizzle for several hours and then an improvement as it breaks into stratocumulus. Stratus clouds can persist for days in anticyclone conditions. It is common for a stratus to form on a weak warm front, rather than the usual nimbostratus.
Effects on climate
According to Sednev, Menon, and McFarquhar, Arctic stratus and other low-level clouds form roughly 50% of the annual cloud cover in Arctic regions, causing a large effect on the energy emissions and absorptions through radiation.
Relation to other clouds
Cirrostratus clouds
Cirrostratus clouds, a very high ice-crystal form of stratiform cloud, can appear as a milky sheen in the sky or as a striated sheet. They are sometimes similar to altostratus and are distinguishable from the latter because the Sun or Moon is always clearly visible through transparent cirrostratus, in contrast to altostratus, which tends to be opaque or translucent. Cirrostratus comes in two species, fibratus and nebulosus. The ice crystals in these clouds vary depending upon the height in the cloud. Towards the bottom, at warmer temperatures, the crystals tend to be long, solid, hexagonal columns. Towards the top of the cloud, at colder temperatures, the predominant crystal types are thick, hexagonal plates and short, solid, hexagonal columns. These clouds commonly produce halos, and sometimes the halo is the only indication that such clouds are present. They are formed by warm, moist air being lifted slowly to a very high altitude. When a warm front approaches, cirrostratus clouds become thicker and descend, forming altostratus clouds, and rain usually begins 12 to 24 hours later.
Altostratus clouds
Nimbostratus clouds
Stratocumulus clouds
A stratocumulus cloud is another cloud type combining cumuliform and stratiform characteristics. Like stratus clouds, they form at low levels; but like cumulus clouds (and unlike stratus clouds), they form via convection. Unlike cumulus clouds, their growth is almost completely arrested by a strong inversion, causing them to flatten out like stratus clouds and giving them a layered appearance. These clouds are extremely common, covering on average around twenty-three percent of the Earth's oceans and twelve percent of the Earth's continents. They are less common in tropical areas and commonly form after cold fronts. Additionally, stratocumulus clouds reflect a large amount of the incoming sunlight, producing a net cooling effect. Stratocumulus clouds can produce drizzle, which stabilizes the cloud by warming it and reducing turbulent mixing.
Factory
https://en.wikipedia.org/wiki/Factory

A factory, manufacturing plant or production plant is an industrial facility, often a complex consisting of several buildings filled with machinery, where workers manufacture items or operate machines which process each item into another. Factories are a critical part of modern economic production, with the majority of the world's goods being created or processed within them.
Factories arose with the introduction of machinery during the Industrial Revolution, when the capital and space requirements became too great for cottage industry or workshops. Early factories that contained small amounts of machinery, such as one or two spinning mules, and fewer than a dozen workers have been called "glorified workshops".
Most modern factories have large warehouses or warehouse-like facilities that contain heavy equipment used for assembly line production. Large factories tend to be located with access to multiple modes of transportation, some having rail, highway and water loading and unloading facilities. In some countries like Australia, it is common to call a factory building a "Shed".
Factories may either make discrete products or some type of continuously produced material, such as chemicals, pulp and paper, or refined oil products. Factories manufacturing chemicals are often called plants and may have most of their equipment – tanks, pressure vessels, chemical reactors, pumps and piping – outdoors and operated from control rooms. Oil refineries have most of their equipment outdoors.
Discrete products may be final goods, or parts and sub-assemblies which are made into final products elsewhere. Factories may be supplied parts from elsewhere or make them from raw materials. Continuous production industries typically use heat or electricity to transform streams of raw materials into finished products.
The term mill originally referred to the milling of grain, which usually used natural resources such as water or wind power until those were displaced by steam power in the 19th century. Because many processes like spinning and weaving, iron rolling, and paper manufacturing were originally powered by water, the term survives as in steel mill, paper mill, etc.
History
Max Weber considered production during ancient and medieval times as never warranting classification as factories, the methods of production and the contemporary economic situation being incomparable to modern or even pre-modern developments of industry. In ancient times, production was at first limited to the household, and later developed into a separate endeavor independent of the place of habitation. Production at that stage was only beginning to be characteristic of industry, termed "unfree shop industry", a situation arising especially under the reign of the Egyptian pharaohs, with slave employment and no differentiation of skills within the slave group comparable to the modern division of labour.
According to translations of Demosthenes and Herodotus, Naucratis was a factory, or the only factory, in the entirety of ancient Egypt. A 1983 source (Hopkins) states that the largest factory production in ancient times was of 120 slaves within fourth-century BC Athens. An article in the New York Times dated 13 October 2011 states:
... discovered at Blombos Cave, a cave on the south coast of South Africa where 100,000-year-old tools and ingredients were found with which early modern humans mixed an ochre-based paint.
One source states that the first machines were traps used to assist with the capture of animals, corresponding to a machine as a mechanism that operates independently, or with very little human force, and that can be used repeatedly, operating exactly the same way on every occasion of functioning. The wheel and, later, the spoked wheel were invented long before the Iron Age, which began approximately 1200–1000 BC. However, other sources define machinery as a means of production.
Archaeology provides a date for the earliest city, Tell Brak, of around 5000 BC (Ur et al. 2006), and therefore a date by which increased community size and population created the cooperation and demand that would make something like factory-level production a conceivable necessity.
The archaeologist Charles Bonnet unearthed the foundations of numerous workshops in the city of Kerma, proving that as early as 2000 BC Kerma was a large urban capital.
The watermill was first made in the Persian Empire some time before 350 BC. In the third century BC, Philo of Byzantium described a water-driven wheel in his technical treatises. Factories producing garum were common in the Roman Empire. The Barbegal aqueduct and mills are an industrial complex from the second century AD found in southern France. By the fourth century AD, the Roman Empire had a water-milling installation with a capacity to grind 28 tons of grain per day, a rate sufficient to meet the needs of 80,000 persons.
The large population increase in medieval Islamic cities, such as Baghdad's 1.5 million population, led to the development of large-scale factory milling installations with higher productivity to feed and support the large growing population. A tenth-century grain-processing factory in the Egyptian town of Bilbays, for example, milled an estimated 300 tons of grain and flour per day. Both watermills and windmills were widely used in the Islamic world at the time.
The Venice Arsenal also provides one of the first examples of a factory in the modern sense of the word. Founded in 1104 in Venice, Republic of Venice, several hundred years before the Industrial Revolution, it mass-produced ships on assembly lines using manufactured parts. The Venice Arsenal apparently produced nearly one ship every day and, at its height, employed 16,000 people.
Industrial Revolution
One of the earliest factories was John Lombe's water-powered silk mill at Derby, operational by 1721. By 1746, an integrated brass mill was working at Warmley near Bristol. Raw material went in at one end, was smelted into brass and was turned into pans, pins, wire, and other goods. Housing was provided for workers on site. Josiah Wedgwood in Staffordshire and Matthew Boulton at his Soho Manufactory were other prominent early industrialists, who employed the factory system.
The factory system came into widespread use somewhat later, when cotton spinning was mechanized.
Richard Arkwright is the person credited with inventing the prototype of the modern factory. After he patented his water frame in 1769, he established Cromford Mill, in Derbyshire, England, significantly expanding the village of Cromford to accommodate the migrant workers new to the area. The factory system was a new way of organizing workforce made necessary by the development of machines which were too large to house in a worker's cottage. Working hours were as long as they had been for the farmer, that is, from dawn to dusk, six days per week. Overall, this practice essentially reduced skilled and unskilled workers to replaceable commodities. Arkwright's factory was the first successful cotton spinning factory in the world; it showed unequivocally the way ahead for industry and was widely copied.
Between 1770 and 1850 mechanized factories supplanted traditional artisan shops as the predominant form of manufacturing institution, because the larger-scale factories enjoyed a significant technological and supervision advantage over the small artisan shops. The earliest factories (using the factory system) developed in the cotton and wool textiles industry. Later generations of factories included mechanized shoe production and manufacturing of machinery, including machine tools. After this came factories that supplied the railroad industry, including rolling mills, foundries and locomotive works, along with agricultural-equipment factories that produced cast-steel plows and reapers. Bicycles were mass-produced beginning in the 1880s.
The Nasmyth, Gaskell and Company's Bridgewater Foundry, which began operation in 1836, was one of the earliest factories to use modern materials handling such as cranes and rail tracks through the buildings for handling heavy items.
Large-scale electrification of factories began around 1900 after the development of the AC motor, which runs at a constant speed determined by the number of poles and the electrical supply frequency. At first, larger motors were added to line shafts, but as soon as small-horsepower motors became widely available, factories switched to unit drive. Eliminating line shafts freed factories of layout constraints and allowed factory layout to become more efficient. Electrification enabled sequential automation using relay logic.
Assembly line
Henry Ford further revolutionized the factory concept in the early 20th century with the innovation of mass production. Highly specialized laborers situated alongside a series of rolling ramps would build up a product such as (in Ford's case) an automobile. This concept dramatically decreased production costs for virtually all manufactured goods and brought about the age of consumerism.
In the mid- to late 20th century, industrialized countries introduced next-generation factories with two improvements:
Advanced statistical methods of quality control, pioneered by the American mathematician William Edwards Deming, who had initially been ignored by his home country. Quality control turned Japanese factories into world leaders in cost-effectiveness and production quality.
Industrial robots on the factory floor, introduced in the late 1970s. These computer-controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. This too cut costs and improved speed.
Some speculation as to the future of the factory includes scenarios with rapid prototyping, nanotechnology, and orbital zero-gravity facilities. There is some scepticism about the development of the factories of the future if the robotic industry is not matched by a higher technological level of the people who operate it. According to some authors, the four basic pillars of the factories of the future are strategy, technology, people and habitability, which would take the form of a kind of "laboratory factories", with management models that allow "producing with quality while experimenting to do it better tomorrow".
Historically significant factories
Venetian Arsenal
Cromford Mill
Lombe's Mill
Soho Manufactory
Portsmouth Block Mills
Slater Mill Historic Site
Lowell Mills
Springfield Armory
Harpers Ferry Armory
Nasmyth, Gaskell and Company also called the Bridgewater Foundry
Baldwin Locomotive Works
Highland Park Ford Plant
Ford River Rouge Complex
Hawthorne Works
Stalingrad Tractor Plant
Triangle Shirtwaist Factory
Siting the factory
Before the advent of mass transportation, factories' needs for ever-greater concentrations of labourers meant that they typically grew up in an urban setting or fostered their own urbanization. Industrial slums developed, and reinforced their own development through the interactions between factories, as when one factory's output or waste-product became the raw materials of another factory (preferably nearby). Canals and railways grew as factories spread, each clustering around sources of cheap energy, available materials and/or mass markets. The exception proved the rule: even greenfield factory sites such as Bournville, founded in a rural setting, developed their own housing and profited from convenient communications systems.
Regulation curbed some of the worst excesses of industrialization's factory-based society, with a series of Factory Acts leading the way in Britain. Trams, automobiles and town planning encouraged the separate development of industrial suburbs and residential suburbs, with labourers commuting between them.
Though factories dominated the Industrial Era, the growth in the service sector eventually began to dethrone them: the focus of labour, in general, shifted to central-city office towers or to semi-rural campus-style establishments, and many factories stood deserted in local rust belts.
The next blow to the traditional factories came from globalization. Manufacturing processes (or their logical successors, assembly plants) in the late 20th century re-focussed in many instances on Special Economic Zones in developing countries or on maquiladoras just across the national boundaries of industrialized states. Further re-location to the least industrialized nations appears possible as the benefits of out-sourcing and the lessons of flexible location apply in the future.
Governing the factory
Much of management theory developed in response to the need to control factory processes. Assumptions on the hierarchies of unskilled, semi-skilled and skilled laborers and their supervisors and managers still linger on; however an example of a more contemporary approach to handle design applicable to manufacturing facilities can be found in Socio-Technical Systems (STS).
Shadow factories
In Britain, a shadow factory is one of a number of manufacturing sites built in dispersed locations in times of war to reduce the risk of disruption due to enemy air-raids and often with the dual purpose of increasing manufacturing capacity. Before World War II Britain had built many shadow factories.
Production of the Supermarine Spitfire at its parent company's base at Woolston, Southampton was vulnerable to enemy attack as a high-profile target and was well within range of Luftwaffe bombers. Indeed, on 26 September 1940 this facility was completely destroyed by an enemy bombing raid. Supermarine had already established a plant at Castle Bromwich; this action prompted them to further disperse Spitfire production around the country with many premises being requisitioned by the British Government.
Connected to the Spitfire was production of its equally important Rolls-Royce Merlin engine. Rolls-Royce's main aero engine facility was located at Derby; the need for increased output was met by building new factories in Crewe and Glasgow and by using a purpose-built factory of Ford of Britain at Trafford Park, Manchester.
Shutter speed
https://en.wikipedia.org/wiki/Shutter%20speed

In photography, shutter speed or exposure time is the length of time that the film or digital sensor inside the camera is exposed to light (that is, when the camera's shutter is open) when taking a photograph.
The amount of light that reaches the film or image sensor is proportional to the exposure time; a shutter speed of 1/500 of a second, for example, lets in half as much light as 1/250 of a second.
Introduction
The camera's shutter speed, the lens's aperture or f-stop, and the scene's luminance together determine the amount of light that reaches the film or sensor (the exposure). Exposure value (EV) is a quantity that accounts for the shutter speed and the f-number. Once the sensitivity of the recording surface (film or sensor) is set as an ISO value (e.g. ISO 200, ISO 400), the light from the photographed scene can be controlled through aperture and shutter speed to match the film or sensor sensitivity. A good exposure is achieved when all the details of the scene are legible in the photograph. Too much light let into the camera results in an overly pale image ("over-exposure"), while too little light results in an overly dark image ("under-exposure").
Multiple combinations of shutter speed and f-number can give the same exposure value (EV). According to the exposure value formula, doubling the exposure time doubles the amount of light (subtracts 1 EV). Reducing the aperture diameter in steps of one over the square root of two halves the light let into the camera, following the standard scale f/1, f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22, f/32, and so on. For example, f/2 lets four times more light into the camera than f/4 does. A shutter speed of 1/125 s with an f/8 aperture gives the same exposure value as a 1/250 s shutter speed with an f/5.6 aperture, the same as a 1/500 s shutter speed with an f/4 aperture, or 1/60 s at f/11.
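These equivalences can be checked numerically. The following is a minimal sketch, assuming the common definition EV = log2(N²/t) for f-number N and exposure time t at ISO 100; the function name is illustrative, not a standard API:

```python
import math

# Combinations of f-number and exposure time with equal EV admit the same
# total light, which is the equivalence described in the text above.

def exposure_value(f_number: float, exposure_time_s: float) -> float:
    """EV = log2(N^2 / t) for f-number N and exposure time t in seconds."""
    return math.log2(f_number ** 2 / exposure_time_s)

# The equivalent exposures from the text: 1/125 s at f/8, 1/250 s at f/5.6,
# 1/500 s at f/4, and 1/60 s at f/11 all land near EV 13 (nominal f-stops
# and shutter steps are rounded, so the values agree only approximately).
for n, t in [(8, 1/125), (5.6, 1/250), (4, 1/500), (11, 1/60)]:
    print(f"f/{n} at {t:.4f} s -> EV {exposure_value(n, t):.1f}")
```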
In addition to its effect on exposure, the shutter speed changes the way movement appears in photographs. Very short shutter speeds can be used to freeze fast-moving subjects, for example at sporting events. Very long shutter speeds are used to intentionally blur a moving subject for effect. Short exposure times are sometimes called "fast", and long exposure times "slow".
Adjustments to the aperture need to be compensated by changes of the shutter speed to keep the same (right) exposure.
In the early days of photography, available shutter speeds were not standardized and typical sequences varied by manufacturer; neither were apertures or film sensitivity standardized (at least three different national standards existed). The problem was eventually solved by adopting a standardized aperture scale in which each major step exactly doubled or halved the amount of light entering the camera (f/2.8, f/4, f/5.6, f/8, f/11, f/16, etc.), together with a standardized 2:1 scale for shutter speed, so that opening the aperture by one stop and shortening the shutter speed by one step gave an identical exposure. The agreed standard shutter speeds are 1 s, 1/2 s, 1/4 s, 1/8 s, 1/15 s, 1/30 s, 1/60 s, 1/125 s, 1/250 s, 1/500 s and 1/1000 s.
With this scale, each increment roughly doubles the amount of light (longer time) or halves it (shorter time).
Camera shutters often include one or two other settings for making very long exposures:
B (for bulb) keeps the shutter open as long as the shutter release is held.
T (for time) keeps the shutter open (once the shutter-release button has been pressed) until the shutter release is pressed again.
The ability of the photographer to take images without noticeable blur from camera movement is an important parameter in choosing the slowest possible shutter speed for a handheld camera. The rough guide used by most 35 mm photographers is that the slowest shutter speed that can be used easily without much blur due to camera shake is the one numerically closest to the lens focal length: for handheld use of a 35 mm camera with a 50 mm normal lens, the closest shutter speed is 1/60 s (closest to "50"), while for a 200 mm lens it is recommended not to choose shutter speeds below 1/250 s. This rule can be refined with knowledge of the intended application for the photograph: an image intended for significant enlargement and close-up viewing requires faster shutter speeds to avoid obvious blur. Through practice and special techniques, such as bracing the camera, arms, or body to minimize camera movement, or using a monopod or tripod, slower shutter speeds can be used without blur. If a shutter speed is too slow for hand holding, a camera support, usually a tripod, must be used. Image stabilization on digital cameras or lenses can often permit the use of shutter speeds 3–4 stops slower (exposures 8–16 times longer).
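The handholding rule above can be written as a small helper. This is a sketch under stated assumptions: the crop-factor parameter and the "each stabilization stop doubles the usable time" figure are illustrative extensions of the rule, not values from the article:

```python
# The reciprocal handholding rule: slowest safe shutter time is roughly
# 1 / (focal length), optionally scaled by sensor crop factor and image
# stabilization (assumed here to double the usable time per stop).

def slowest_handheld_shutter_s(focal_length_mm: float,
                               crop_factor: float = 1.0,
                               stabilization_stops: int = 0) -> float:
    """Slowest shutter time (s) usually safe without visible camera shake."""
    base = 1.0 / (focal_length_mm * crop_factor)   # the 1/focal-length rule
    return base * (2 ** stabilization_stops)       # each IS stop doubles it

# 200 mm lens on a full-frame body: ~1/200 s; with 3 stops of stabilization
# the same lens can be handheld around 1/25 s.
print(slowest_handheld_shutter_s(200))          # 0.005
print(slowest_handheld_shutter_s(200, 1.0, 3))  # 0.04
```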
Shutter priority refers to a shooting mode used in cameras. It allows the photographer to choose a shutter speed setting and lets the camera decide the correct aperture. This is sometimes referred to as Shutter Speed Priority Auto Exposure, Tv (time value) mode on Canon cameras, or S mode on Nikon and most other brands.
Creative utility in photography
Shutter speed is one of several methods used to control the amount of light recorded by the camera's digital sensor or film. It is also used to manipulate the visual effects of the final image.
Slower shutter speeds are often selected to suggest the movement of an object in a still photograph.
Excessively fast shutter speeds can cause a moving subject to appear unnaturally frozen. For instance, a running person may be caught with both feet in the air with all indication of movement lost in the frozen moment.
When a slower shutter speed is selected, a longer time passes from the moment the shutter opens till the moment it closes. More time is available for movement in the subject to be recorded by the camera as a blur.
A slightly slower shutter speed will allow the photographer to introduce an element of blur, either in the subject, where, in our example, the feet, which are the fastest moving element in the frame, might be blurred while the rest remains sharp; or if the camera is panned to follow a moving subject, the background is blurred while the subject remains relatively sharp.
The exact point at which the background or subject will start to blur depends on the speed at which the object is moving, the angle that the object is moving in relation to the camera, the distance it is from the camera and the focal length of the lens in relation to the size of the digital sensor or film.
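A rough worked example shows how these factors combine. The sketch below is an approximation of our own, not from the article; it uses the thin-lens small-angle relation, in which image-plane blur is roughly focal length × subject travel / subject distance:

```python
# Estimate how far a moving subject smears across the sensor during an
# exposure. A subject at distance d moving at speed v perpendicular to the
# lens axis travels v*t during exposure t, and is imaged at magnification
# ~ f/d (small-angle approximation), giving blur ~ f * v * t / d.

def blur_on_sensor_mm(focal_length_mm: float, subject_speed_m_s: float,
                      exposure_time_s: float, distance_m: float) -> float:
    subject_travel_m = subject_speed_m_s * exposure_time_s
    magnification = focal_length_mm / (distance_m * 1000.0)
    return subject_travel_m * 1000.0 * magnification

# A runner (5 m/s) 10 m away with a 50 mm lens: essentially sharp at
# 1/500 s, visibly blurred at 1/30 s.
print(f"{blur_on_sensor_mm(50, 5, 1/500, 10):.3f} mm")  # ~0.05 mm
print(f"{blur_on_sensor_mm(50, 5, 1/30, 10):.3f} mm")   # ~0.83 mm
```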
When slower shutter-speeds, in excess of about half a second, are used on running water, the water in the photo will have a ghostly white appearance reminiscent of fog. This effect can be used in landscape photography.
Zoom burst is a technique which entails the variation of the focal length of a zoom lens during a longer exposure. In the moment that the shutter is opened, the lens is zoomed in, changing the focal length during the exposure. The center of the image remains sharp, while the details away from the center form a radial blur, which causes a strong visual effect, forcing the eye into the center of the image.
The following list provides an overview of common photographic uses for standard shutter speeds.
1/16,000 s and less: The fastest speed available in APS-H or APS-C format DSLR cameras (Canon EOS-1D, Nikon D1, D1X, D1H, and the Nikon 1 J2).
1/12,000 s: The fastest speed available in any 35 mm film SLR camera (Minolta Maxxum 9xi).
1/8,000 s: The fastest speed available in production SLR cameras, and the fastest speed available in any full-frame DSLR or SLT camera. Used to take sharp photographs of very fast subjects, such as birds or planes, under good lighting conditions, with an ISO speed of 1,000 or more and a large-aperture lens.
1/4,000 s: The fastest speed available in consumer SLR cameras; also the fastest speed available in any leaf-shutter camera (such as the Sony Cyber-shot DSC-RX1). Used to take sharp photographs of fast subjects, such as athletes or vehicles, under good lighting conditions and with an ISO setting of up to 800.
1/2,000 s and 1/1,000 s: Used to take sharp photographs of moderately fast subjects under normal lighting conditions.
1/500 s and 1/250 s: Used to take sharp photographs of people in motion in everyday situations. 1/250 s is the fastest speed useful for panning; it also allows for a smaller aperture in motion shots, and hence for a greater depth of field.
1/125 s: This speed, and slower ones, are no longer useful for freezing motion. 1/125 s is used to obtain greater depth of field and overall sharpness in landscape photography, and is also often used for panning shots.
1/60 s: Used for panning shots, for images taken under dim lighting conditions, and for available-light portraits.
1/30 s: Used for panning slower-moving subjects and for available-light photography. Images taken at this and slower speeds normally require a tripod or an image-stabilized lens or camera to be sharp.
1/15 s and 1/8 s: These and slower speeds are useful for photographs other than panning shots where motion blur is employed for deliberate effect, or for taking sharp photographs of immobile subjects under bad lighting conditions with a tripod-supported camera.
1/4 s, 1/2 s and 1 s: Also mainly used for motion-blur effects and/or low-light photography, but only practical with a tripod-supported camera.
B (bulb) (fraction of second to several hours): Used with a mechanically fixed camera in astrophotography and for certain special effects.
Cinematographic shutter formula
Motion picture cameras used in traditional film cinematography employ a mechanical rotating shutter. The shutter rotation is synchronized with film being pulled through the gate, hence shutter speed is a function of the frame rate and shutter angle.
Where E = shutter speed (reciprocal of exposure time in seconds), F = frames per second, and S = shutter angle in degrees:

E = 360 × F / S, for E in reciprocal seconds
With a traditional shutter angle of 180°, film is exposed for 1/48 second at 24 frame/s. To avoid the effect of light interference when shooting under artificial lights, or when shooting television screens and computer monitors, a 1/50 s (172.8°) or 1/60 s (144°) shutter is often used.
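As a minimal illustration of the formula above (our own sketch, with an assumed helper name), the conversion from shutter angle to exposure time is just the angle's fraction of the frame interval:

```python
# Exposure time is (S / 360) of the frame interval 1/F, i.e. t = S / (360 * F).

def exposure_time_s(frame_rate_fps: float, shutter_angle_deg: float) -> float:
    return shutter_angle_deg / (360.0 * frame_rate_fps)

print(exposure_time_s(24, 180.0))   # 1/48 s, the traditional film look
print(exposure_time_s(24, 172.8))   # 1/50 s, for 50 Hz lighting
print(exposure_time_s(24, 144.0))   # 1/60 s, for 60 Hz lighting
```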
Electronic video cameras do not have mechanical shutters and allow setting shutter speed directly in time units. Professional video cameras often allow selecting shutter speed in terms of shutter angle instead of time units, especially those that are capable of overcranking or undercranking.
Choking
https://en.wikipedia.org/wiki/Choking

Choking, also known as foreign body airway obstruction (FBAO), is a phenomenon that occurs when breathing is impeded by a blockage inside the respiratory tract. An obstruction that prevents oxygen from entering the lungs results in oxygen deprivation. Although oxygen stored in the blood and lungs can keep a person alive for several minutes after breathing stops, choking often leads to death.
Around 4,500 to 5,000 choking-related deaths occur in the United States every year. Deaths from choking most often occur in the very young (children under three years old) and in the elderly (adults over 75 years). Foods that can adapt their shape to that of the pharynx (such as bananas, marshmallows, or gelatinous candies) are more dangerous. Various forms of specific first aid are used to address and resolve choking.
Choking is the fourth leading cause of unintentional injury death in the United States. Many episodes go unreported because they are brief and resolve without needing medical attention. Of the reported events, 80% occur in people under 15 years of age, and 20% occur in people older than 15 years of age. Worldwide, choking on a foreign object resulted in 162,000 deaths (2.5 per 100,000) in 2013, compared with 140,000 deaths (2.9 per 100,000) in 1990.
Signs and symptoms
Choking victims may present very subtly, especially in the setting of long term foreign body aspiration. Cough is seen in 80% of foreign body aspiration cases, and shortness of breath is seen in 25%. People may be unable to speak, attempt to use hand signals to indicate they are choking, attempt to force vomiting, or clutch at their throat.
History of episode
An observed or recalled episode of choking, with sudden onset of any of the below respiratory and skin signs and symptoms while eating or handling small objects, is seen in around 90% of choking episodes. Initial episodes typically last seconds to several minutes, but can be followed by symptom improvement that can be mistaken as resolution.
Respiratory
Initial respiratory symptoms can include involuntary coughing, gurgling, gagging, shortness of breath, labored breathing, or wheezing. Children often present with excessive drooling and stridor (high-pitched breathing sounds). The classic triad of choking symptoms in children is coughing, wheezing, and diminished breath sounds; however, a 10-year review showed that this grouping of symptoms was present together in only about 60% of patients.
Loss of consciousness may occur if breathing is not restored. In the setting of lower airway aspiration, patients may develop pneumonia like symptoms such as fever, chest pain, foul smelling sputum, or blood in sputum (hemoptysis). In the case of long term foreign body aspiration, patients may present with signs of lobar pneumonia or pleural effusion.
The time a choking victim can survive without brain damage varies, but brain damage typically begins when the victim has been without air for approximately three minutes. Death can occur if breathing is not restored within six to ten minutes, depending on the person. However, life can be extended by using cardiopulmonary resuscitation on unconscious victims of choking (see more details further below).
Skin
The face can turn blue (cyanosis) from lack of oxygen if breathing is not restored. Cyanosis may also be seen on the fingertips. In a healthy child or adult this sign is highly specific for choking, but it is observed in only 15–20% of choking episodes.
Causes
Choking occurs when a foreign body blocks the airway. This obstruction can be located in the pharynx, the larynx, trachea, or lower respiratory tract. The blockage can be either partial (insufficient air passes through to the lungs) or complete (complete blockage of airflow).
Foods that are small, round, or hard pose a high risk of choking. Examples include hard candy, chunks of cheese or hot dogs, nuts, grapes, marshmallows, and popcorn.
Among children, the most common causes of choking are food, coins, toys, and balloons. In one study, peanuts were the most common object found in the airway of children evaluated for suspected foreign body aspiration. Small, round non-food objects such as balls, marbles, toys, and toy parts are also associated with a high risk of choking death because of the potential to completely block a child's airway. Children younger than age three are especially at risk of choking because their chewing habits are not fully developed and because they tend to insert objects into their mouths as they explore the environment. Because a child's airway is smaller in diameter than an adult's, smaller objects can more often cause airway obstruction in children. Additionally, infants and young children generate a less forceful cough than adults, so coughing may not be as effective in relieving airway obstruction.
Risk factors of foreign body airway obstruction for people of any age include the use of alcohol or sedatives, procedures involving the oral cavity or pharynx, oral appliances, or medical conditions that cause difficulty swallowing or impair the cough reflex. Conditions that can cause difficulty swallowing and/or impaired coughing include neurological conditions such as stroke, Alzheimer's disease, or Parkinson's disease. In older adults, risk factors also include living alone, wearing dentures, and having difficulty swallowing. Children and adults with neurological, cognitive, or psychiatric disorders are at an increased risk of choking and may experience a delay in diagnosis because there may not be a known history of a foreign body entering the airway.
Choking on food is only one type of airway obstruction; others include blockage due to tumors, swelling and inflammation of the airway tissues (from organic foreign bodies or another cause), and compression of the laryngopharynx, larynx, or trachea in strangulation. Foreign bodies can also enter the respiratory tract through the chest wall, such as in the setting of a gunshot injury.
Diagnosis
Recognition and diagnosis of choking primarily involves identifying signs and symptoms such as coughing and wheezing (see Signs and symptoms). Immediate recognition of the symptoms is important, but because some episodes are brief, diagnosis within the first 24 hours occurs in only 50–60% of cases.
After the initial episode, choking can lead to an obstruction of the airway that prompts further diagnostic steps. For choking episodes that require emergent evaluation by a doctor, several tools can be used for diagnosis, each with their advantages and drawbacks.
Imaging and visualization methods
Bronchoscopy
According to the American Heart Association, bronchoscopy is a reliable method used to visualize the cause of choking when not resolved via oxygen and supportive care. Bronchoscopy also is a crucial tool in foreign body removal after supportive care has been provided and the person who is choking is stable. However, bronchoscopy is an invasive form of imaging and intervention in comparison to the below diagnostic tools, and requires sedation to perform.
X-ray
An X-ray uses high-frequency electromagnetic radiation to visualize the body. In the case of choking, a chest X-ray is obtained to visualize the lungs and upper airway. However, many objects are radiolucent and do not show up on X-ray; only about 10% of objects are radio-opaque and can be visualized this way. X-rays are more accessible than other imaging modalities but expose a person to radiation. In cases where X-ray is inconclusive, fluoroscopy, a real-time X-ray image (sometimes referred to as an X-ray movie) used to view breathing and coughing, may be able to demonstrate radiolucent or smaller foreign bodies.
Computerized tomography (CT)
A CT scan uses a rotating X-ray tube to build a 3D image from 2D X-ray images of multiple cross-sections. Radiolucent objects can be better captured on CT than on X-ray. Additionally, modern imaging-analysis software allows for airway reconstruction following a chest CT, creating a model of the airway network in the lungs that can better visualize the exact location of a foreign body. Since a CT scan is composed of multiple X-rays, the exposure to radiation is significantly greater.
Magnetic resonance imaging (MRI)
An MRI scan uses radio-frequency pulse under a magnetic field to create a high-resolution image of the body. MRIs can detect foreign bodies with higher accuracy than X-ray or CT. MRI does not expose the person to radiation. Drawbacks of MRI include claustrophobia and high cost. For children, sedation may be required to undergo MRI imaging, which is an increased risk when the airway is already potentially compromised.
Treatment
Airway management is used to restore a person's ventilation; it consists of severity assessment and procedural planning, and may involve multiple treatment modalities to restore the airway.
Treatments will vary based on severity and stage of airway blockage. In basic airway management, treatment generally consists of anti-choking first aid techniques, such as the Heimlich maneuver. In advanced airway management, complex clinical methods are used.
Basic treatment (first aid)
Basic treatment of choking includes several non-invasive techniques to help remove foreign bodies from the airways.
General strategy: "five and five"
For a conscious choking victim, most institutions, such as the American Heart Association, the American Red Cross and the NHS, recommend the same general first-aid protocol: encouraging the victim to cough, followed by hard back slaps (as described below). If these attempts are not effective, the procedure continues with abdominal thrusts (the Heimlich maneuver), or with chest thrusts if the victim cannot receive abdominal pressure (as described below).
If none of these techniques is effective, protocols of various institutions recommend alternating a series of back slaps with a series of thrusts (abdominal or chest, depending on the victim), five of each technique, and repeating ("five and five"). Some anti-choking devices can also be used to resolve choking.
This procedure has modifications for infants (babies under 1 year old), for people whose abdomen cannot receive pressure (such as pregnant women or very obese people), for disabled victims in wheelchairs, for victims who are lying in bed and unable to sit up, and for victims who are lying on the floor and unable to sit up.
In scenarios where first aid is not resolving the choking, it is necessary to call emergency medical services, but first aid should be continued until they arrive.
Choking can change the colour of the victim's face due to lack of oxygen. If the victim loses consciousness and falls to the ground, it is recommended to avoid panic and begin the appropriate anti-choking resuscitation for unconscious victims or for unconscious babies (under 1 year old).
Each of the techniques in the first-aid protocol against choking is detailed below:
Cough
If the choking victim is conscious and can cough, the American Red Cross and the Mayo Clinic recommend encouraging them to stay calm and continue coughing freely.
Back blows (back slaps)
Many associations, including the American Red Cross and the Mayo Clinic, recommend the use of back blows (back slaps) to aid a choking victim. This technique is performed by bending the choking victim forward as much as possible, even trying to place their head lower than the chest, to prevent the blows from driving the object deeper into the throat (a rare but possible complication). The bend should be at the back, without excessively bending the neck, and it is helpful for one hand to support the victim's chest. The back blows are then delivered as forceful slaps with the heel of the hand on the victim's back, between the shoulder blades.
The back slaps create pressure behind the blockage, helping to expel the foreign object. In some cases, the physical vibration of the action may cause enough movement to clear the airway.
Abdominal thrusts (Heimlich maneuver)
Abdominal thrusts are performed with the rescuer embracing the belly of the choking victim from behind. The rescuer then closes their dominant hand, grasps it with the other hand, and presses forcefully on the area between the chest and the belly button of the victim, in an inward-and-upward direction. This method tries to create enough upward pressure to expel the object obstructing the airway. The force is not directed against the ribs, to avoid breaking them. If the first thrust does not resolve the choking, it can be repeated several times.
The use of abdominal thrusts is not recommended for infants under 1 year of age due to the risk of causing injury, so there are adaptations for babies (see more details further below); however, a child too big for the babies' adaptations requires normal abdominal thrusts, scaled to the size of the body. Abdominal thrusts should also not be used when the victim's abdomen cannot receive them, as in pregnancy or excessive size; in these cases, chest thrusts are advised (see more details further below).
Although it is a well-known choking intervention, the Heimlich maneuver is backed by limited evidence and unclear guidelines. Its use has saved many lives, but it can produce deleterious consequences if not performed correctly, including rib fracture, perforation of the jejunum, and diaphragmatic herniation, among others.
Chest thrusts
When abdominal thrusts cannot be performed on the victim (because of serious injury, pregnancy, or a belly too large for the rescuer to perform the technique effectively), chest thrusts are advised instead.
Chest thrusts are performed with the rescuer embracing the chest of the choking victim from behind. The rescuer closes their dominant hand and grasps it with the other hand; several kinds of fist are possible, and any of them is valid if it can be placed on the victim's chest without a knuckle digging in too painfully. Keeping the fist with both hands, the rescuer presses it forcefully inwards on the lower half of the breastbone (sternum). The pressure is not focused on the very endpoint of the sternum (the xiphoid process), to avoid breaking it. When the victim is a woman, the zone of pressure would normally be above the level of the breasts. If the first thrust does not resolve the choking, it can be repeated several times.
Anti-choking devices
Since 2015, several anti-choking devices have been developed and released to the market. They are based on a mechanical vacuum effect, without a power source. Most use an attached mask to create a vacuum at the patient's nose and mouth. The current models of anti-choking devices are quite similar: direct plunger tools (LifeVac and Willnice) and a vacuum syringe (a backward syringe) that also keeps the tongue in place by inserting a tube into the mouth (Dechoker). All three have received certification and have been reported to be effective in real cases. Other mechanical models are in development, such as Lifewand, which creates a vacuum by direct pressure against the patient's face. However, these products have not been well studied in clinical trials or pre-hospital settings, and the literature is relatively sparse given the challenges in trial design.
A 2020 systematic review of the effectiveness of the three devices listed discovered "a more detailed review of the studies demonstrated a very low certainty of evidence for its use", and concluded that "there are many weaknesses in the available data and few unbiased trials that test the effectiveness of anti-choking suction devices resulting in insufficient evidence to support or discourage their use. Practitioners should continue to adhere to guidelines authored by local resuscitation authorities which align with ILCOR recommendations."
As of October 22, 2024, the American Red Cross has updated its guidelines to include anti-choking devices, highlighting the LifeVac for its effectiveness in clearing airway passages. Scientific analyses of the LifeVac reported a dislodgement rate of 94% on the first attempt, 99.6% on the second attempt, and 100% on the third attempt, and so far no known side effects have been attributed to the device in these studies. The evaluation, conducted by the American Red Cross Scientific Advisory Council, reviewed PubMed records from September 2019 through March 2023 and identified nearly 4,000 related documents, led most significantly by the paper "LifeVac: A Novel Apparatus to Resuscitate a Choking Victim".
Some anti-choking devices like Act Fast Anti Choking Trainer are used as training devices by healthcare providers as well as schools in CPR training courses.
Unconscious victims
A choking victim who becomes unconscious must be caught gently before falling and placed lying face-up on a surface that is firm enough (if the victim is laid on the floor, it is recommended to place a layer of something underneath).
A rescuer can then ask for any available anti-choking device and use it on the unconscious victim. The obstruction may then be dislodged but remain in the mouth, requiring manual removal. If the unconscious victim still cannot breathe, or is in cardiac arrest, normal cardiopulmonary resuscitation (CPR) must be performed, as described below, alternating 30 compressions with two rescue breaths.
Emergency medical services must be called, if this has not already been done.
It is also convenient for a rescuer to ask for a nearby defibrillator (an AED, as those devices are very common today), in case it becomes necessary to treat the victim's heart.
In any case, a choking victim who is already unconscious needs to receive, without further delay, anti-choking cardiopulmonary resuscitation (CPR) for unconscious adults and children. This is not valid for infants less than one year old, who require a specially adapted CPR for unconscious babies (described further below).
The anti-choking cardiopulmonary resuscitation (CPR) for unconscious adults and children is quite similar to any other CPR, but with some modifications:
In the first step, a series of 30 chest compressions is applied to the lower half of the sternum (the bone that runs along the middle of the chest from the neck towards the belly), at a rhythm of approximately two per second.
After that series, the rescuer looks for the obstructing object. If it is visible, the rescuer tries to extract it, usually with a finger sweep of the mouth. Normally the object will be a food bolus (not to be confused with the epiglottis, the cartilaginous flap of the throat). A deeper, non-visible object can also be extracted, always carefully: using the fingers to grasp it, or lifting the victim's chin to form a straight path to the throat while the victim is face-up (face-down if the victim's tongue obstructs too much, or on a side with a support under the head), and then grasping or sweeping the stuck object with tools such as thin kitchen tweezers, scissors (used with care), forks or spoons (introduced handle-first), or even a toothpick if other instruments are too big. However, current protocols do not recommend extracting an obstructing object that is not visible (a blind extraction), because of the risk of accidentally pushing it deeper, and because in some cases the compressions may themselves move the object outwards. If a removal attempt is taking too long, it may need to be alternated with further chest compressions, without hindering the extraction. Whether or not the object has been found and removed in this step, the CPR procedure must continue to the next step, and must go on until the victim can breathe unaided or emergency medical services arrive.
In the next step of the CPR, the rescuer gives a rescue breath, pinching the victim's nose and puffing air into the mouth. Rescue breaths will usually fail while the obstructing object is blocking the entry of air; even so, it is recommended to tilt the victim's head up and down to reposition it, trying to open a passage for the air, and then to give another rescue breath. After the rescue breaths, the resuscitation returns to the 30 chest compressions, in a cycle that repeats continually until the victim regains consciousness and breathes, or until the object is extracted but defibrillation is needed to resolve a cardiac arrest (read below).
Defibrillation can also be needed, because an unconscious victim of choking can suffer a cardiac arrest at any moment, from several possible causes. It is therefore convenient to ask around for a defibrillator (AED), to attempt defibrillation on a victim who remains in cardiac arrest after the stuck object has been extracted (and only after that). These defibrillators are easy to use, as they give their instructions as voice messages.
Finger sweeping
It is crucial to avoid blindly sweeping the airway without direct visualization; in fact, such procedures are advised only in more controlled environments, such as an operating room. In unconscious choking victims, the American Medical Association previously advocated sweeping the fingers across the back of the throat to attempt to dislodge airway obstructions. Many modern protocols suggest that other treatment modalities are superior. Red Cross procedures likewise advise rescuers not to perform a finger sweep unless an object can be clearly seen in the victim's mouth, to avoid driving the obstruction deeper into the airway. Other protocols suggest that a conscious patient will be able to remove the foreign object themselves, and that an unconscious patient should be placed in the recovery position so that gravity drains fluids out of the mouth rather than down the trachea. A finger sweep also risks causing further harm (inducing vomiting, for instance). No studies have examined the usefulness of the finger sweep technique when there is no visible object in the airway; recommendations for its use have been based on anecdotal evidence.
Particular cases
Infants (babies under one year old)
The majority of choking injuries and fatalities occur in children aged 0–4, highlighting the importance of widespread dissemination of the appropriate anti-choking techniques for these age groups. Indeed, increased parental education has been shown to decrease choking rates among children.
For infants under one year old, the American Heart Association recommends adapted procedures. The child's body size is the most important factor in determining the correct anti-choking technique, so the normal first aid techniques against choking should be used for children who are too large for the infant procedures (or, less appropriately, if the rescuer is unable to perform the infant techniques).
First aid for choking infants alternates a cycle of special back blows (five back slaps) with special chest thrusts (five adapted chest compressions). In the back blows maneuver, the rescuer slaps the baby's back; it is recommended that the baby receive the slaps while leaning slightly head-down on an incline. There are several ways to achieve this:
In a widely taught modality, the rescuer sits on a seat with the baby and supports the baby with one forearm and that arm's hand. The baby's head must be held carefully with that hand (approximately by the face) and kept in a normal position, facing forward, not tilted. The baby's body can then be leaned forward, slightly head-down along the rescuer's thighs, to receive the slaps.
As an easier alternative, the rescuer can sit on a bed or sofa, or even the floor, holding the baby. The rescuer then supports the baby's body on their own lap and leans the baby slightly head-down to the right or left of the lap. The baby's head must stay in a normal position, facing forward, not tilted, and the baby's chest should always be supported against something. The rescuer then slaps the baby's back.
If the rescuer cannot sit down, the maneuver can at least be attempted at a low height and over a soft surface. The rescuer supports the baby with a forearm and that arm's hand, holding the baby's head carefully with that hand (approximately by the face, keeping the head in a normal position, facing forward, not tilted). The baby's body is leaned head-down in that position to receive the slaps. Rescuers who cannot do any of this (such as some rescuers with disabilities) can still attempt normal back blows: supporting the baby's chest with one hand, bending the baby's body forward, and giving firm slaps with the other hand.
In the chest thrusts maneuver, the baby is placed lying face up on a surface (which can be the rescuer's thighs, lap, or forearm). The rescuer then performs the compressions by pressing with only two fingers on the lower half of the sternum, the bone that runs down the middle of the chest (on the part nearest the belly). Abdominal thrusts are not recommended in children less than one year old because they can cause liver damage.
The back blows and chest thrusts are alternated in cycles of five back blows and five chest compressions until the object comes out of the infant's airway or until the infant becomes unconscious.
If the choking is not resolved despite these attempts, it is vital that somebody call emergency medical services and that first aid continue until they arrive. An infant may soon fall unconscious, at which point an anti-choking resuscitation for infants is required (see next).
Unconscious infants
An unconscious infant must be placed face-up on a firm, horizontal surface (such as the floor). The baby's head must be kept straight, facing forward, because tilting it too far backwards can close off the trachea in infants.
A rescuer can then ask for any of the known anti-choking devices and try it on the unconscious baby, though the infant's small size can cause difficulties. The obstruction may be dislodged but remain in the mouth, in which case it must be removed manually. If the baby still cannot breathe, or is in cardiac arrest, the rescuer must perform a normal cardiopulmonary resuscitation (CPR), as described below, alternating only the 30 compressions and the 2 rescue breaths.
Emergency medical services must be called, if this has not already been done.
It is also advisable for a rescuer to ask bystanders for a defibrillator (an AED, as these devices are widely available today), in case it becomes necessary to treat the baby's heart.
Until emergency services arrive, the American Heart Association recommends starting, without further delay, an anti-choking cardiopulmonary resuscitation (CPR) adapted to infants less than one year old (described below). It is a resuscitation cycle that alternates compressions and rescue breaths, as in normal CPR, but with some differences:
The rescuer begins with 30 compressions, pressing with only two fingers on the lower half of the sternum, the bone that runs down the middle of the chest (on the part nearest the belly), at a rate of roughly two per second.
At the end of the round of compressions, the rescuer looks into the mouth for the obstructing object and, if it is visible, tries to extract it (usually with a finger sweep). The rescuer must not confuse a foreign object with the epiglottis, the cartilaginous flap of the throat. It is possible to try to extract an object without seeing it, always carefully, grasping it with the fingers or using a toothpick (almost any other tool would be too wide for a baby); but current protocols do not recommend extracting an object that is not visible (a blind extraction), because of the risk of accidentally pushing it deeper, and because in some cases the compressions themselves can move the object out. A rescuer who already knows that the choking object is a bag (or similar) does not need to see it before trying to extract it, because there is no risk of pushing it much deeper and it is easy to locate carefully by touch. If an attempted removal takes too long, it may have to be alternated with the chest compressions, without abandoning the extraction. Whether or not the object is extracted in this step, the CPR procedure must continue to the next action, and carry on until the baby can breathe unaided or emergency medical services arrive.
In the next step of the CPR, the rescuer gives a first rescue breath, covering the baby's mouth and nose simultaneously with their own mouth and puffing air in. After that first rescue breath, it is recommended to tilt the baby's head down and back up, trying to open a passage for the air, then return it to an approximately straight position and give a second rescue breath. The rescue breaths usually fail while the object is still blocking the airway; in that case the rescuer simply continues with the next step. If the air does reach the lungs, the baby's chest will be seen rising. When a rescue breath gets through, the object has moved to an unknown position that leaves some open space, so it can be useful to make the next rescue breaths more gently, to avoid shifting the object into a blocking position again, and to blow more strongly in later breaths if the gentle ones fail. Babies' bodies are delicate and, when the airway is clear, only gentle blowing is needed to fill their lungs. The baby's colour should improve after some successful rescue breaths. After the rescue breaths, the rescuer returns to the 30 initial compressions, repeating the same resuscitation cycle continually until the choking baby regains consciousness and breathes normally, or until the object is extracted but defibrillation is needed to resolve a cardiac arrest (read below).
Defibrillation may also be needed, because an unconscious choking infant can suffer a cardiac arrest at any moment, from several possible causes. It is therefore advisable to ask bystanders for a defibrillator (AED), to attempt defibrillation on a baby who remains in cardiac arrest after the stuck object has been extracted (and only then). These defibrillators are easy to use, as they give their instructions as voice messages. One pad of the defibrillator (either one) is attached to the baby's chest, and the other pad to the baby's back.
Pregnant or obese people
Some choking victims cannot receive pressure on their bellies. For them, the American Heart Association recommends replacing the abdominal thrusts with chest thrusts.
These victims can include patients with serious abdominal injuries, pregnant women, and obese patients. In the case of obese victims, however, if the rescuer can wrap their arms fully around the victim's abdomen, the normal first aid against choking, with abdominal thrusts, can be applied (see details further above).
Chest thrusts are performed like abdominal thrusts, but with the fist placed on the lower half of the sternum, the vertical bone that runs down the middle of the chest, rather than on the abdomen. As a reference, in women the pressure zone for chest thrusts is normally above the breasts. The knuckles should be positioned so as not to cause unnecessary pain. Strong inward thrusts are then applied.
The rest of the first aid protocol is the same: first the victim is encouraged to cough freely and, if the victim cannot cough, series of chest thrusts are alternated with series of slaps on the back. The back slaps are applied in the normal way, bending the victim's back forward and supporting their chest with one hand.
If the choking remains unresolved, calling emergency medical services is vital, but first aid should be continued until they arrive.
When victims who cannot receive abdominal pressure become unconscious, they need the same anti-choking cardiopulmonary resuscitation procedure that is used for other unconscious choking victims (see details further above).
As a precaution, when a person who cannot receive abdominal pressure (because of injury, pregnancy, or severe obesity) is present, placing an anti-choking device nearby can be useful.
Wheelchair-using victims
If the choking victim uses a wheelchair, the procedure is similar to that for other victims; the main difference is that the techniques are applied while the victim remains in the wheelchair.
Coughing should be encouraged first. If the victim cannot cough, it is recommended to alternate series of back blows and thrusts, as in other cases.
Back blows (back slaps) can be given after bending the victim's back well forward and supporting the victim's chest with one hand.
Abdominal and chest thrusts can also be used. To perform abdominal thrusts, the rescuer must get behind the wheelchair, and can then embrace the victim's abdomen from behind and above, leaning over the top of the wheelchair's backrest. If this is too difficult, the rescuer can crouch and embrace the victim's abdomen and the wheelchair's backrest together from behind. The rescuer then grasps one hand with the other, places them between the victim's chest and belly button, and applies sudden inward-and-upward pressure on that zone. If the victim cannot receive abdominal thrusts (because of serious abdominal injuries, pregnancy, or other reasons), chest thrusts must be used instead; they are also applied while the victim is in the wheelchair, with sudden inward pressure on the lower half of the breastbone (sternum), the bone that runs vertically down the middle of the chest. If the space is too narrow and cannot be widened, the abdominal or chest thrusts can be attempted after turning the victim to one side.
The rescuer should alternate series of back slaps and thrusts repeatedly until the choking is resolved, as with other victims.
If the choking remains unresolved, calling emergency medical services is vital, but first aid should be continued until they arrive.
If a wheelchair-using victim becomes unconscious, anti-choking cardiopulmonary resuscitation (CPR) should be performed, the same method used for other choking victims. The victim must be taken out of the wheelchair and laid face-up on a suitable surface (not too hard or too soft; a layer of padding can be placed between the floor and the victim). Until emergency services arrive, the rescuer should apply the anti-choking cardiopulmonary resuscitation for unconscious victims (see details further above).
As a preventive measure, patients with disabilities should not be placed in narrow, enclosed spaces at mealtimes, as more open spaces allow rescuers easier access. Placing an anti-choking device nearby is also a common safety measure in such environments.
On the bed but unable to sit up
If the choking victim is lying in bed, conscious but unable to sit up (as with some patients with disabilities or injuries), the first aid is the same, but performed after sitting the patient on the edge of the bed.
Before adjusting the patient's position, the rescuer encourages the victim to cough freely and as forcefully as they can manage; the victim will cough more easily when turned onto one side. If coughing is too difficult or impossible, the rescuer sits the victim on the bed's edge, to make coughing easier or to apply the anti-choking maneuvers (which are required if the victim cannot cough).
This can be achieved by grasping the victim's legs (behind the knees, or by the calves or ankles) and rotating them off the bed. The rescuer then sits the victim up on the edge, pulling by the shoulders or the arms (at the forearms or wrists). The anti-choking maneuvers can then be applied from behind: series of back slaps (after leaning the victim forward and supporting the chest with one hand) and series of abdominal thrusts (sudden inward-and-upward compressions on the part of the belly between the chest and the belly button). If the victim cannot receive abdominal thrusts (because of serious abdominal injuries, pregnancy, or other reasons), they must be replaced with chest thrusts (sudden inward pressure on the lower half of the breastbone, the bone that runs vertically down the middle of the chest, from the neck towards the belly).
If the rescuer cannot sit the victim up, chest or abdominal thrusts can be performed frontally while the victim lies on the bed, although they are less effective in that horizontal position. They are given by placing one hand on top of the other and pressing strongly with both: downward on the lower half of the breastbone (the sternum), or in a downward-and-forward direction on the area between the chest and the belly button.
If the choking remains unresolved, calling emergency medical services is vital, but first aid should be continued until they arrive.
When victims choking in bed become unconscious, they need the same anti-choking cardiopulmonary resuscitation procedure that is used for other unconscious choking victims (see details further above).
As for prevention, it is important to know that eating while lying in bed increases the risk of choking. When a person with a disability or injury is present, a common preventive measure is placing an anti-choking device within reach.
On the floor but unable to sit up
It is possible, though rare, for a choking victim to be lying on the floor while conscious; for example, someone whose disability makes it impossible to sit up or stand. In such a case, the first aid is the same, but applied after sitting the victim up on the floor.
Before adjusting the patient's position, the rescuer asks the victim to cough freely and forcefully; the victim will cough more easily when turned onto one side. If coughing is too difficult or impossible, the rescuer sits the victim up, to make coughing easier or to apply the anti-choking maneuvers (which are needed when the victim cannot cough).
The rescuer sits the victim up by pulling on the shoulders or the arms (at the forearms or wrists). Once the victim is sitting up, the rescuer can sit behind to apply the anti-choking maneuvers: back slaps (after bending the victim's back well forward and supporting the chest with one hand) and abdominal thrusts (sudden inward-and-upward compressions on the part of the belly between the chest and the belly button). If the victim cannot receive abdominal thrusts properly (as with serious abdominal injuries or pregnancy), they must be replaced with chest thrusts (sudden inward pressure on the lower half of the breastbone, the bone that runs vertically down the middle of the chest, from the neck towards the belly).
In some situations it is impossible to sit the victim up; the rescuer can then attempt the thrusts from the front, on the victim's abdomen or chest (although less effective in this position, they may be the only option and are therefore worth trying). They are given by placing one hand on top of the other and pressing strongly with both: downward on the lower half of the breastbone (the sternum), or downward and forward on the abdomen (between the chest and the belly button).
If the choking remains unresolved, calling emergency medical services is vital, but first aid should be continued until they arrive.
If the victim becomes unconscious, the same anti-choking cardiopulmonary resuscitation procedure used for other unconscious choking victims is needed (see details further above).
As for prevention, it is worth recalling the practice of placing an anti-choking device near people with disabilities.
Seizure victims
Seizures can occur for many reasons, but are common in those diagnosed with epilepsy. During a seizure, victims may experience throat constriction while conscious. The victim will not have control of their bodily functions and will need someone to create a safe area for them. One should clear a space where the victim can lie down and remove or loosen anything around their neck, then turn them onto their side to help them breathe and to avoid choking on saliva.
Self-treatment
First aid anti-choking techniques can be applied to oneself when nobody else is around to perform them. Options include carrying an approved anti-choking device (see above) or performing the techniques on oneself, mainly by hand:
The most widely recommended maneuver consists of positioning one's own abdomen over the edge of an object (usually a chair back, but an armchair, railing, or countertop can also work) and then driving the abdomen against the edge with sharp inward-and-upward thrusts. Depending on the situation, one or both fists can be placed between the edge and the belly to increase the pressure and make the maneuver easier, and it is also possible to let oneself fall on the edge to achieve more pressure. Another variation consists of pressing one's own belly with a suitable object, in an inward-and-upward direction.
Abdominal thrusts can also be self-applied with the hands alone: making a fist, grasping it with the other hand, placing both on the area between the chest and the belly button, bending the body forward, and making strong inward-and-upward compressions. One study concluded that self-administered abdominal thrusts were as effective as those performed by another person.
When abdominal thrusts are impossible for self-treatment (serious injuries, pregnancy, or obesity), the self-application of chest thrusts is recommended, although it is more difficult. This is done by leaning the body forward, making a fist, grasping it with the other hand, and making strong inward compressions with both on the lower half of the sternum (the bone that runs vertically down the middle of the chest). Keeping the chest relaxed makes the compressions easier to receive; another variation is to press inward at the same point with a suitable object, again with the chest relaxed.
Attempting to cough, when possible, can also help clear the airway.
Alternatively, multiple sources of evidence suggest that the head-down (inverse) position is a promising self-treatment. To perform this maneuver, the person places their hands on the floor and their knees on a raised seat (such as a bed, sofa, or armchair). Additional up-and-down movements can be attempted in this position.
Advanced treatment
There are many advanced medical treatments available to relieve choking or airway obstruction, including the removal of a foreign object with the help of a laryngoscope or bronchoscope. If an approved commercial anti-choking device is available nearby, its use may provide a blunter but quicker solution.
A cricothyrotomy may be performed as an emergency procedure when the stuck object cannot be removed. This intervention involves cutting a small opening in the patient's neck (between the thyroid and cricoid cartilages, down to the trachea) and inserting a tube through which air can be introduced, bypassing the upper airways. This procedure is usually only performed by someone with the relevant knowledge and surgical skill, and only when the patient is already unconscious.
Prevention
The window for reacting to choking is quite brief, and choking can be lethal in the worst cases. For these reasons, it is best to prevent it from happening at all.
General prevention
Choking usually happens when mouthfuls that are too large or too numerous are swallowed after being badly chewed. To reduce this risk, food should be cut into moderately sized pieces and chewed thoroughly before swallowing. Whenever a food can be chewed, it should be chewed, no matter what it is, even if it is very soft or gelatinous (such as creams, jellies, and soft desserts).
The most dangerous foods for choking are dry, doughy, or elongated ones. It is useful to keep a liquid at hand, to drink and help finish swallowing before an obstruction becomes complete.
To swallow correctly, the neck should be in a normal position, with the head facing forward, while sitting or standing (not lying down or reclining too far).
Distraction and absent-mindedness increase the risk of choking, as when one laughs or does something else at the same time. For the same reason, eating while drowsy (not completely awake) also increases the risk, as does being under the influence of alcohol, drugs, or medications that affect perception or reaction. Eating mouthfuls requires some care.
It is advisable to place anti-choking devices in public sites, at events, in risk areas, and in homes, to deal with choking incidents that may occur there.
Prevention for babies and children
All young children require care when eating, and they must learn to chew their food completely to avoid choking. Feeding them while they are running, playing, or laughing increases the risk of choking, so caregivers must supervise children while they eat or play.
Pediatricians and dentists can provide information on various age groups to parents and caregivers about which food and toys are appropriate to prevent choking.
The American Academy of Pediatrics recommends waiting until 6 months of age before introducing solid foods to infants. Caregivers should avoid giving children younger than 5 years old foods that pose a high risk of choking, such as hot dog pieces, bananas, cheese sticks, cheese chunks, hard candy, nuts, grapes, marshmallows, or popcorn. Later, when children are accustomed to these foods, they should be served cut into small pieces. Foods such as hot dogs, bananas, or grapes are usually split lengthwise, sliced, or both (with slicing providing most of the safety benefit for many elongated foods).
Children readily put small objects into their mouths (deflated balloons, marbles, small parts, buttons, coins, button batteries, etc.), which can lead to choking. A particularly difficult obstruction for babies is choking on deflated balloons (including condoms) or plastic bags; this includes nappy sacks, used for wrapping dirty diapers, which are sometimes dangerously left near babies. To prevent children from swallowing things, precautions should be taken to keep dangerous objects out of their reach; small children must be supervised closely and taught not to put things into their mouths. Toys and games may indicate on their packaging the ages for which they are safe. In the US, children's toy and product manufacturers are required by law to apply appropriate warning labels to their packaging, but resold toys may not have them. Caregivers can help prevent choking by considering a toy's features (such as size, shape, consistency, and small parts) before giving it to a child. Children's products found to pose a choking risk can be taken off the market.
Parents, teachers, and other caregivers for children are advised to be trained in choking first aid and cardiopulmonary resuscitation (CPR).
Anticipatory guidance from pediatricians
As part of well-child visits, pediatricians provide education to parents and their children regarding development. These visits include anticipatory guidance, which gives parents and children advice for the primary prevention of disease and injury, including choking. For example, at 7–9 months of age children start to develop a pincer grasp, allowing them to pick up small objects, and the ability to place these objects in their mouths significantly increases choking risk.
Example anticipatory guidance for children 7–9 months old:
Infants should not be fed while moving, such as riding in a car or stroller; they should be sitting upright and remain still.
Infants should be supervised when feeding, as should children younger than 3 years old.
Infants will try to feed themselves. Foods such as grapes, popcorn, carrots, nuts, and hard candies should be avoided, and difficult-to-swallow foods like peanut butter and marshmallows should be given with caution.
Small toys such as marbles, balls, and balloons should not be given to infants or to children younger than 3 years old.
Regulations for children in the United States
Several laws and commissions are aimed at preventing choking hazards in children. Formed in 1972, alongside the passage of the Consumer Product Safety Act, the U.S. Consumer Product Safety Commission (CPSC) regulates consumer products that may pose an "unreasonable risk" of injury to their users. The Consumer Product Safety Act allowed the CPSC to ban or place warnings on objects that could harm consumers. The Small Parts Test Fixture (SPTF), a cylinder measuring 2.25 inches long by 1.25 inches wide, determines whether a choking hazard warning must be placed on a product. Furthermore, the Consumer Product Safety Improvement Act of 2008 requires any advertisement or website selling a product to display choking hazard warnings.
According to a 1991 study, warning labels are an effective preventive measure against choking accidents. Items that contain many parts may include pieces that are considered choking hazards. Labels on children's toys may state recommended age ranges, and other items may carry a warning to parents to keep them out of the reach of children. Warning labels are clearly placed and written, usually including an obvious image.
While toys and other products are regulated, there are currently no Food and Drug Administration (FDA) regulations regarding food choking hazards.
Prevention for other groups at risk
Some population groups have a higher risk of choking, such as the elderly, people with physical or mental disabilities, people under the effects of alcohol or drugs, people taking medications that reduce the ability to salivate or react, patients with swallowing difficulties (dysphagia), suicidal individuals, people with epilepsy, and people on the autism spectrum. They may require more assistance with feeding, and it may be necessary to supervise them while they eat.
When the ability to eat deteriorates, problematic foods (such as hot dogs, sausages, bananas, or grapes) can be cut into slices and, additionally, split lengthwise (with slicing providing most of the safety benefit for many elongated foods). People who cannot chew properly should not be served hard food, and in cases where a person cannot eat safely, food can be given with feeding syringes.
People who have taken any medication that reduces saliva should not eat solid food until their salivation is restored.
Notable cases
| Biology and health sciences | Types | Health |
166842 | https://en.wikipedia.org/wiki/Safari%20%28web%20browser%29 | Safari (web browser) | Safari is a web browser developed by Apple. It is built into several of Apple's operating systems, including macOS, iOS, iPadOS and visionOS, and uses Apple's open-source browser engine WebKit, which was derived from KHTML.
Safari was introduced in Mac OS X Panther in January 2003. It has been included with the iPhone since the first-generation iPhone in 2007. At that time, Safari was the fastest browser on the Mac. Between 2007 and 2012, Apple maintained a Windows version, but abandoned it due to low market share. In 2010, Safari 5 introduced a reader mode, extensions, and developer tools. Safari 11, released in 2017, added Intelligent Tracking Prevention, which uses artificial intelligence to block web tracking. Safari 13 added support for Apple Pay, and authentication with FIDO2 security keys. Its interface was redesigned in Safari 15.
History and development
Background
Netscape Navigator rapidly became the dominant Mac browser after its 1994 release, and eventually came bundled with Mac OS. In 1996, Microsoft released Internet Explorer for Mac (IE), and Apple released the Cyberdog internet suite, which included a web browser. In 1997, Apple shelved Cyberdog, and reached a five-year agreement with Microsoft to make IE the default browser on the Mac, starting with Mac OS 8.1. Netscape continued to be preinstalled on all Macintosh systems. Microsoft continued to update IE for Mac, which was ported to Mac OS X DP4 in May 2000.
Conception
Apple introduced the Safari web browser on January 7, 2003. At the time, Steve Jobs called Safari "a turbo browser for Mac OS X." Apple created Safari for speed, calling it the fastest browser for the Mac; Jobs compared it to Internet Explorer, Netscape, and Chimera (later renamed Camino), showing that Safari was faster. Apple's second reason for creating Safari was innovation: the company wanted to make the best browser available. During development, several codenames were used, including "Freedom", "iBrowse", and "Alexander" (a reference to the conqueror Alexander the Great and an homage to the Konqueror web browser).
Safari 1
On January 7, 2003, at Macworld San Francisco, Apple CEO Steve Jobs announced Safari, based on WebKit, the company's internal fork of the KHTML browser engine. Apple released the first beta version, exclusively for Mac OS X, the same day. Several official and unofficial beta versions followed until version 1.0 was released on June 23, 2003. On Mac OS X v10.3, Safari was pre-installed as the system's default browser rather than requiring a manual download, as had been the case with previous Mac OS X versions. Its predecessor, Internet Explorer for Mac, was still included in 10.3 as an alternative.
Safari 2
In April 2005, engineer Dave Hyatt fixed several bugs in Safari; his experimental beta passed the Acid2 rendering test on April 27, 2005, making it the first browser to do so. Safari 2.0, released on April 29, 2005, was the sole browser offered by default in Mac OS X 10.4. Apple touted this version as delivering a 1.8x speed boost over version 1.2.4, but it did not yet include the Acid2 bug fixes. Those changes were initially unavailable to end users unless they compiled the WebKit source code themselves or ran one of the nightly automated builds available at OpenDarwin. Version 2.0.2, released on October 31, 2005, finally included the Acid2 bug fixes.
In June 2005, in response to criticism from KHTML developers over the lack of access to change logs, Apple moved the development source code and bug tracking of WebCore and JavaScriptCore to OpenDarwin, and also open-sourced WebKit. The source code for non-renderer aspects of the browser, such as its GUI elements, remained proprietary. Safari 2.0.4, the final stable version of Safari 2 and the last released exclusively for Mac OS X, was issued on January 10, 2006. It was only available through Mac OS X Update 10.4.4 and delivered fixes to layout and CPU usage issues, among other improvements.
Safari 3
On January 9, 2007, at Macworld San Francisco, Jobs announced that Safari 3 had been ported to the newly introduced iPhone as part of iPhone OS (later called iOS). The mobile version was capable of displaying full, desktop-class websites. At WWDC 2007, Jobs announced Safari 3 for Mac OS X 10.5, Windows XP, and Windows Vista. He ran a benchmark based on the iBench browser test suite comparing the most popular Windows browsers to Safari, and claimed that Safari had the fastest performance. The claim was later examined by a third party, Web Performance, which measured HTTP load times and verified that Safari 3 was indeed the fastest browser on the Windows platform for initial data loading over the Internet, though it was only negligibly faster than Internet Explorer 7 and Mozilla Firefox at loading static content from the local cache.
The initial Safari 3 beta for Windows, released on the day of its announcement at WWDC 2007, contained several bugs and a zero-day exploit that allowed remote code execution. Apple fixed the issues three days later, on June 14, 2007, in version 3.0.1. On June 22, 2007, Apple released Safari 3.0.2 to address further bugs, performance problems, and security issues. Safari 3.0.2 for Windows bundled some fonts that were missing in the browser but already installed on Windows computers, such as Tahoma, Trebuchet MS, and others. The iPhone was released on June 29, 2007, with a version of Safari based on the same WebKit rendering engine as the desktop version, but with a modified feature set better suited to a mobile device. The version number reported in its user agent string was 3.0, in line with contemporary desktop editions.
The first stable, non-beta version of Safari for Windows, Safari 3.1, was offered as a free download on March 18, 2008. In June 2008, Apple released version 3.1.2, which addressed a security vulnerability in the Windows version where visiting a malicious web site could force a download of executable files and execute them on the user's desktop. Safari 3.2, released on November 13, 2008, introduced anti-phishing features using Google Safe Browsing and Extended Validation Certificate support. The final version of Safari 3 was version 3.2.3, which was released on May 12, 2009, with security improvements.
Safari 4
Safari 4 was released on June 8, 2009. It was the first version to fully pass the Acid3 rendering test and the first to support HTML5. It incorporated the WebKit JavaScript engine SquirrelFish, which significantly enhanced the browser's script interpretation performance, by a reported 29.9x. SquirrelFish later evolved into SquirrelFish Extreme, also marketed as Nitro, with a reported 63.6x speed-up. A public beta of Safari 4 had been released on February 24, 2009.
Safari 4 used Cover Flow to display History and Bookmarks, and featured speculative loading, which pre-loaded the document information required to visit a website. The Top Sites feature displayed up to 24 thumbnails of frequently visited sites on startup. The desktop version of Safari 4 included a redesign similar to that of the iPhone. The update also brought many developer tool improvements, including the Web Inspector, CSS element viewing, JavaScript debuggers and profilers, offline tables, database management, SQL support, and resource graphs, in addition to CSS retouching effects, CSS canvas, and HTML5 content. On Windows, it replaced the initial Mac OS X-like interface with native Windows themes and native font rendering.
Safari 4.0.1 was released for Mac on June 17, 2009, and fixed Faces bugs in iPhoto '09. Safari 4 in Mac OS X v10.6 "Snow Leopard" had built-in 64-bit support, which made JavaScript load up to 50% faster. It also had native crash resistance, so that a crash in a plugin such as Flash Player would not affect other tabs or windows. Safari 4.0.4, the final version, was released on November 11, 2009, for both Mac and Windows and further improved JavaScript performance.
Safari 5
Safari 5 was released on June 7, 2010; its final Windows release was version 5.1.7. It featured a distraction-reducing Reader view and 30 percent faster JavaScript performance. It incorporated numerous developer tool improvements, including HTML5 interoperability and access to secure extensions. The progress bar was also re-added in this version. Safari 5.0.1 enabled the Extensions preference pane by default, rather than requiring users to enable it manually in the Debug menu.
Apple released Safari 4.1 concurrently with Safari 5, exclusively for Mac OS X Tiger; it included many of Safari 5's features but excluded Safari Reader and Safari Extensions. Apple released Safari 5.1 for both Windows and Mac on July 20, 2011, alongside Mac OS X 10.7 Lion; it was faster than Safari 5.0 and included the new Reading List feature. The company also announced Safari 5.0.6 for Mac OS X 10.5 Leopard, though the new features were not available to Leopard users.
Several HTML5 features were provided in Safari 5: it added support for full-screen video, closed captions, geolocation, EventSource, and a now-obsolete early variant of the WebSocket protocol. This fifth major version also added full-text history search and a new search engine option, Bing. Safari 5 included Reader, which displays web pages in a continuous view without advertisements, and a smarter address field with Domain Name System (DNS) prefetching, which automatically found links and looked up addresses so that new pages loaded faster. The Windows version additionally received graphics hardware acceleration. The blue inline progress bar returned to the address bar, alongside the spinning bezel and loading indicator introduced in Safari 4, and the Top Sites view gained a button to switch to full history search. Other features included the Extension Builder for developers of Safari Extensions and an improved Web Inspector. Safari 5 supports Extensions, add-ons that customize the web browsing experience, built using web standards such as HTML5, CSS3, and JavaScript.
Safari 6
Safari 6.0 was previously referred to as Safari 5.2, until Apple changed the version number at WWDC 2012. The stable release of Safari 6 coincided with the release of OS X Mountain Lion on July 25, 2012, and was integrated into the OS; as a result, it was no longer available for download from Apple's website or other sources. Apple released Safari 6 via Software Update for users of OS X Lion, but not for earlier OS X versions or for Windows. The company later quietly removed references and links to the Windows version of Safari 5, and Microsoft removed Safari from its browser-choice page.
On June 11, 2012, Apple released a developer preview of Safari 6.0 with a feature called iCloud Tabs, which synced open tabs across any iOS or OS X devices running the latest software. It added new privacy features, including an "Ask websites not to track me" preference and the ability for websites to send notifications to OS X 10.8 Mountain Lion users, though it removed RSS support. Safari 6 supported the Share Sheets capability in OS X Mountain Lion, with options to Add to Reading List, Add Bookmark, Email this Page, Message, Twitter, and Facebook. Tabs with full-page previews were added as well, along with minor performance improvements and support for additional CSS features. Various features were removed, including the Activity Window, the separate Download Window, and direct support for RSS feeds in the URL field and bookmarks. The separate search field was also no longer available as a toolbar configuration option; it was replaced by the smart search field, a combination of the address bar and the search field.
Safari 7
Safari 7 was announced at WWDC 2013 and brought a number of JavaScript performance improvements. It introduced an updated Top Sites view and a sidebar, Shared Links, and Power Saver, which paused unused plugins. Safari 7 for OS X Mavericks, and Safari 6.1 for Lion and Mountain Lion, were released along with OS X Mavericks at the special event on October 22, 2013.
Safari 8
Safari 8 was announced at WWDC 2014 and was released with OS X Yosemite. It included the WebGL JavaScript API, stronger privacy management, improved iCloud integration, and a redesigned interface. It was also faster and more efficient, with additional developer features including JavaScript Promises, CSS Shapes and Compositing markup, IndexedDB, Encrypted Media Extensions, and the SPDY protocol.
Safari 9
Safari 9 was announced at WWDC 2015 and shipped with OS X El Capitan. New features included audio muting, more options for Safari Reader, and improved AutoFill. Not all features were available on the previous OS X Yosemite.
Safari 10
Safari 10 shipped with macOS Sierra and was released for OS X Yosemite and OS X El Capitan on September 20, 2016. It had redesigned Bookmarks and History views, in which double-clicking focused on a particular folder. The update allowed Safari extensions to save content directly to services such as Pocket. Software improvements included better AutoFill from the user's Contacts card, a Web Inspector Timelines tab, and Reader support for in-line sub-headlines, bylines, and publish dates. This version remembers and re-applies zoom levels for websites, and legacy plug-ins were disabled by default in favor of HTML5 versions of websites. Recently closed tabs can be reopened via the History menu, by holding the "+" button in the tab bar, or with Shift-Command-T. When a link opens in a new tab, it is now possible to hit the back button or swipe to close it and return to the original tab. Debugging is supported in the Web Inspector. Safari 10 also included several security updates, including fixes for six WebKit vulnerabilities and issues related to Reader and Tabs. The first version of Safari 10 was released on September 20, 2016, and the last version (10.1.2) on July 19, 2017.
Safari 11
Safari 11 was released on September 19, 2017, for OS X El Capitan and macOS Sierra, ahead of macOS High Sierra's release, and was included with High Sierra. Safari 11 included several new features, most notably Intelligent Tracking Prevention, which aims to prevent cross-site tracking by placing limitations on cookies and other website data. Intelligent Tracking Prevention allowed first-party cookies to continue tracking browsing history, though with time limits; for example, first-party cookies from ad-tech companies such as Google/Alphabet Inc. were set to expire 24 hours after the visit.
Safari 12
Safari 12 was released for macOS Mojave on September 24, 2018, and was also made available for macOS Sierra and macOS High Sierra on September 17, 2018. It included several new features, such as icons in tabs, Automatic Strong Passwords, and Intelligent Tracking Prevention 2.0. Safari 12.0.1 was released on October 30, 2018, within macOS Mojave 10.14.1, and Safari 12.0.2 on December 5, 2018, with macOS 10.14.2. Support for developer-signed classic Safari Extensions was dropped, and this version was the last to support the official Extensions Gallery. Apple encouraged extension authors to switch to Safari App Extensions, which triggered negative feedback from the community.
Safari 13
Safari 13 was announced at WWDC 2019 on June 3, 2019. Safari 13 included several new features such as prompting users to change weak passwords, FIDO2 USB security key authentication support, Sign in with Apple support, Apple Pay on the Web support and increased speed and security. Safari 13 was released on September 20, 2019, on macOS Mojave and macOS High Sierra, and later shipped with macOS Catalina.
Safari 14
In June 2020, it was announced that macOS Big Sur would include Safari 14, which Apple claimed was more than 50% faster than Google Chrome. Safari 14 introduced new privacy features, including Privacy Report, which shows blocked content and privacy information on web pages; users also receive a monthly report on the trackers Safari has blocked. Extensions can be enabled or disabled on a site-by-site basis. Safari 14 introduced partial support for the WebExtension API used in Google Chrome, Microsoft Edge, Firefox, and Opera, making it easier for developers to port their extensions from those browsers to Safari. Support for Adobe Flash Player was dropped, three months ahead of Flash's end-of-life, and a built-in translation service allowed pages to be translated into another language. Safari 14 was released as a standalone update for macOS Catalina and Mojave on September 16, 2020. It added Ecosia as a supported search engine.
Safari 15
Safari 15 was released for iOS 15, iPadOS 15, macOS Big Sur, and macOS Catalina on September 20, 2021, and later shipped with macOS Monterey. It featured a redesigned interface that blended into the page background, along with tab groups. The iOS and iPadOS editions also gained a new home page and extension support. Starting with this update, Safari versions covered iOS and iPadOS as well, ending the separate iOS release cycle.
Safari 16
Safari 16 was released for iOS 16, macOS Monterey, and macOS Big Sur on September 12, 2022, and later shipped with macOS Ventura and iPadOS 16. It added support for non-animated AVIF and contained several bug fixes and feature refinements. Safari 16 also included shared tab groups, vertical tab support, synchronization of website settings between devices signed in to the same iCloud account, the ability to add backgrounds to the start page, new languages for built-in translation, built-in image translation, and new options for editing strong passwords. iOS 16.4 also introduced Web Push notifications.
Safari 17
Safari 17 was released in September 2023 with iOS 17, iPadOS 17, and macOS Sonoma. It includes a feature named "Profiles", which allows users to separate browsing sessions for different use cases; each profile has its own favorites bar, browsing history, extensions, tab groups, and cookies. As in iOS 16.4, Safari 17 supports web apps that can be added to the Dock. Cookies are copied into web apps, so users stay logged in to a web app if they are already logged in in Safari. Safari can also now read pages aloud via a new option in the navigation bar menu.
New privacy features include Private Browsing that locks when not in use, tracking-free URLs, and private relay locations based on country and time zone rather than a precise position.
Safari has also been adapted to Vision Pro with a new spatial UI, and Apple has redesigned the Develop menu for web developers.
Safari 17 added AV1 decoding support for devices with AV1 hardware decoders.
Safari 18
Safari 18 was released in September 2024 with iOS 18, iPadOS 18, macOS Sequoia and, for the first time, visionOS 2. Like Safari 15, it redesigns the interface, including but not limited to the start page and reader mode (now simply called Reader).
A new AI-powered feature, "Highlights", was introduced, which automatically detects relevant information on a page and highlights it while the user browses.
Other new features include a redesigned unified menu, now available on all versions of the browser (it was previously exclusive to iOS and iPadOS, along with the compact mode on macOS), and faster loading times.
iOS versions
Starting with iOS 15 and iPadOS 15, Safari ships with the same features and version names as the macOS version, ending the separate iOS release cycle.
Safari Technology Preview
Safari Technology Preview was first released alongside OS X El Capitan 10.11.4. Safari Technology Preview releases include the latest version of WebKit, with web technologies planned for future stable releases of Safari, so that developers and users can install the Technology Preview on a Mac, test those features, and provide feedback.
Safari Developer Program
The Safari Developer Program was a program dedicated to in-browser extension and HTML developers.
It allowed members to write and distribute extensions for Safari through the Safari Extensions Gallery. It was initially free, until it was incorporated into the Apple Developer Program at WWDC 2015, which costs $99 a year; the charge frustrated developers. With OS X El Capitan, Apple implemented Secure Extension Distribution to further improve security, automatically updating all extensions in the Safari Extensions Gallery.
Version compatibility
Features
Until Safari 6.0, Safari included a built-in web feed aggregator that supported the RSS and Atom standards. Current features include Private Browsing (a mode in which the browser retains no record of the user's web activity), the ability to archive web content in WebArchive format, the ability to email complete web pages directly from a browser menu, the ability to search bookmarks, and the ability to share tabs between all Mac and iOS devices running appropriate software versions via an iCloud account.
Web compatibility
In Safari's early years, it pioneered several HTML5 features that are now standard, such as the Canvas API.
In 2015, Safari was criticized for failing to keep pace with some modern web technologies.
Intelligent Tracking Prevention
In September 2017, Apple announced that it would use artificial intelligence (AI) to reduce the ability of advertisers to track Safari users as they browse the web. Cookies used for tracking are allowed for 24 hours, then disabled, unless the AI system judges that the user wants to keep the cookie. Major advertising groups objected, saying the change would reduce the free services supported by advertising, while other experts praised it.
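The broad shape of this 24-hour policy can be sketched in code. The following is only an illustration with hypothetical names (CookieRecord, cookieAllowed); Apple's actual classifier and storage internals are not public:

```typescript
// Illustrative sketch of a 24-hour tracking-cookie policy.
// Hypothetical types and names; not Apple's actual implementation.
interface CookieRecord {
  domain: string;
  setAt: number;                // epoch milliseconds when the cookie was set
  classifiedAsTracker: boolean; // verdict from an on-device classifier
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Decide whether a stored cookie may still be used for a request.
function cookieAllowed(cookie: CookieRecord, now: number): boolean {
  if (!cookie.classifiedAsTracker) return true; // ordinary cookies are unaffected
  return now - cookie.setAt < DAY_MS;           // tracker cookies expire after 24 h
}

// Example: a tracker cookie set 25 hours ago is no longer allowed.
const cookie: CookieRecord = {
  domain: "ads.example",
  setAt: Date.now() - 25 * 60 * 60 * 1000,
  classifiedAsTracker: true,
};
console.log(cookieAllowed(cookie, Date.now())); // -> false
```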
Plugin support
Apple used a remotely updated plug-in blacklist to prevent potentially dangerous or vulnerable plug-ins from running in Safari. Initially, Flash and Java content was blocked in some early versions of Safari. Since Safari 12, support for NPAPI plug-ins (except Flash) has been completely dropped, and Safari 14 finally dropped support for Adobe Flash Player.
WebExtension support
Beginning in 2018, Apple made technical changes to Safari's content blocking functionality which prompted backlash from users and developers of ad blocking extensions, who said the changes made it impossible to offer a similar level of user protection found in other browsers. Internally, the update limited the number of blocking rules which could be applied by third-party extensions, preventing the full implementation of community-developed blocklists. In response, several developers of popular ad and tracking blockers announced their products were being discontinued, as they were now incompatible with Safari's newly limited content blocking features. Beginning with Safari 13, popular extensions such as uBlock Origin no longer work with Safari.
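For context, Safari's declarative content blockers are driven by JSON rule lists of the kind sketched below. The filter pattern, domain, and the console output are illustrative only; the per-blocker cap on the number of rules (reportedly around 50,000 at the time) is what prevented large community blocklists from loading in full:

```typescript
// Minimal sketch of Safari's declarative content-blocker rule format.
// The URL pattern and resource types below are illustrative examples.
const rules = [
  {
    trigger: {
      "url-filter": "ads\\.example\\.com", // regex matched against request URLs
      "resource-type": ["script", "image"], // only act on scripts and images
    },
    action: { type: "block" }, // other actions include "block-cookies",
                               // "css-display-none", and "make-https"
  },
];

// A content blocker hands Safari this JSON; Safari caps how many rules
// it accepts, so community blocklists had to be truncated to fit.
console.log(JSON.stringify(rules, null, 2));
```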
iCloud sync
Safari can sync bookmarks, history, reading list, and tabs through iCloud. This happens by default if a user's Mac, iPhone or iPad is logged in to iCloud, but syncing can be disabled in the Settings app (on iOS and iPadOS) or System Settings (on Mac).
iCloud Tabs lets users see a list of their other devices' open tabs that have not been added to a tab group. On iOS and iPadOS, these iCloud Tabs are shown below the grid of open tabs. On the Mac, they are shown at the bottom of the Tab Overview, or in an optional iCloud Tabs toolbar item.
Tab Groups
Safari 15 added tab groups. These tab groups, and the tabs they contain, can be synced across devices; when a tab is opened in a tab group on one device, it is added to that tab group on all devices, without needing to manually open it through iCloud Tabs. macOS Ventura added Shared Tab Groups, which can be shared through iMessage. New tabs and closed tabs will sync for all participants, and a small thumbnail with users' profile pictures will be visible on the tab they are currently viewing.
Handoff
Safari supports the Handoff feature, which allows users to continue where they left off on another device.
Sidebar
The Safari sidebar was introduced in Safari 8 as a way to access Bookmarks, Reading List, and Shared Tabs. The sidebar got its biggest update in Safari 16, when it added support for vertical tabs. This allows users to see their tabs arranged vertically in addition to the horizontal tab view in the top Toolbar.
Visual Look Up
This feature allows users to quickly learn more about landmarks, works of art, and more by selecting an image or a photo. Users can also easily lift the subject of an image from Safari, remove its background, and paste it into other apps like Messages and | Technology | Browsers | null |
166868 | https://en.wikipedia.org/wiki/Exposure%20%28photography%29 | Exposure (photography) | In photography, exposure is the amount of light per unit area reaching a frame of photographic film or the surface of an electronic image sensor. It is determined by shutter speed, lens f-number, and scene luminance. Exposure is measured in units of lux-seconds (symbol lxs), and can be computed from exposure value (EV) and scene luminance in a specified region.
An "exposure" is a single shutter cycle. For example, a long exposure refers to a single, long shutter cycle to gather enough dim light, whereas a multiple exposure involves a series of shutter cycles, effectively layering a series of photographs in one image. The accumulated photometric exposure (Hv) is the same so long as the total exposure time is the same.
Definitions
Radiant exposure
Radiant exposure of a surface, denoted He ("e" for "energetic", to avoid confusion with photometric quantities) and measured in J/m², is given by
He = Ee · t
where
Ee is the irradiance of the surface, measured in W/m²;
t is the exposure duration, measured in s.
Luminous exposure
Luminous exposure of a surface, denoted Hv ("v" for "visual", to avoid confusion with radiometric quantities) and measured in lx·s, is given by
Hv = Ev · t
where
Ev is the illuminance of the surface, measured in lx;
t is the exposure duration, measured in s.
If the measurement is adjusted to account only for light that reacts with the photo-sensitive surface, that is, weighted by the appropriate spectral sensitivity, the exposure is still measured in radiometric units (joules per square meter), rather than photometric units (weighted by the nominal sensitivity of the human eye). Only in this appropriately weighted case does the H measure the effective amount of light falling on the film, such that the characteristic curve will be correct independent of the spectrum of the light.
Many photographic materials are also sensitive to "invisible" light, which can be a nuisance (see UV filter and IR filter), or a benefit (see infrared photography and full-spectrum photography). The use of radiometric units is appropriate to characterize such sensitivity to invisible light.
In sensitometric data, such as characteristic curves, the log exposure is conventionally expressed as log10(H). Photographers more familiar with base-2 logarithmic scales (such as exposure values) can convert using log2(H) = log10(H)/log10(2) ≈ 3.32 log10(H).
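As a quick numeric illustration of that base conversion (a minimal sketch; the function name is ours):

```python
import math

def log_exposure_base2(h_lux_seconds: float) -> float:
    """Convert a luminous exposure H (in lx*s) to a base-2 log scale.

    log2(H) = log10(H) / log10(2)  ~  3.32 * log10(H)
    """
    return math.log10(h_lux_seconds) / math.log10(2)

# Doubling H (one stop more exposure) adds exactly 1.0 on the base-2 scale:
print(log_exposure_base2(2.0) - log_exposure_base2(1.0))  # 1.0
```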
Optimum exposure
"Correct" exposure may be defined as an exposure that achieves the effect the photographer intended.
A more technical approach recognises that a photographic film (or sensor) has a physically limited useful exposure range, sometimes called its dynamic range. If, for any part of the photograph, the actual exposure is outside this range, the film cannot record it accurately. In a very simple model, for example, out-of-range values would be recorded as "black" (underexposed) or "white" (overexposed) rather than the precisely graduated shades of colour and tone required to describe "detail". Therefore, the purpose of exposure adjustment (and/or lighting adjustment) is to control the physical amount of light from the subject that is allowed to fall on the film, so that 'significant' areas of shadow and highlight detail do not exceed the film's useful exposure range. This ensures that no 'significant' information is lost during capture.
The photographer may carefully overexpose or underexpose the photograph to eliminate "insignificant" or "unwanted" detail; to make, for example, a white altar cloth appear immaculately clean, or to emulate the heavy, pitiless shadows of film noir. However, it is technically much easier to discard recorded information during post processing than to try to 're-create' unrecorded information.
In a scene with strong or harsh lighting, the ratio between highlight and shadow luminance values may well be larger than the ratio between the film's maximum and minimum useful exposure values. In this case, adjusting the camera's exposure settings (which only applies changes to the whole image, not selectively to parts of the image) only allows the photographer to choose between underexposed shadows or overexposed highlights; it cannot bring both into the useful exposure range at the same time. Methods for dealing with this situation include: using what is called fill lighting to increase the illumination in shadow areas; using a graduated neutral-density filter, flag, scrim, or gobo to reduce the illumination falling upon areas deemed too bright; or varying the exposure between multiple, otherwise identical, photographs (exposure bracketing) and then combining them afterwards in an HDRI process.
Overexposure and underexposure
A photograph may be described as overexposed when it has a loss of highlight detail, that is, when important bright parts of an image are "washed out" or effectively all white, known as "blown-out highlights" or "clipped whites". A photograph may be described as underexposed when it has a loss of shadow detail, that is, when important dark areas are "muddy" or indistinguishable from black, known as "blocked-up shadows" (or sometimes "crushed shadows", "crushed blacks", or "clipped blacks", especially in video). These terms are technical ones rather than artistic judgments; an overexposed or underexposed image may be "correct" in the sense that it provides the effect that the photographer intended. Intentionally over- or underexposing (relative to a standard or the camera's automatic exposure) is casually referred to as "exposing to the right" or "exposing to the left" respectively, as these shift the histogram of the image to the right or left.
Exposure settings
Manual exposure
In manual mode, the photographer adjusts the lens aperture and/or shutter speed to achieve the desired exposure. Many photographers choose to control aperture and shutter independently because opening up the aperture increases exposure, but also decreases the depth of field, and a slower shutter increases exposure but also increases the opportunity for motion blur.
"Manual" exposure calculations may be based on some method of light metering with a working knowledge of exposure values, the APEX system and/or the Zone System.
Automatic exposure
A camera in automatic exposure or autoexposure (usually initialized as AE) mode automatically calculates and adjusts exposure settings to match (as closely as possible) the subject's mid-tone to the mid-tone of the photograph. For most cameras, this means using an on-board TTL exposure meter.
Aperture priority (commonly abbreviated as A, or Av for aperture value) mode gives the photographer manual control of the aperture, whilst the camera automatically adjusts the shutter speed to achieve the exposure specified by the TTL meter. Shutter priority (often abbreviated as S, or Tv for time value) mode gives manual shutter control, with automatic aperture compensation. In each case, the actual exposure level is still determined by the camera's exposure meter.
Exposure compensation
The purpose of an exposure meter is to estimate the subject's mid-tone luminance and indicate the camera exposure settings required to record this as a mid-tone. In order to do this it has to make a number of assumptions which, under certain circumstances, will be wrong. If the exposure setting indicated by an exposure meter is taken as the "reference" exposure, the photographer may wish to deliberately overexpose or underexpose in order to compensate for known or anticipated metering inaccuracies.
Cameras with any kind of internal exposure meter usually feature an exposure compensation setting which is intended to allow the photographer to simply offset the exposure level from the internal meter's estimate of appropriate exposure. Frequently calibrated in stops, also known as EV units, a "+1" exposure compensation setting indicates one stop more (twice as much) exposure and "–1" means one stop less (half as much) exposure.
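Since each stop is a factor of two, the compensation setting maps to a simple exposure multiplier; a minimal sketch (the function name is ours):

```python
def exposure_factor(compensation_stops: float) -> float:
    """Multiplicative change in exposure for a compensation given in stops (EV units)."""
    return 2.0 ** compensation_stops

print(exposure_factor(+1))    # 2.0   -> one stop more: twice the exposure
print(exposure_factor(-1))    # 0.5   -> one stop less: half the exposure
print(exposure_factor(1/3))   # ~1.26 -> a third of a stop more
```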
Exposure compensation is particularly useful in combination with auto-exposure mode, as it allows the photographer to bias the exposure level without resorting to full manual exposure and losing the flexibility of auto exposure. On low-end video camcorders, exposure compensation may be the only manual exposure control available.
Exposure control
An appropriate exposure for a photograph is determined by the sensitivity of the medium used. For photographic film, sensitivity is referred to as film speed and is measured on a scale published by the International Organization for Standardization (ISO). Faster film, that is, film with a higher ISO rating, requires less exposure to make a readable image. Digital cameras usually have variable ISO settings that provide additional flexibility. Exposure is a combination of the length of time and the illuminance at the photosensitive material. Exposure time is controlled in a camera by shutter speed, and the illuminance depends on the lens aperture and the scene luminance. Slower shutter speeds (exposing the medium for a longer period of time), greater lens apertures (admitting more light), and higher-luminance scenes produce greater exposures.
An approximately correct exposure will be obtained on a sunny day using ISO 100 film, an aperture of f/16, and a shutter speed of 1/100 of a second. This is called the sunny 16 rule: at an aperture of f/16 on a sunny day, a suitable shutter speed will be one over the film speed (or the closest equivalent).
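A minimal sketch of the rule as stated, generalized to other apertures via the inverse-square dependence of exposure on the f-number (the function name and the generalization are ours):

```python
def sunny_16_shutter(iso: float, f_number: float = 16.0) -> float:
    """Suggested shutter time in seconds for a sunny day.

    At f/16 the rule gives t = 1/ISO; other apertures scale the time
    by (N/16)^2, since exposure is proportional to t / N^2.
    """
    return (f_number / 16.0) ** 2 / iso

print(1 / sunny_16_shutter(100))       # 100.0 -> 1/100 s at f/16, ISO 100
print(1 / sunny_16_shutter(100, 5.6))  # ~816  -> roughly 1/800 s at f/5.6
```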
A scene can be exposed in many ways, depending on the desired effect a photographer wishes to convey.
Reciprocity
An important principle of exposure is reciprocity. If one exposes the film or sensor for a longer period, a reciprocally smaller aperture is required to reduce the amount of light hitting the film and obtain the same exposure. For example, the photographer may prefer to make a sunny-16 shot at an aperture of f/5.6 (to obtain a shallow depth of field). As f/5.6 is 3 stops "faster" than f/16, with each stop meaning double the amount of light, a new shutter speed of (1/125)/(2·2·2) = 1/1000 s is needed (taking 1/125 s as the standard shutter speed nearest to 1/100 s). Once the photographer has determined the exposure, aperture stops can be traded for halvings or doublings of speed, within limits.
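The same trade can be written as a one-line calculation; a minimal sketch (the function name is ours):

```python
def equivalent_shutter(t1: float, n1: float, n2: float) -> float:
    """Shutter time preserving exposure when the aperture changes from f/n1 to f/n2.

    Exposure is proportional to t / N^2, so t2 = t1 * (n2 / n1)^2.
    """
    return t1 * (n2 / n1) ** 2

# Opening up three stops, from f/16 to f/5.6, starting at 1/125 s:
print(1 / equivalent_shutter(1 / 125, 16, 5.6))  # ~1020 -> about 1/1000 s
```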
The true characteristic of most photographic emulsions is not actually linear (see sensitometry), but it is close enough over the exposure range of about 1 second to 1/1000 of a second. Outside of this range, it becomes necessary to increase the exposure from the calculated value to account for this characteristic of the emulsion. This characteristic is known as reciprocity failure. The film manufacturer's data sheets should be consulted to arrive at the correction required, as different emulsions have different characteristics.
Digital camera image sensors can also be subject to a form of reciprocity failure.
Determining exposure
The Zone System is another method of determining exposure: by choosing the exposure and the film development together, the photographer varies the contrast of the film to fit the contrast capability of the print, achieving a greater tonal range than conventional methods. Digital cameras can achieve similar results (high dynamic range) by combining several different exposures (varying shutter or diaphragm) made in quick succession.
Today, most cameras automatically determine the correct exposure at the time of taking a photograph by using a built-in light meter, or multiple point meters interpreted by a built-in computer (see metering mode).
Negative and print film tends to be biased toward exposing for the shadow areas (film responds poorly to being starved of light), while digital favours exposing for the highlights. See latitude below.
Latitude
Latitude is the degree by which one can over- or underexpose an image and still recover an acceptable level of quality from the exposure. Typically negative film has a better ability to record a range of brightness than slide/transparency film or digital. Digital should be considered the reverse of print film, with good latitude in the shadow range and narrow latitude in the highlights, in contrast to film's large highlight latitude and narrow shadow latitude. Slide/transparency film has narrow latitude in both highlight and shadow areas, requiring greater exposure accuracy.
Negative film's latitude increases somewhat with high ISO material, in contrast digital tends to narrow on latitude with high ISO settings.
Highlights
Areas of a photo where information is lost due to extreme brightness are described as having "blown-out highlights" or "flared highlights".
In digital images this information loss is often irreversible, though small problems can be made less noticeable using photo manipulation software. Recording to RAW format can correct this problem to some degree, as can using a digital camera with a better sensor.
Film can often have areas of extreme overexposure but still record detail in those areas. This information is usually somewhat recoverable when printing or transferring to digital.
A loss of highlights in a photograph is usually undesirable, but in some cases can be considered to "enhance" appeal. Examples include black and white photography and portraits with an out-of-focus background.
Blacks
Areas of a photo where information is lost due to extreme darkness are described as "crushed blacks". Digital capture tends to be more tolerant of underexposure, allowing better recovery of shadow detail, than same-ISO negative print film.
Crushed blacks cause loss of detail, but can be used for artistic effect.
| Technology | Photography | null |
166931 | https://en.wikipedia.org/wiki/River%20delta | River delta | A river delta is a triangular landform created by the deposition of the sediments that are carried by the waters of a river, where the river merges with a body of slow-moving water or with a body of stagnant water. The creation of a river delta occurs at the river mouth, where the river merges into an ocean, a sea, or an estuary, into a lake, a reservoir, or (more rarely) into another river that cannot carry away the sediment supplied by the feeding river. Etymologically, the term river delta derives from the triangular shape (Δ) of the uppercase Greek letter delta. In hydrology, the dimensions of a river delta are determined by the balance between the watershed processes that supply sediment and the watershed processes that redistribute, sequester, and export the supplied sediment into the receiving basin.
River deltas are important in human civilization, as they are major agricultural production centers and population centers. They can provide coastline defence and can impact drinking water supply. They are also ecologically important, with different species' assemblages depending on their landscape position. On geologic timescales, they are also important carbon sinks.
Etymology
A river delta is so named because the shape of the Nile Delta approximates the triangular uppercase Greek letter delta. The triangular shape of the Nile Delta was known to audiences of classical Athenian drama; the tragedy Prometheus Bound by Aeschylus refers to it as the "triangular Nilotic land", though not as a "delta". Herodotus's description of Egypt in his Histories mentions the Delta fourteen times, as "the Delta, as it is called by the Ionians", including describing the outflow of silt into the sea and the convexly curved seaward side of the triangle. Despite making comparisons to the deltas of other river systems, Herodotus did not describe them as "deltas". The Greek historian Polybius likened the land between the Rhône and Isère rivers to the Nile Delta, referring to both as islands, but did not apply the word delta. According to the Greek geographer Strabo, the Cynic philosopher Onesicritus of Astypalaea, who accompanied Alexander the Great's conquests in India, reported that Patalene (the delta of the Indus River) was "a delta". The Roman author Arrian's Indica states that "the delta of the land of the Indians is made by the Indus river no less than is the case with that of Egypt".
As a generic term for the landform at the mouth of the river, the word delta is first attested in the English-speaking world in the late 18th century, in the work of Edward Gibbon.
Formation
River deltas form when a river carrying sediment reaches a body of water, such as a lake, ocean, or a reservoir. When the flow enters the standing water, it is no longer confined to its channel and expands in width. This flow expansion results in a decrease in the flow velocity, which diminishes the ability of the flow to transport sediment. As a result, sediment drops out of the flow and is deposited as alluvium, which builds up to form the river delta. Over time, this single channel builds a deltaic lobe (such as the bird's-foot of the Mississippi or Ural river deltas), pushing its mouth into the standing water. As the deltaic lobe advances, the gradient of the river channel becomes lower because the river channel is longer but has the same change in elevation (see slope).
As the gradient of the river channel decreases, the amount of shear stress on the bed decreases, which results in the deposition of sediment within the channel and a rise in the channel bed relative to the floodplain. This destabilizes the river channel. If the river breaches its natural levees (such as during a flood), it spills out into a new course with a shorter route to the ocean, thereby obtaining a steeper, more stable gradient. Typically, when the river switches channels in this manner, some of its flow remains in the abandoned channel. Repeated channel-switching events build up a mature delta with a distributary network.
Another way these distributary networks form is from the deposition of mouth bars (mid-channel sand and/or gravel bars at the mouth of a river). When this mid-channel bar is deposited at the mouth of a river, the flow is routed around it. This results in additional deposition on the upstream end of the mouth bar, which splits the river into two distributary channels. A good example of the result of this process is the Wax Lake Delta.
In both of these cases, depositional processes force redistribution of deposition from areas of high deposition to areas of low deposition. This results in the smoothing of the planform (or map-view) shape of the delta as the channels move across its surface and deposit sediment. Because the sediment is laid down in this fashion, the shape of these deltas approximates a fan. The more often the flow changes course, the closer the shape comes to an ideal fan, because more rapid changes in channel position result in a more uniform deposition of sediment on the delta front. The Mississippi and Ural River deltas, with their bird's feet, are examples of rivers that do not avulse often enough to form a symmetrical fan shape. Alluvial fan deltas, as their name suggests, avulse frequently and more closely approximate an ideal fan shape.
Most large river deltas discharge to intra-cratonic basins on the trailing edges of passive margins, because the majority of large rivers, such as the Mississippi, Nile, Amazon, Ganges, Indus, Yangtze, and Yellow River, discharge along passive continental margins. This phenomenon is due mainly to three factors: topography, basin area, and basin elevation. Topography along passive margins tends to be more gradual and spread over a greater area, enabling sediment to pile up and accumulate over time to form large river deltas. Topography along active margins tends to be steeper and less widespread, so sediments cannot pile up and accumulate; instead they travel into a steep subduction trench rather than onto a shallow continental shelf.
There are many other lesser factors that could explain why the majority of river deltas form along passive margins rather than active margins. Along active margins, orogenic tectonic activity produces over-steepened slopes, brecciated rocks, and volcanism, so deltas form closer to the sediment source. When sediment does not travel far from the source, the sediments that build up are coarser grained and more loosely consolidated, making delta formation more difficult. Because tectonic activity on active margins forces river deltas to form closer to the sediment source, it may also affect channel avulsion, delta lobe switching, and autocyclicity. Active margin river deltas tend to be much smaller and less abundant but may transport similar amounts of sediment. However, the sediment is never piled up in thick sequences, as it travels into and is deposited in deep subduction trenches.
At the mouth of a river, the change in flow conditions can cause the river to drop any sediment it is carrying. This sediment deposition can generate a variety of landforms, such as deltas, sand bars, spits, and tie channels. Landforms at the river mouth drastically alter the geomorphology and ecosystem.
Types
Deltas are typically classified according to the main control on deposition, which is a combination of river, wave, and tidal processes, depending on the strength of each. The other two factors that play a major role are landscape position and the grain size distribution of the source sediment entering the delta from the river.
Fluvial-dominated deltas
Fluvial-dominated deltas are found in areas of low tidal range and low wave energy. Where the river water is nearly equal in density to the basin water, the delta is characterized by homopycnal flow, in which the river water rapidly mixes with basin water and abruptly dumps most of its sediment load. Where the river water has a higher density than basin water, typically from a heavy load of sediment, the delta is characterized by hyperpycnal flow in which the river water hugs the basin bottom as a density current that deposits its sediments as turbidites. When the river water is less dense than the basin water, as is typical of river deltas on an ocean coastline, the delta is characterized by hypopycnal flow in which the river water is slow to mix with the denser basin water and spreads out as a surface fan. This allows fine sediments to be carried a considerable distance before settling out of suspension. Beds in a hypopycnal delta dip at a very shallow angle, around 1 degree.
Fluvial-dominated deltas are further distinguished by the relative importance of the inertia of rapidly flowing water, the importance of turbulent bed friction beyond the river mouth, and buoyancy. Outflow dominated by inertia tends to form Gilbert-type deltas. Outflow dominated by turbulent friction is prone to channel bifurcation, while buoyancy-dominated outflow produces long distributaries with narrow subaqueous natural levees and few channel bifurcations.
The modern Mississippi River delta is a good example of a fluvial-dominated delta whose outflow is buoyancy-dominated. Channel abandonment has been frequent, with seven distinct channels active over the last 5000 years. Other fluvial-dominated deltas include the Mackenzie delta and the Alta delta.
Gilbert deltas
A Gilbert delta (named after Grove Karl Gilbert) is a type of fluvial-dominated delta formed from coarse sediments, as opposed to gently sloping muddy deltas such as that of the Mississippi. For example, a mountain river depositing sediment into a freshwater lake would form this kind of delta.
It is commonly a result of homopycnal flow. Such deltas are characterized by a tripartite structure of topset, foreset, and bottomset beds. River water entering the lake rapidly deposits its coarser sediments on the submerged face of the delta, forming steeply dipping foreset beds. The finer sediments are deposited on the lake bottom beyond this steep slope as more gently dipping bottomset beds. Behind the delta front, braided channels deposit the gently dipping beds of the topset on the delta plain.
While some authors describe both lacustrine and marine locations of Gilbert deltas, others note that their formation is more characteristic of the freshwater lakes, where it is easier for the river water to mix with the lakewater faster (as opposed to the case of a river falling into the sea or a salt lake, where less dense fresh water brought by the river stays on top longer). Gilbert himself first described this type of delta on Lake Bonneville in 1885. Elsewhere, similar structures occur, for example, at the mouths of several creeks that flow into Okanagan Lake in British Columbia and form prominent peninsulas at Naramata, Summerland, and Peachland.
Wave-dominated deltas
In wave-dominated deltas, wave-driven sediment transport controls the shape of the delta, and much of the sediment emanating from the river mouth is deflected along the coastline. The relationship between waves and river deltas is quite variable and largely influenced by the deepwater wave regimes of the receiving basin. With a high wave energy near shore and a steeper slope offshore, waves will make river deltas smoother. Waves can also be responsible for carrying sediments away from the river delta, causing the delta to retreat. For deltas that form further upriver in an estuary, there are complex yet quantifiable linkages between winds, tides, river discharge, and delta water levels.
Tide-dominated deltas
Erosion is also an important control in tide-dominated deltas, such as the Ganges Delta, which may be mainly submarine, with prominent sandbars and ridges. This tends to produce a "dendritic" structure. Tidal deltas behave differently from river-dominated and wave-dominated deltas, which tend to have a few main distributaries. Once a wave-dominated or river-dominated distributary silts up, it is abandoned, and a new channel forms elsewhere. In a tidal delta, new distributaries are formed during times when there is a lot of water around – such as floods or storm surges. These distributaries slowly silt up at a more or less constant rate until they fizzle out.
Tidal freshwater deltas
A tidal freshwater delta is a sedimentary deposit formed at the boundary between an upland stream and an estuary, in the region known as the "subestuary". Drowned coastal river valleys that were inundated by rising sea levels during the late Pleistocene and subsequent Holocene tend to have dendritic estuaries with many feeder tributaries. Each tributary has a salinity gradient running from its brackish junction with the mainstem estuary up to the fresh stream feeding the head of tidal propagation. As a result, the tributaries are considered to be "subestuaries". The origin and evolution of a tidal freshwater delta involve processes that are typical of all deltas as well as processes that are unique to the tidal freshwater setting. The combination of processes that create a tidal freshwater delta results in a distinct morphology and unique environmental characteristics. Many tidal freshwater deltas that exist today are directly caused by the onset of or changes in historical land use, especially deforestation, intensive agriculture, and urbanization. These ideas are well illustrated by the many tidal freshwater deltas prograding into Chesapeake Bay along the east coastline of the United States. Research has demonstrated that the accumulating sediments in this estuary derive from post-European settlement deforestation, agriculture, and urban development.
Estuaries
Other rivers, particularly those on coasts with significant tidal range, do not form a delta but enter into the sea in the form of an estuary. Notable examples include the Gulf of Saint Lawrence and the Tagus estuary.
Inland deltas
In rare cases, the river delta is located inside a large valley and is called an inverted river delta. Sometimes a river divides into multiple branches in an inland area, only to rejoin and continue to the sea. Such an area is called an inland delta, and often occurs on former lake beds. The term was first coined by Alexander von Humboldt for the middle reaches of the Orinoco River, which he visited in 1800. Other prominent examples include the Inner Niger Delta, Peace–Athabasca Delta, the Sacramento–San Joaquin River Delta, and the Sistan delta of Iran. The Danube has one in the valley on the Slovak–Hungarian border between Bratislava and Iža.
In some cases, a river flowing into a flat arid area splits into channels that evaporate as it progresses into the desert. The Okavango Delta in Botswana is one example. See endorheic basin.
Mega deltas
The generic term mega delta can be used to describe very large Asian river deltas, such as the Yangtze, Pearl, Red, Mekong, Irrawaddy, Ganges-Brahmaputra, and Indus.
Sedimentary structure
The formation of a delta involves multiple depositional episodes that cross-cut one another over time, but in a simple delta three main types of bedding may be distinguished: the bottomset beds, the foreset/frontset beds, and the topset beds. This three-part structure may also be seen on a small scale in crossbedding.
The bottomset beds are created from the lightest suspended particles that settle farthest away from the active delta front, as the river flow diminishes into the standing body of water and loses energy. This suspended load is deposited by sediment gravity flow, creating a turbidite. These beds are laid down in horizontal layers and consist of the finest grain sizes.
The foreset beds in turn are deposited in inclined layers over the bottomset beds as the active lobe advances. Foreset beds form the greater part of the bulk of a delta, (and also occur on the lee side of sand dunes). The sediment particles within foreset beds consist of larger and more variable sizes, and constitute the bed load that the river moves downstream by rolling and bouncing along the channel bottom. When the bed load reaches the edge of the delta front, it rolls over the edge, and is deposited in steeply dipping layers over the top of the existing bottomset beds. Underwater, the slope of the outermost edge of the delta is created at the angle of repose of these sediments. As the foresets accumulate and advance, subaqueous landslides occur and readjust overall slope stability. The foreset slope, thus created and maintained, extends the delta lobe outward. In cross section, foresets typically lie in angled, parallel bands, and indicate stages and seasonal variations during the creation of the delta.
The topset beds of an advancing delta are deposited in turn over the previously laid foresets, truncating or covering them. Topsets are nearly horizontal layers of smaller-sized sediment deposited on the top of the delta and form an extension of the landward alluvial plain. As the river channels meander laterally across the top of the delta, the river is lengthened and its gradient is reduced, causing the suspended load to settle out in nearly horizontal beds over the delta's top. Topset beds are subdivided into two regions: the upper delta plain and the lower delta plain. The upper delta plain is unaffected by the tide, while the boundary with the lower delta plain is defined by the upper limit of tidal influence.
Existential threats to deltas
Human activities in both deltas and the river basins upstream of deltas can radically alter delta environments. Upstream land use change such as anti-erosion agricultural practices and hydrological engineering such as dam construction in the basins feeding deltas have reduced river sediment delivery to many deltas in recent decades. This change means that there is less sediment available to maintain delta landforms, and compensate for erosion and sea level rise, causing some deltas to start losing land. Declines in river sediment delivery are projected to continue in the coming decades.
The extensive anthropogenic activities in deltas also interfere with geomorphological and ecological delta processes. People living on deltas often construct flood defences which prevent sedimentation from floods on deltas, and therefore means that sediment deposition can not compensate for subsidence and erosion. In addition to interference with delta aggradation, pumping of groundwater, oil, and gas, and constructing infrastructure all accelerate subsidence, increasing relative sea level rise. Anthropogenic activities can also destabilise river channels through sand mining, and cause saltwater intrusion. There are small-scale efforts to correct these issues, improve delta environments and increase environmental sustainability through sedimentation enhancing strategies.
While nearly all deltas have been impacted to some degree by humans, the Nile Delta and Colorado River Delta are some of the most extreme examples of the devastation caused to deltas by damming and diversion of water.
Historical data documents show that during the Roman Empire and Little Ice Age (times when there was considerable anthropogenic pressure), there was significant sediment accumulation in deltas. The industrial revolution has only amplified the impact of humans on delta growth and retreat.
Deltas in the economy
Ancient deltas benefit the economy due to their well-sorted sand and gravel. Sand and gravel are often quarried from these old deltas and used in concrete for highways, buildings, sidewalks, and landscaping. More than 1 billion tons of sand and gravel are produced in the United States alone. Not all sand and gravel quarries are former deltas, but for ones that are, much of the sorting is already done by the power of water.
Urban areas and human habitation tend to be located in lowlands near water access for transportation and sanitation. This makes deltas a common location for civilizations to flourish due to access to flat land for farming, freshwater for sanitation and irrigation, and sea access for trade. Deltas often host extensive industrial and commercial activities, which frequently compete with agricultural land use. Some of the world's largest regional economies are located on deltas, such as the Pearl River Delta, the Yangtze River Delta, the European Low Countries, and the Greater Tokyo Area.
Examples
The Ganges–Brahmaputra Delta, which spans most of Bangladesh and West Bengal and empties into the Bay of Bengal, is the world's largest delta.
The Selenga River delta in the Russian republic of Buryatia is the largest delta emptying into a body of fresh water, in its case Lake Baikal.
Deltas on Mars
Researchers have found a number of examples of deltas that formed in Martian lakes. Finding deltas is a major sign that Mars once had large amounts of water. Deltas have been found over a wide geographical range. Below are pictures of a few.
| Physical sciences | Hydrology | null |
166945 | https://en.wikipedia.org/wiki/Cartilage | Cartilage | Cartilage is a resilient and smooth type of connective tissue. Semi-transparent and non-porous, it is usually covered by a tough and fibrous membrane called perichondrium. In tetrapods, it covers and protects the ends of long bones at the joints as articular cartilage, and is a structural component of many body parts including the rib cage, the neck and the bronchial tubes, and the intervertebral discs. In other taxa, such as chondrichthyans and cyclostomes, it constitutes a much greater proportion of the skeleton. It is not as hard and rigid as bone, but it is much stiffer and much less flexible than muscle. The matrix of cartilage is made up of glycosaminoglycans, proteoglycans, collagen fibers and, sometimes, elastin. It usually grows quicker than bone.
Because of its rigidity, cartilage often serves the purpose of holding tubes open in the body. Examples include the rings of the trachea, such as the cricoid cartilage and carina.
Cartilage is composed of specialized cells called chondrocytes that produce a large amount of collagenous extracellular matrix, abundant ground substance that is rich in proteoglycan and elastin fibers. Cartilage is classified into three types: elastic cartilage, hyaline cartilage, and fibrocartilage, which differ in their relative amounts of collagen and proteoglycan.
As cartilage does not contain blood vessels or nerves, it is insensitive. However, some fibrocartilage such as the meniscus of the knee has partial blood supply. Nutrition is supplied to the chondrocytes by diffusion. The compression of the articular cartilage or flexion of the elastic cartilage generates fluid flow, which assists the diffusion of nutrients to the chondrocytes. Compared to other connective tissues, cartilage has a very slow turnover of its extracellular matrix and is documented to repair at only a very slow rate relative to other tissues.
Structure
Development
In embryogenesis, the skeletal system is derived from the mesoderm germ layer. Chondrification (also known as chondrogenesis) is the process by which cartilage is formed from condensed mesenchyme tissue, which differentiates into chondroblasts and begins secreting the molecules (aggrecan and collagen type II) that form the extracellular matrix. In all vertebrates, cartilage is the main skeletal tissue in early ontogenetic stages; in osteichthyans, many cartilaginous elements subsequently ossify through endochondral and perichondral ossification.
Following the initial chondrification that occurs during embryogenesis, cartilage growth consists mostly of the maturing of immature cartilage to a more mature state. The division of cells within cartilage occurs very slowly, and thus growth in cartilage is usually not based on an increase in the size or mass of the cartilage itself. Non-coding RNAs (e.g., miRNAs and long non-coding RNAs) have been identified as important epigenetic modulators of chondrogenesis, which also accounts for their contribution to various cartilage-dependent pathological conditions such as arthritis.
Articular cartilage
The articular cartilage function is dependent on the molecular composition of the extracellular matrix (ECM). The ECM consists mainly of proteoglycan and collagens. The main proteoglycan in cartilage is aggrecan, which, as its name suggests, forms large aggregates with hyaluronan and with itself. These aggregates are negatively charged and hold water in the tissue. The collagen, mostly collagen type II, constrains the proteoglycans. The ECM responds to tensile and compressive forces that are experienced by the cartilage. Cartilage growth thus refers to the matrix deposition, but can also refer to both the growth and remodeling of the extracellular matrix. Due to the great stress on the patellofemoral joint during resisted knee extension, the articular cartilage of the patella is among the thickest in the human body. The ECM of articular cartilage is classified into three regions: the pericellular matrix, the territorial matrix, and the interterritorial matrix.
Function
Mechanical properties
The mechanical properties of articular cartilage in load-bearing joints such as the knee and hip have been studied extensively at macro, micro, and nano-scales. These mechanical properties include the response of cartilage in frictional, compressive, shear and tensile loading. Cartilage is resilient and displays viscoelastic properties.
Since cartilage contains free-moving interstitial fluid, the material is difficult to test. One of the tests commonly used to overcome this obstacle is a confined compression test, which can be run in either a 'creep' or a 'relaxation' mode. In creep mode, the tissue displacement is measured as a function of time under a constant load; in relaxation mode, the force is measured as a function of time under a constant displacement. In creep mode, the deformation of the tissue shows two main regions: in the first, the displacement is rapid due to the initial flow of fluid out of the cartilage, and in the second, the displacement slows down to an eventual constant equilibrium value. Under the commonly used loading conditions, the equilibrium displacement can take hours to reach.
In both the creep mode and the relaxation mode of a confined compression test, a disc of cartilage is placed in an impervious, fluid-filled container and covered with a porous plate that restricts the flow of interstitial fluid to the vertical direction. This test can be used to measure the aggregate modulus of cartilage, which is typically in the range of 0.5 to 0.9 MPa for articular cartilage, and the Young’s Modulus, which is typically 0.45 to 0.80 MPa. The aggregate modulus is “a measure of the stiffness of the tissue at equilibrium when all fluid flow has ceased”, and Young’s modulus is a measure of how much a material strains (changes length) under a given stress.
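A minimal sketch of the equilibrium calculation implied above (the function name and the sample numbers are illustrative, not measured data):

```python
def aggregate_modulus(equilibrium_stress_pa: float, equilibrium_strain: float) -> float:
    """Aggregate modulus H_A: equilibrium stress over strain in confined compression."""
    return equilibrium_stress_pa / equilibrium_strain

# e.g. 0.07 MPa equilibrium stress at 10% strain:
print(aggregate_modulus(0.07e6, 0.10) / 1e6, "MPa")  # 0.7 MPa, within 0.5-0.9 MPa
```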
The confined compression test can also be used to measure permeability, which characterizes the ease of fluid flow through a material. Higher permeability allows fluid to flow out of a material's matrix more rapidly, while lower permeability restricts the outflow and slows the approach to equilibrium. Typically, the permeability of articular cartilage is in the range of 10⁻¹⁵ to 10⁻¹⁶ m⁴/(N·s). However, permeability is sensitive to loading conditions and testing location. For example, permeability varies throughout articular cartilage and tends to be highest near the joint surface and lowest near the bone (the "deep zone"). Permeability also decreases under increased loading of the tissue.
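A minimal sketch of a Darcy-type permeability estimate consistent with the cited units (the function name, the measurement setup, and the numbers are assumptions for illustration):

```python
def darcy_permeability(flow_speed_m_s: float, pressure_drop_pa: float, thickness_m: float) -> float:
    """Permeability k = v / (dp/h), in m^4/(N*s).

    v is the fluid speed through the sample, dp the pressure drop across it,
    and h its thickness; k maps a pressure gradient to a flow speed.
    """
    return flow_speed_m_s / (pressure_drop_pa / thickness_m)

# Illustrative numbers only: a 1 mm disc, 100 kPa drop, 10 nm/s flow:
print(darcy_permeability(1e-8, 1e5, 1e-3))  # 1e-16 m^4/(N*s), in the cited range
```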
Indentation testing is an additional type of test commonly used to characterize cartilage. Indentation testing involves using an indentor (usually <0.8 mm) to measure the displacement of the tissue under constant load. Similar to confined compression testing, it may take hours to reach equilibrium displacement. This method of testing can be used to measure the aggregate modulus, Poisson's ratio, and permeability of the tissue. Initially, there was a misconception that due to its predominantly water-based composition, cartilage had a Poisson's ratio of 0.5 and should be modeled as an incompressible material. However, subsequent research has disproven this belief. The Poisson’s ratio of articular cartilage has been measured to be around 0.4 or lower in humans and ranges from 0.46–0.5 in bovine subjects.
The mechanical properties of articular cartilage are largely anisotropic, test-dependent, and can be age-dependent. These properties also depend on collagen-proteoglycan interactions and therefore can increase/decrease depending on the total content of water, collagen, glycoproteins, etc. For example, increased glycosaminoglycan content leads to an increase in compressive stiffness, and increased water content leads to a lower aggregate modulus.
Tendon-bone interface
In addition to its role in load-bearing joints, cartilage serves a crucial function as a gradient material between softer tissues and bone. Mechanical gradients are essential for the body's function, and for complex artificial structures including joint implants. Interfaces with mismatched material properties lead to areas of high stress concentration which, over the millions of loading cycles experienced by human joints over a lifetime, would eventually lead to failure. For example, the elastic modulus of human bone is roughly 20 GPa, while the softer regions of cartilage can be about 0.5 to 0.9 MPa. When there is a smooth gradient of material properties, however, stresses are distributed evenly across the interface, which puts less wear on each individual part.
The body solves this problem with stiffer, higher-modulus layers near the bone, containing high concentrations of mineral deposits such as hydroxyapatite. Collagen fibers (which provide mechanical stiffness in cartilage) in this region are anchored directly to bones, reducing the possible deformation. Moving closer to soft tissue, into the region known as the tidemark, the density of chondrocytes increases and collagen fibers are rearranged to optimize for stress dissipation and low friction. The outermost layer near the articular surface is known as the superficial zone, which primarily serves as a lubrication region. Here cartilage is characterized by a dense extracellular matrix and is rich in proteoglycans (which expel and reabsorb water to soften impacts) and thin collagen fibers oriented parallel to the joint surface, which have excellent shear resistance.
Osteoarthritis and natural aging both have negative effects on cartilage as a whole, as well as on the proper function of the material gradient within. The earliest changes are often in the superficial zone, the softest and most lubricating part of the tissue. Degradation of this layer can put additional stresses on deeper layers which are not designed to support the same deformations. Another common effect of aging is increased crosslinking of collagen fibers. This leads to stiffer cartilage as a whole, which again can lead to early failure, as stiffer tissue is more susceptible to fatigue-based failure. Aging in calcified regions also generally leads to a larger number of mineral deposits, which has a similarly undesired stiffening effect. Osteoarthritis has more extreme effects and can entirely wear down cartilage, causing direct bone-to-bone contact.
Frictional properties
Lubricin, a glycoprotein abundant in cartilage and synovial fluid, plays a major role in bio-lubrication and wear protection of cartilage.
Repair
Cartilage has limited repair capabilities: because chondrocytes are bound in lacunae, they cannot migrate to damaged areas. Therefore, cartilage damage is difficult to heal. Also, because hyaline cartilage does not have a blood supply, the deposition of new matrix is slow. In recent years, surgeons and scientists have developed a series of cartilage repair procedures that help to postpone the need for joint replacement. A tear of the meniscus of the knee cartilage can often be surgically trimmed to reduce problems. Complete healing of cartilage after injury or repair procedures is hindered by cartilage-specific inflammation caused by the involvement of M1/M2 macrophages, mast cells, and their intercellular interactions.
Biological engineering techniques are being developed to generate new cartilage, using a cellular "scaffolding" material and cultured cells to grow artificial cartilage. Extensive research has been conducted on freeze-thawed PVA hydrogels as a base material for this purpose. These gels have shown great promise in terms of biocompatibility, wear resistance, shock absorption, friction coefficient, flexibility, and lubrication, and are thus considered superior to polyethylene-based artificial cartilage. A two-year implantation of PVA hydrogels as an artificial meniscus in rabbits showed that the gels remained intact without degradation, fracture, or loss of properties.
Clinical significance
Disease
Several diseases can affect cartilage. Chondrodystrophies are a group of diseases, characterized by the disturbance of growth and subsequent ossification of cartilage. Some common diseases that affect the cartilage are listed below.
Osteoarthritis: Osteoarthritis is a disease of the whole joint, however, one of the most affected tissues is the articular cartilage. The cartilage covering bones (articular cartilage—a subset of hyaline cartilage) is thinned, eventually completely wearing away, resulting in a "bone against bone" within the joint, leading to reduced motion, and pain. Osteoarthritis affects the joints exposed to high stress and is therefore considered the result of "wear and tear" rather than a true disease. It is treated by arthroplasty, the replacement of the joint by a synthetic joint, often made of a metal alloy such as cobalt-chromium and ultra-high-molecular-weight polyethylene. Chondroitin sulfate or glucosamine sulfate supplements have been claimed to reduce the symptoms of osteoarthritis, but there is little good evidence to support this claim. In osteoarthritis, increased expression of inflammatory cytokines and chemokines cause aberrant changes in differentiated chondrocytes function which leads to an excess of chondrocyte catabolic activity, mediated by factors including matrix metalloproteinases and aggrecanases.
Traumatic rupture or detachment: The cartilage in the knee is frequently damaged but can be partially repaired through knee cartilage replacement therapy. Often when athletes talk of damaged "cartilage" in their knee, they are referring to a damaged meniscus (a fibrocartilage structure) and not the articular cartilage.
Achondroplasia: Reduced proliferation of chondrocytes in the epiphyseal plate of long bones during infancy and childhood, resulting in dwarfism.
Costochondritis: Inflammation of cartilage in the ribs, causing chest pain.
Spinal disc herniation: Asymmetrical compression of an intervertebral disc ruptures the sac-like disc, causing a herniation of its soft content. The hernia often compresses the adjacent nerves and causes back pain.
Relapsing polychondritis: a destruction, probably autoimmune, of cartilage, especially of the nose and ears, causing disfiguration. Death occurs by asphyxiation as the larynx loses its rigidity and collapses.
Tumors made up of cartilage tissue, either benign or malignant, can occur. They usually appear in bone, rarely in pre-existing cartilage. The benign tumors are called chondroma, the malignant ones chondrosarcoma. Tumors arising from other tissues may also produce a cartilage-like matrix, the best-known being pleomorphic adenoma of the salivary glands.
The matrix of cartilage acts as a barrier, preventing the entry of lymphocytes or diffusion of immunoglobulins. This property allows for the transplantation of cartilage from one individual to another without fear of tissue rejection.
Imaging
Cartilage does not absorb X-rays under normal in vivo conditions, but a dye can be injected into the synovial membrane that will cause the X-rays to be absorbed by the dye. The resulting void on the radiographic film between the bone and meniscus represents the cartilage. For in vitro scans, the outer soft tissue is most likely removed, so the boundary between the cartilage and the air provides enough contrast to reveal the presence of cartilage, owing to the refraction of the X-rays.
Other animals
Cartilaginous fish
Cartilaginous fish (Chondrichthyes), that is, sharks, rays, and chimaeras, have a skeleton composed entirely of cartilage.
Invertebrate cartilage
Cartilage tissue can also be found among some arthropods such as horseshoe crabs, some mollusks such as marine snails and cephalopods, and some annelids like sabellid polychaetes.
Arthropods
The most studied cartilage in arthropods is the branchial cartilage of Limulus polyphemus. It is a vesicular cell-rich cartilage, owing to its large, spherical, and vacuolated chondrocytes, and has no homologues in other arthropods. Another type of cartilage found in L. polyphemus is the endosternite cartilage, a fibrous-hyaline cartilage with chondrocytes of typical morphology in a fibrous component, much more fibrous than vertebrate hyaline cartilage, with mucopolysaccharides immunoreactive against chondroitin sulfate antibodies. There are homologous tissues to the endosternite cartilage in other arthropods. The embryos of Limulus polyphemus express ColA and hyaluronan in the gill cartilage and the endosternite, which indicates that these tissues are fibrillar-collagen-based cartilage. The endosternite cartilage forms close to Hh-expressing ventral nerve cords and expresses ColA and SoxE, a Sox9 analog. This is also seen in gill cartilage tissue.
Mollusks
In cephalopods, the models used for the studies of cartilage are Octopus vulgaris and Sepia officinalis. The cephalopod cranial cartilage is the invertebrate cartilage that shows more resemblance to the vertebrate hyaline cartilage. The growth is thought to take place throughout the movement of cells from the periphery to the center. The chondrocytes present different morphologies related to their position in the tissue.
The embryos of S. officinalis express ColAa, ColAb, and hyaluronan in the cranial cartilages and other regions of chondrogenesis. This implies that the cartilage is fibrillar-collagen-based. The S. officinalis embryo expresses hh, whose presence causes ColAa and ColAb expression and is also able to keep proliferating cells undifferentiated. It has been observed that this species presents expression of SoxD and SoxE, analogs of the vertebrate Sox5/6 and Sox9, in the developing cartilage. The cartilage growth pattern is the same as in vertebrate cartilage.
In gastropods, the interest lies in the odontophore, a cartilaginous structure that supports the radula. The most studied species regarding this particular tissue is Busycotypus canaliculatus. The odontophore is a vesicular cell-rich cartilage, consisting of vacuolated cells containing myoglobin, surrounded by a small amount of extracellular matrix containing collagen. In Lymnaea and other mollusks that graze vegetation, the odontophore contains muscle cells along with the chondrocytes.
Sabellid polychaetes
The sabellid polychaetes, or feather duster worms, have cartilage tissue with cellular and matrix specialization supporting their tentacles. It presents two distinct extracellular matrix regions: an acellular fibrous region with a high collagen content, called the cartilage-like matrix, and a collagen-lacking, highly cellularized core, called the osteoid-like matrix. The cartilage-like matrix surrounds the osteoid-like matrix, and the amount of the acellular fibrous region is variable. The model organisms used in the study of cartilage in sabellid polychaetes are Potamilla species and Myxicola infundibulum.
Plants and fungi
Vascular plants (particularly their seeds) and the stems of some mushrooms are sometimes called "cartilaginous", although they contain no cartilage.
| Biology and health sciences | Tissues | null |
167001 | https://en.wikipedia.org/wiki/Concrete%20category | Concrete category | In mathematics, a concrete category is a category that is equipped with a faithful functor to the category of sets (or sometimes to another category). This functor makes it possible to think of the objects of the category as sets with additional structure, and of its morphisms as structure-preserving functions. Many important categories have obvious interpretations as concrete categories, for example the category of topological spaces and the category of groups, and trivially also the category of sets itself. On the other hand, the homotopy category of topological spaces is not concretizable, i.e. it does not admit a faithful functor to the category of sets.
A concrete category, when defined without reference to the notion of a category, consists of a class of objects, each equipped with an underlying set; and for any two objects A and B a set of functions, called homomorphisms, from the underlying set of A to the underlying set of B. Furthermore, for every object A, the identity function on the underlying set of A must be a homomorphism from A to A, and the composition of a homomorphism from A to B followed by a homomorphism from B to C must be a homomorphism from A to C.
Definition
A concrete category is a pair (C,U) such that
C is a category, and
U : C → Set (the category of sets and functions) is a faithful functor.
The functor U is to be thought of as a forgetful functor, which assigns to every object of C its "underlying set", and to every morphism in C its "underlying function".
It is customary to call the morphisms in a concrete category homomorphisms (e.g., group homomorphisms, ring homomorphisms, etc.) Because of the faithfulness of the functor U, the homomorphisms of a concrete category may be formally identified with their underlying functions (i.e., their images under U); the homomorphisms then regain the usual interpretation as "structure-preserving" functions.
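Spelled out, the faithfulness of U says that for any two objects A and B of C, parallel morphisms with the same underlying function are equal:

```latex
\forall f, g \in \operatorname{Hom}_{C}(A, B):\quad U(f) = U(g) \implies f = g .
```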
A category C is concretizable if there exists a concrete category (C,U);
i.e., if there exists a faithful functor U: C → Set. All small categories are concretizable: define U so that its object part maps each object b of C to the set of all morphisms of C whose codomain is b (i.e. all morphisms of the form f: a → b for any object a of C), and its morphism part maps each morphism g: b → c of C to the function U(g): U(b) → U(c) which maps each member f: a → b of U(b) to the composition gf: a → c, a member of U(c). (Item 6 under Further examples expresses the same U in less elementary language via presheaves.) The Counter-examples section exhibits two large categories that are not concretizable.
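In symbols, the construction just described reads:

```latex
U(b) = \{\, f \in \operatorname{Mor}(C) \mid \operatorname{cod}(f) = b \,\},
\qquad
U(g)(f) = g \circ f \quad \text{for } g : b \to c,\ f \in U(b).
```

Faithfulness of this U follows since U(g)(id_b) = g: the function U(g) determines g by evaluation at the identity morphism of b.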
Remarks
Contrary to intuition, concreteness is not a property that a category may or may not satisfy, but rather a structure with which a category may or may not be equipped. In particular, a category C may admit several faithful functors into Set. Hence there may be several concrete categories (C, U) all corresponding to the same category C.
In practice, however, the choice of faithful functor is often clear and in this case we simply speak of the "concrete category C". For example, "the concrete category Set" means the pair (Set, I) where I denotes the identity functor Set → Set.
The requirement that U be faithful means that it maps different morphisms between the same objects to different functions. However, U may map different objects to the same set and, if this occurs, it will also map different morphisms to the same function.
For example, if S and T are two different topologies on the same set X, then (X, S) and (X, T) are distinct objects in the category Top of topological spaces and continuous maps, but they are mapped to the same set X by the forgetful functor Top → Set. Moreover, the identity morphism (X, S) → (X, S) and the identity morphism (X, T) → (X, T) are considered distinct morphisms in Top, but they have the same underlying function, namely the identity function on X.
Similarly, any set with four elements can be given two non-isomorphic group structures: one isomorphic to the cyclic group Z/4Z, and the other isomorphic to the Klein four-group Z/2Z × Z/2Z.
Further examples
Any group G may be regarded as an "abstract" category with a single arbitrary object and one morphism for each element of the group. This would not be counted as concrete according to the intuitive notion described at the top of this article. But every faithful G-set (equivalently, every representation of G as a group of permutations) determines a faithful functor G → Set. Since every group acts faithfully on itself, G can be made into a concrete category in at least one way.
Similarly, any poset P may be regarded as an abstract category with a unique arrow x → y whenever x ≤ y. This can be made concrete by defining a functor D : P → Set which maps each object x to its down-set D(x) = {a ∈ P : a ≤ x} and each arrow x → y to the inclusion map D(x) → D(y).
The category Rel whose objects are sets and whose morphisms are relations can be made concrete by taking U to map each set X to its power set and each relation R ⊆ X × Y to the function U(R) : P(X) → P(Y) defined by U(R)(A) = {y ∈ Y : xRy for some x ∈ A}. Noting that power sets are complete lattices under inclusion, those functions between them arising from some relation R in this way are exactly the supremum-preserving maps. Hence Rel is equivalent to a full subcategory of the category Sup of complete lattices and their sup-preserving maps. Conversely, starting from this equivalence we can recover U as the composite Rel → Sup → Set of the forgetful functor for Sup with this embedding of Rel in Sup.
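A quick sketch of this functor in Python (names invented for illustration): a relation, stored as a set of pairs, becomes a union-preserving map on subsets.

def U(R):
    # Turn a relation R ⊆ X × Y (a set of (x, y) pairs) into the
    # function P(X) -> P(Y) sending A to {y : xRy for some x in A}.
    return lambda A: {y for (x, y) in R if x in A}

R = {(1, 'a'), (1, 'b'), (2, 'b')}
f = U(R)
print(f({1}))                        # {'a', 'b'}
print(f({1, 2}) == f({1}) | f({2}))  # True: unions (suprema) are preserved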
The category Setop can be embedded into Rel by representing each set as itself and each function f: X → Y as the relation from Y to X formed as the set of pairs (f(x), x) for all x ∈ X; hence Setop is concretizable. The forgetful functor which arises in this way is the contravariant powerset functor Setop → Set.
It follows from the previous example that the opposite of any concretizable category C is again concretizable, since if U is a faithful functor C → Set then Cop may be equipped with the composite Cop → Setop → Set.
If C is any small category, then there exists a faithful functor P : Set^(C^op) → Set which maps a presheaf X to the coproduct of its components, that is, the disjoint union of the sets X(A) over all objects A of C. By composing this with the Yoneda embedding Y : C → Set^(C^op) one obtains a faithful functor C → Set.
For technical reasons, the category Ban1 of Banach spaces and linear contractions is often equipped not with the "obvious" forgetful functor but the functor U1 : Ban1 → Set which maps a Banach space to its (closed) unit ball.
The category Cat whose objects are small categories and whose morphisms are functors can be made concrete by sending each category C to the set containing its objects and morphisms. Functors can be simply viewed as functions acting on the objects and morphisms.
Counter-examples
The category hTop, where the objects are topological spaces and the morphisms are homotopy classes of continuous functions, is an example of a category that is not concretizable.
While the objects are sets (with additional structure), the morphisms are not actual functions between them, but rather classes of functions.
The fact that there does not exist any faithful functor from hTop to Set was first proven by Peter Freyd.
In the same article, Freyd cites an earlier result that the category of "small categories and natural equivalence-classes of functors" also fails to be concretizable.
Implicit structure of concrete categories
Given a concrete category (C, U) and a cardinal number N, let U^N be the functor C → Set determined by U^N(c) = (U(c))^N. Then a subfunctor of U^N is called an N-ary predicate and a natural transformation U^N → U an N-ary operation.
The class of all N-ary predicates and N-ary operations of a concrete category (C,U), with N ranging over the class of all cardinal numbers, forms a large signature. The category of models for this signature then contains a full subcategory which is equivalent to C.
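As an illustration (an assumed example, not spelled out above): in the concrete category of groups with its usual underlying-set functor U, multiplication is a 2-ary operation, i.e. a natural transformation μ : U^2 → U. In LaTeX notation, naturality at a homomorphism f : G → H reads:

\[
\begin{array}{ccc}
U(G)^2 & \xrightarrow{\;\mu_G\;} & U(G) \\
\big\downarrow{\scriptstyle U(f)^2} & & \big\downarrow{\scriptstyle U(f)} \\
U(H)^2 & \xrightarrow{\;\mu_H\;} & U(H)
\end{array}
\qquad\text{i.e.}\qquad
\mu_H\bigl(f(x),\,f(y)\bigr) = f\bigl(\mu_G(x,y)\bigr),
\]

which is precisely the statement that every group homomorphism preserves multiplication.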
Relative concreteness
In some parts of category theory, most notably topos theory, it is common to replace the category Set with a different category X, often called a base category.
For this reason, it makes sense to call a pair (C, U) where C is a category and U a faithful functor C → X a concrete category over X.
For example, it may be useful to think of the models of a theory with N sorts as forming a concrete category over Set^N.
In this context, a concrete category over Set is sometimes called a construct.
| Mathematics | Category theory | null |
167079 | https://en.wikipedia.org/wiki/Smartphone | Smartphone | A smartphone is a mobile device that combines the functionality of a traditional mobile phone with advanced computing capabilities. It typically has a touchscreen interface, allowing users to access a wide range of applications and services, such as web browsing, email, and social media, as well as multimedia playback and streaming. Smartphones have built-in cameras, GPS navigation, and support for various communication methods, including voice calls, text messaging, and internet-based messaging apps.
Smartphones are distinguished from older-design feature phones by their more advanced hardware capabilities and extensive mobile operating systems, access to the internet, business applications, mobile payments, and multimedia functionality, including music, video, gaming, radio, and television.
Smartphones typically feature metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, various sensors, and support for multiple wireless communication protocols. These devices leverage sensors such as accelerometers, barometers, gyroscopes, and magnetometers, which can be used by both pre-installed and third-party software to enhance functionality. In addition, smartphones are equipped to support a variety of wireless communication standards, including LTE, 5G NR, Wi-Fi, Bluetooth, and satellite navigation. By the mid-2020s, manufacturers began integrating satellite messaging and emergency services, expanding their utility in remote areas without reliable cellular coverage.
Following the rising popularity of the iPhone in the late 2000s, the majority of smartphones have featured thin, slate-like form factors with large, capacitive touch screens with support for multi-touch gestures rather than physical keyboards. Most modern smartphones have the ability for users to download or purchase additional applications from a centralized app store. They often have support for cloud storage and cloud synchronization, and virtual assistants.
Smartphones have largely replaced personal digital assistant (PDA) devices, handheld/palm-sized PCs, portable media players (PMP), point-and-shoot cameras, camcorders, and, to a lesser extent, handheld video game consoles, e-reader devices, pocket calculators, and GPS tracking units.
Since the early 2010s, improved hardware and faster wireless communication have bolstered the growth of the smartphone industry. Over a billion smartphones are now sold globally every year; in 2019 alone, 1.54 billion smartphone units were shipped worldwide. Smartphone users have been estimated at 75.05 percent of the world population.
History
Early smartphones were marketed primarily towards the enterprise market, attempting to bridge the functionality of standalone PDA devices with support for cellular telephony, but were limited by their bulky form, short battery life, slow analog cellular networks, and the immaturity of wireless data services. These issues were eventually resolved with the exponential scaling and miniaturization of MOS transistors down to sub-micron levels (Moore's law), the improved lithium-ion battery, faster digital mobile data networks (Edholm's law), and more mature software platforms that allowed mobile device ecosystems to develop independently of data providers.
In the 2000s, NTT DoCoMo's i-mode platform, BlackBerry, Nokia's Symbian platform, and Windows Mobile began to gain market traction, with models often featuring QWERTY keyboards or resistive touchscreen input and emphasizing access to push email and wireless internet.
Forerunner
In the early 1990s, IBM engineer Frank Canova realised that chip-and-wireless technology was becoming small enough to use in handheld devices. The first commercially available device that could be properly referred to as a "smartphone" began as a prototype called "Angler" developed by Canova in 1992 while at IBM and demonstrated in November of that year at the COMDEX computer industry trade show. A refined version was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator. In addition to placing and receiving cellular calls, the touchscreen-equipped Simon could send and receive faxes and emails. It included an address book, calendar, appointment scheduler, calculator, world time clock, and notepad, as well as other visionary mobile applications such as maps, stock reports and news.
The IBM Simon was manufactured by Mitsubishi Electric, which integrated features with its own cellular radio technologies. It featured a liquid-crystal display (LCD) and PC Card support. The Simon was commercially unsuccessful, particularly due to its bulky form factor and limited battery life, using NiCad batteries rather than the nickel–metal hydride batteries commonly used in mobile phones in the 1990s, or lithium-ion batteries used in modern smartphones.
The term "smart phone" (in two words) was not coined until a year after the introduction of the Simon, appearing in print as early as 1995, describing AT&T's PhoneWriter Communicator.
The term "smartphone" (as one word) was first used by Ericsson in 1997 to describe a new device concept, the GS88.
PDA/phone hybrids
Beginning in the mid-to-late 1990s, many people who had mobile phones carried a separate dedicated PDA device, running early versions of operating systems such as Palm OS, Newton OS, Symbian or Windows CE/Pocket PC. These operating systems would later evolve into early mobile operating systems. Most of the "smartphones" in this era were hybrid devices that combined these existing familiar PDA OSes with basic phone hardware. The results were devices that were bulkier than either dedicated mobile phones or PDAs, but allowed a limited amount of cellular Internet access. PDA and mobile phone manufacturers competed in reducing the size of devices. The bulk of these smartphones combined with their high cost and expensive data plans, plus other drawbacks such as expansion limitations and decreased battery life compared to separate standalone devices, generally limited their popularity to "early adopters" and business users who needed portable connectivity.
In March 1996, Hewlett-Packard released the OmniGo 700LX, a modified HP 200LX palmtop PC with a Nokia 2110 mobile phone piggybacked onto it and ROM-based software to support it. It had a 640 × 200 resolution CGA compatible four-shade gray-scale LCD screen and could be used to place and receive calls, and to create and receive text messages, emails and faxes. It was also 100% DOS 5.0 compatible, allowing it to run thousands of existing software titles, including early versions of Windows.
In August 1996, Nokia released the Nokia 9000 Communicator, a digital cellular PDA based on the Nokia 2110 with an integrated system based on the PEN/GEOS 3.0 operating system from Geoworks. The two components were attached by a hinge in what became known as a clamshell design, with the display above and a physical QWERTY keyboard below. The PDA provided e-mail; calendar, address book, calculator and notebook applications; text-based Web browsing; and could send and receive faxes. When closed, the device could be used as a digital cellular telephone.
In June 1999, Qualcomm released the "pdQ Smartphone", a CDMA digital PCS smartphone with an integrated Palm PDA and Internet connectivity.
Subsequent landmark devices included:
The Ericsson R380 (December 2000) by Ericsson Mobile Communications, the first phone running the operating system later named Symbian (it ran EPOC Release 5, which was renamed Symbian OS at Release 6). It had PDA functionality and limited Web browsing on a resistive touchscreen utilizing a stylus. While it was marketed as a "smartphone", users could not install their own software on the device.
The Kyocera 6035 (February 2001), a dual-nature device with a separate Palm OS PDA operating system and CDMA mobile phone firmware. It supported limited Web browsing with the PDA software treating the phone hardware as an attached modem.
The Nokia 9210 Communicator (June 2001), the first phone running Symbian (Release 6) with Nokia's Series 80 platform (v1.0). This was the first Symbian phone platform allowing the installation of additional applications. Like the Nokia 9000 Communicator, it is a large clamshell device with a full physical QWERTY keyboard inside.
Handspring's Treo 180 (2002), the first smartphone that fully integrated the Palm OS on a GSM mobile phone having telephony, SMS messaging and Internet access built into the OS. The 180 model had a thumb-type keyboard and the 180g version had a Graffiti handwriting recognition area, instead.
Japanese cell phones
In 1999, Japanese wireless provider NTT DoCoMo launched i-mode, a new mobile internet platform which provided data transmission speeds up to 9.6 kilobits per second, and access to web services such as online shopping. NTT DoCoMo's i-mode used cHTML, a language which restricted some aspects of traditional HTML in favor of increasing data speed for the devices. The limited functionality, small screens and limited bandwidth allowed phones to make do with the slower data speeds available. The rise of i-mode helped NTT DoCoMo accumulate an estimated 40 million subscribers by the end of 2001, and the company ranked first in market capitalization in Japan and second globally.
Japanese cell phones increasingly diverged from global standards and trends to offer other forms of advanced services and smartphone-like functionality specifically tailored to the Japanese market. Examples include mobile payments and shopping; near-field communication (NFC), allowing mobile wallet functionality to replace smart cards for transit fares, loyalty cards, identity cards, event tickets, coupons, money transfer, etc.; downloadable content like musical ringtones, games, and comics; and 1seg mobile television. Phones built by Japanese manufacturers used custom firmware, however, and did not yet feature standardized mobile operating systems designed to cater to third-party application development, so their software and ecosystems were akin to very advanced feature phones. As with other feature phones, additional software and services required partnerships and deals with providers.
The degree of integration between phones and carriers, unique phone features, non-standardized platforms, and tailoring to Japanese culture made it difficult for Japanese manufacturers to export their phones, especially when demand was so high in Japan that the companies did not feel the need to look elsewhere for additional profits.
The rise of 3G technology in other markets and non-Japanese phones with powerful standardized smartphone operating systems, app stores, and advanced wireless network capabilities allowed non-Japanese phone manufacturers to finally break into the Japanese market, gradually adopting Japanese phone features like emojis, mobile payments, NFC, etc., and spreading them to the rest of the world.
Early smartphones
Phones that made effective use of any significant data connectivity were still rare outside Japan until the introduction of the Danger Hiptop in 2002, which saw moderate success among U.S. consumers as the T-Mobile Sidekick. Later, in the mid-2000s, business users in the U.S. started to adopt devices based on Microsoft's Windows Mobile, and then BlackBerry smartphones from Research In Motion. American users popularized the term "CrackBerry" in 2006 due to the BlackBerry's addictive nature. In the U.S., the high cost of data plans and relative rarity of devices with Wi-Fi capabilities that could avoid cellular data network usage kept adoption of smartphones mainly to business professionals and "early adopters."
Outside the U.S. and Japan, Nokia was seeing success with its smartphones based on Symbian, originally developed by Psion for their personal organisers, and it was the most popular smartphone OS in Europe during the middle to late 2000s. Initially, Nokia's Symbian smartphones were focused on business with the Eseries, similar to Windows Mobile and BlackBerry devices at the time. From 2002 onwards, Nokia started producing consumer-focused smartphones, popularized by the entertainment-focused Nseries. Until 2010, Symbian was the world's most widely used smartphone operating system.
The touchscreen personal digital assistant (PDA)-derived nature of adapted operating systems like Palm OS, the "Pocket PC" versions of what was later Windows Mobile, and the UIQ interface that was originally designed for pen-based PDAs on Symbian OS devices resulted in some early smartphones having stylus-based interfaces. These allowed for virtual keyboards and handwriting input, thus also allowing easy entry of Asian characters.
By the mid-2000s, the majority of smartphones had a physical QWERTY keyboard. Most used a "keyboard bar" form factor, like the BlackBerry line, Windows Mobile smartphones, Palm Treos, and some of the Nokia Eseries. A few hid their full physical QWERTY keyboard in a sliding form factor, like the Danger Hiptop line. Some even had only a numeric keypad using T9 text input, like the Nokia Nseries and other models in the Nokia Eseries. Resistive touchscreens with stylus-based interfaces could still be found on a few smartphones, like the Palm Treos, which had dropped their handwriting input after a few early models that were available in versions with Graffiti instead of a keyboard.
Form factor and operating system shifts
The late 2000s and early 2010s saw a shift in smartphone interfaces away from devices with physical keyboards and keypads to ones with large finger-operated capacitive touchscreens. The first phone of any kind with a large capacitive touchscreen was the LG Prada, announced by LG in December 2006. This was a fashionable feature phone created in collaboration with Italian luxury designer Prada with a 3" 240 x 400 pixel screen, a 2-Megapixel digital camera with 144p video recording ability, an LED flash, and a miniature mirror for self portraits.
In January 2007, Apple Computer introduced the iPhone. It had a 3.5" capacitive touchscreen with twice the common resolution of most smartphone screens at the time, and introduced multi-touch to phones, which allowed gestures such as "pinching" to zoom in or out on photos, maps, and web pages. The iPhone was notable as being the first device of its kind targeted at the mass market to abandon the use of a stylus, keyboard, or keypad typical of contemporary smartphones, instead using a large touchscreen for direct finger input as its main means of interaction.
The iPhone's operating system was also a shift away from older systems adapted from PDAs and feature phones, to an operating system powerful enough to avoid using a limited, stripped-down web browser that could only render pages specially formatted using technologies such as WML, cHTML, or XHTML; it instead ran a version of Apple's Safari browser that could render full websites not specifically designed for mobile phones.
Later Apple shipped a software update that gave the iPhone a built-in on-device App Store allowing direct wireless downloads of third-party software. This kind of centralized App Store and free developer tools quickly became the new main paradigm for all smartphone platforms for software development, distribution, discovery, installation, and payment, in place of expensive developer tools that required official approval to use and a dependence on third-party sources providing applications for multiple platforms.
The advantages of a design with software powerful enough to support advanced applications and a large capacitive touchscreen affected the development of another smartphone OS platform, Android, with a more BlackBerry-like prototype device scrapped in favor of a touchscreen device with a slide-out physical keyboard, as Google's engineers thought at the time that a touchscreen could not completely replace a physical keyboard and buttons. Android is based around a modified Linux kernel, again providing more power than mobile operating systems adapted from PDAs and feature phones. The first Android device, the horizontal-sliding HTC Dream, was released in September 2008.
In 2012, Asus started experimenting with a convertible docking system named PadFone, where the standalone handset can when necessary be inserted into a tablet-sized screen unit with integrated supportive battery and used as such.
In 2013 and 2014, Samsung experimented with the hybrid combination of compact camera and smartphone, releasing the Galaxy S4 Zoom and K Zoom, each equipped with an integrated 10× optical zoom lens and manual parameter settings (including manual exposure and focus) years before these were widely adopted among smartphones. The S4 Zoom additionally has a rotary knob ring around the lens and a tripod mount.
While screen sizes have increased, manufacturers have attempted to make smartphones thinner at the expense of utility and sturdiness, since a thinner frame is more vulnerable to bending and has less space for components, namely battery capacity.
Operating system competition
The iPhone and later touchscreen-only Android devices together popularized the slate form factor, based on a large capacitive touchscreen as the sole means of interaction, and led to the decline of earlier, keyboard- and keypad-focused platforms. Later, navigation keys such as the home, back, menu, task and search buttons were increasingly replaced by nonphysical touch keys, and then by virtual, simulated on-screen navigation keys, commonly with access combinations such as a long press of the task key to simulate a short menu key press, or of the home button to trigger search. More recent "bezel-less" types have their screen surface space extended to the unit's front bottom to compensate for the display area lost to simulating the navigation keys. While virtual keys offer more potential customizability, their location may be inconsistent among systems, depending on screen rotation and the software used.
Multiple vendors attempted to update or replace their existing smartphone platforms and devices to better-compete with Android and the iPhone; Palm unveiled a new platform known as webOS for its Palm Pre in late-2009 to replace Palm OS, which featured a focus on a task-based "card" metaphor and seamless synchronization and integration between various online services (as opposed to the then-conventional concept of a smartphone needing a PC to serve as a "canonical, authoritative repository" for user data). HP acquired Palm in 2010 and released several other webOS devices, including the Pre 3 and HP TouchPad tablet. As part of a proposed divestment of its consumer business to focus on enterprise software, HP abruptly ended development of future webOS devices in August 2011, and sold the rights to webOS to LG Electronics in 2013, for use as a smart TV platform.
Research in Motion introduced the vertical-sliding BlackBerry Torch and BlackBerry OS 6 in 2010, which featured a redesigned user interface, support for gestures such as pinch-to-zoom, and a new web browser based on the same WebKit rendering engine used by the iPhone. The following year, RIM released BlackBerry OS 7 and new models in the Bold and Torch ranges, which included a new Bold with a touchscreen alongside its keyboard, and the Torch 9860—the first BlackBerry phone to not include a physical keyboard. In 2013, it replaced the legacy BlackBerry OS with a revamped, QNX-based platform known as BlackBerry 10, with the all-touch BlackBerry Z10 and keyboard-equipped Q10 as launch devices.
In 2010, Microsoft unveiled a replacement for Windows Mobile known as Windows Phone, featuring a new touchscreen-centric user interface built around flat design and typography, a home screen with "live tiles" containing feeds of updates from apps, as well as integrated Microsoft Office apps. In February 2011, Nokia announced that it had entered into a major partnership with Microsoft, under which it would exclusively use Windows Phone on all of its future smartphones, and integrate Microsoft's Bing search engine and Bing Maps (which, as part of the partnership, would also license Nokia Maps data) into all future devices. The announcement led to the abandonment of both Symbian, as well as MeeGo—a Linux-based mobile platform it was co-developing with Intel. Nokia's low-end Lumia 520 saw strong demand and helped Windows Phone gain niche popularity in some markets, overtaking BlackBerry in global market share in 2013.
In mid-June 2012, Meizu released its mobile operating system, Flyme OS.
Many of these attempts to compete with Android and iPhone were short-lived. Over the course of the decade, the two platforms became a clear duopoly in smartphone sales and market share, with BlackBerry, Windows Phone, and other operating systems eventually stagnating to little or no measurable market share. In 2015, BlackBerry began to pivot away from its in-house mobile platforms in favor of producing Android devices, focusing on a security-enhanced distribution of the software. The following year, the company announced that it would also exit the hardware market to focus more on software and its enterprise middleware, and began to license the BlackBerry brand and its Android distribution to third-party OEMs such as TCL for future devices.
In September 2013, Microsoft announced its intent to acquire Nokia's mobile device business for $7.1 billion, as part of a strategy under CEO Steve Ballmer for Microsoft to be a "devices and services" company. Despite the growth of Windows Phone and the Lumia range (which accounted for nearly 90% of all Windows Phone devices sold), the platform never had significant market share in the key U.S. market, and Microsoft was unable to maintain Windows Phone's momentum in the years that followed, resulting in dwindling interest from users and app developers. After Ballmer was succeeded by Satya Nadella (who placed a larger focus on software and cloud computing) as CEO of Microsoft, the company took a $7.6 billion write-off on the Nokia assets in July 2015, and laid off nearly the entire Microsoft Mobile unit in May 2016.
Prior to the completion of the sale to Microsoft, Nokia released a series of Android-derived smartphones for emerging markets known as Nokia X, which combined an Android-based platform with elements of Windows Phone and Nokia's feature phone platform Asha, using Microsoft and Nokia services rather than Google.
Camera advancements
The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999. It was called a "mobile videophone" at the time, and had a 110,000-pixel front-facing camera. It could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network, and store up to 20 JPEG digital images, which could be sent over e-mail. The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication.
By the mid-2000s, higher-end cell phones commonly had integrated digital cameras. In 2003 camera phones outsold stand-alone digital cameras, and in 2006 they outsold film and digital stand-alone cameras. Five billion camera phones were sold in five years, and by 2007 more than half of the installed base of all mobile phones were camera phones. Sales of separate cameras peaked in 2008.
Many early smartphones did not have cameras at all, and earlier models that had them suffered from low performance and image and video quality insufficient to compete with budget pocket cameras or fulfill users' needs. By the beginning of the 2010s almost all smartphones had an integrated digital camera. The decline in sales of stand-alone cameras accelerated due to the increasing use of smartphones with rapidly improving camera technology for casual photography, easier image manipulation, and the ability to directly share photos through apps and web-based services. By 2011, cell phones with integrated cameras were selling hundreds of millions per year. In 2015, digital camera sales were 35.395 million units, less than a third of the number sold at their peak and also slightly less than film camera sales at their peak.
Contributing to the preference for smartphones over dedicated cameras for casual photography, smaller pocket cameras have difficulty producing bokeh in images, whereas some smartphones now have dual-lens cameras that reproduce the bokeh effect easily, and can even readjust the level of bokeh after shooting. This works by capturing multiple images with different focus settings, then combining the background of the main image with a macro focus shot.
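As a toy illustration of software bokeh, here is a sketch under assumptions: it uses a per-pixel depth map, one common way dual-lens data is exploited, rather than the focus-stacking pipeline described above or any specific vendor's method, and all names are invented.

import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, depth, focus_depth, tolerance=0.1, blur_sigma=5.0):
    # image: H x W x 3 float array; depth: H x W array (e.g. estimated
    # from a second lens). Pixels near the chosen focus depth stay
    # sharp; everything else is composited from a blurred copy.
    blurred = gaussian_filter(image, sigma=(blur_sigma, blur_sigma, 0))
    mask = (np.abs(depth - focus_depth) < tolerance).astype(image.dtype)
    mask = mask[..., None]          # broadcast over the colour channels
    return mask * image + (1.0 - mask) * blurred

Re-running this with a different focus_depth is what lets such phones "readjust" the bokeh after shooting.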
In 2007, the Nokia N95 was notable as a smartphone that had a 5.0 Megapixel (MP) camera, when most others had cameras with around 3 MP or less than 2 MP. Some specialized feature phones like the LG Viewty, Samsung SGH-G800, and Sony Ericsson K850i, all released later that year, also had 5.0 MP cameras. By 2010, 5.0 MP cameras were common; a few smartphones had 8.0 MP cameras and the Nokia N8, Sony Ericsson Satio, and Samsung M8910 Pixon12 feature phone had 12 MP. The main camera of the 2009 Nokia N86 uniquely features a three-level aperture lens.
The Altek Leo, a 14-megapixel smartphone with 3x optical zoom lens and 720p HD video camera was released in late 2010.
In 2011, the same year the Nintendo 3DS was released, HTC unveiled the Evo 3D, a 3D phone with a dual five-megapixel rear camera setup for spatial imaging, among the earliest mobile phones with more than one rear camera.
The 2012 Samsung Galaxy S3 introduced the ability to capture photos using voice commands.
In 2012, Nokia announced and released the Nokia 808 PureView, featuring a 41-megapixel 1/1.2-inch sensor and a high-resolution f/2.4 Zeiss all-aspherical one-group lens. The high resolution enables fourfold lossless digital zoom at 1080p and sixfold at 720p resolution, using image sensor cropping. The 2013 Nokia Lumia 1020 has a similar high-resolution camera setup, with the addition of optical image stabilization and manual camera settings, years before these were common among high-end mobile phones, although it lacks expandable storage, which could be of use for the accordingly high file sizes.
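The zoom figures follow directly from the sensor-cropping arithmetic. A small sketch (the 7728-pixel sensor width matches the published 808 PureView specification, but treat the exact numbers as assumptions):

def max_lossless_zoom(sensor_width_px, output_width_px):
    # Cropping a central window of the sensor down to the output width
    # magnifies the image without interpolation, hence "lossless".
    return sensor_width_px / output_width_px

print(round(max_lossless_zoom(7728, 1920), 1))  # ~4.0x at 1080p
print(round(max_lossless_zoom(7728, 1280), 1))  # ~6.0x at 720p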
Mobile optical image stabilization was first introduced by Nokia in 2012 with the Lumia 920, and the earliest known smartphone with an optically stabilized front camera is the HTC 10 from 2016. Optical image stabilization enables prolonged exposure times for low-light photography and smoothing out handheld video shaking, since the appearance of shakes magnifies over a larger display such as a monitor or television set, which would be detrimental to the watching experience.
Since 2012, smartphones have become increasingly able to capture photos while filming. The resolution of those photos may vary between devices. Samsung has used the highest image sensor resolution at the video's aspect ratio, which at 16:9 is 6 megapixels (3264 × 1836) on the Galaxy S3 and 9.6 megapixels (4128 × 2322) on the Galaxy S4. The earliest iPhones with such functionality, the iPhone 5 and 5s, captured simultaneous photos at 0.9 megapixels (1280 × 720) while filming.
Starting in 2013 on the Xperia Z1, Sony experimented with real-time augmented reality camera effects, such as floating text, virtual plants, a volcano, and a dinosaur walking in the scenery. Apple did similarly in 2017 with the iPhone X.
Also in 2013, iOS 7 introduced a viewfinder interaction that was later widely implemented, where the exposure value can be adjusted through vertical swiping after focus and exposure have been set by tapping, and can even remain locked after holding down for a brief moment. On some devices, this interaction may be restricted by software in video/slow motion modes and for the front camera.
In 2013, Samsung unveiled the Galaxy S4 Zoom smartphone with the grip shape of a compact camera and a 10× optical zoom lens, as well as a rotary knob ring around the lens, as used on higher-end compact cameras, and an ISO 1222 tripod mount. It is equipped with manual parameter settings, including for focus and exposure. The successor 2014 Samsung Galaxy K Zoom brought resolution and performance enhancements, but lacks the rotary knob and tripod mount to allow for a more smartphone-like shape with less protruding lens.
The 2014 Panasonic Lumix DMC-CM1 was another attempt at mixing mobile phone with compact camera, so much so that it inherited the Lumix brand. While lacking optical zoom, its image sensor has a 1" format, as used in high-end compact cameras such as the Lumix DMC-LX100 and Sony CyberShot DSC-RX100 series, with multiple times the surface size of a typical mobile camera image sensor, as well as support for light sensitivities of up to ISO 25600, well beyond the typical mobile camera light sensitivity range. No successor has been released.
In 2013 and 2014, HTC experimentally traded pixel count for pixel surface size on their One M7 and M8, both with only four megapixels, marketed as UltraPixel, citing improved brightness and less noise in low light, though the more recent One M8 lacks optical image stabilization.
The One M8 additionally was one of the earliest smartphones to be equipped with a dual camera setup. Its software allows generating visual spatial effects such as 3D panning, weather effects, and focus adjustment ("UFocus"), simulating the postphotographic selective focusing capability of images produced by a light-field camera. HTC returned to a high-megapixel single-camera setup on the 2015 One M9.
Meanwhile, in 2014, LG Mobile started experimenting with time-of-flight camera functionality, where a rear laser beam that measures distance accelerates autofocus.
Phase-detection autofocus was increasingly adopted throughout the mid-2010s, allowing for quicker and more accurate focusing than contrast detection.
In 2016, Apple introduced the iPhone 7 Plus, one of the phones to popularize a dual camera setup. The iPhone 7 Plus included a main 12 MP camera along with a 12 MP telephoto camera. In early 2018 Huawei released a new flagship phone, the Huawei P20 Pro, one of the first triple camera lens setups with Leica optics. In late 2018, Samsung released a new mid-range smartphone, the Galaxy A9 (2018) with the world's first quad camera setup. The Nokia 9 PureView was released in 2019 featuring a penta-lens camera system.
2019 saw the commercialization of high-resolution sensors, which use pixel binning to capture more light. 48 MP and 64 MP sensors developed by Sony and Samsung are commonly used by several manufacturers. 108 MP sensors were first implemented in late 2019 and early 2020.
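A minimal sketch of how 2×2 pixel binning works (illustrative only, not any vendor's pipeline):

import numpy as np

def bin_2x2(raw):
    # raw: H x W sensor frame with even H and W. Each 2x2 block of
    # photosites is summed into one output pixel, quartering the pixel
    # count while gathering roughly four times the light per pixel.
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.arange(16, dtype=np.uint16).reshape(4, 4)
print(bin_2x2(frame).shape)   # (2, 2): e.g. a 48 MP readout becomes 12 MP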
Video resolution
As chipsets grew powerful enough to handle the computing workload of higher pixel rates, mobile video resolution and framerate caught up with dedicated consumer-grade cameras over the years.
In 2009, the Samsung Omnia HD became the first mobile phone with 720p HD video recording. In the same year, Apple brought video recording initially to the iPhone 3GS, at 480p, whereas the 2007 original iPhone and 2008 iPhone 3G lacked video recording entirely.
720p was more widely adopted in 2010, on smartphones such as the original Samsung Galaxy S, Sony Ericsson Xperia X10, iPhone 4, and HTC Desire HD.
The early 2010s brought a steep increase in mobile video resolution. 1080p mobile video recording was achieved in 2011 on the Samsung Galaxy S2, HTC Sensation, and iPhone 4s.
In 2012 and 2013, select devices supporting 720p filming at 60 frames per second were released, such as the Asus PadFone 2 and HTC One M7, unlike the flagships of Samsung, Sony, and Apple. The 2013 Samsung Galaxy S4 Zoom, however, does support it.
In 2013, the Samsung Galaxy Note 3 introduced 2160p (4K) video recording at 30 frames per second, as well as 1080p doubled to 60 frames per second for smoothness.
Other vendors adopted 2160p recording in 2014, including the optically stabilized LG G3. Apple first implemented it in late 2015 on the iPhone 6s and 6s Plus.
The framerate at 2160p was widely doubled to 60 in 2017 and 2018, starting with the iPhone 8, Galaxy S9, LG G7, and OnePlus 6.
Sufficient computing performance of chipsets, together with image sensor resolution and read-out speeds, enabled mobile 4320p (8K) filming in 2020, introduced with the Samsung Galaxy S20 and Redmi K30 Pro, though some intermediate resolution levels were skipped along the way, including 1440p (2.5K), 2880p (5K), and 3240p (6K), with the exception of 1440p on Samsung Galaxy front cameras.
Mid-class
Among mid-range smartphone series, the introduction of higher video resolutions was initially delayed by two to three years compared to flagship counterparts. 720p was widely adopted in 2012, including on the Samsung Galaxy S3 Mini and Sony Xperia go, and 1080p followed in 2013 on the Samsung Galaxy S4 Mini and HTC One mini.
The proliferation of video resolutions beyond 1080p was postponed by several years. The mid-class Sony Xperia M5 supported 2160p filming in 2016, whereas Samsung's mid-class series such as the Galaxy J and A series were strictly limited to 1080p resolution, and to 30 frames per second at any resolution, for six years until around 2019; whether and to what extent this was for technical reasons is unclear.
Setting
A lower video resolution setting may be desirable to extend recording time by reducing storage space and power consumption.
The camera software of some smartphones is equipped with separate controls for resolution, frame rate, and bit rate. An example of a smartphone with these controls is the LG V10.
Slow motion video
A distinction between different camera software is the method used to store high frame rate video footage: more recent phones retain both the image sensor's original output frame rate and the audio, while earlier phones record no audio and stretch the video so it plays back slowly at default speed.
While the stretched encoding used on earlier phones enables slow motion playback on video player software that lacks manual playback speed control, typically found on older devices, the real-time method used by more recent phones offers greater versatility for video editing if the aim is a slow motion effect: slowed-down portions of the footage can be freely selected by the user and exported into a separate video. Rudimentary video editing software for this purpose is usually pre-installed, and the video can optionally be played back at normal (real-time) speed, acting as usual video.
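The difference between the two storage methods can be put in numbers. A small sketch with hypothetical figures (120 fps sensor output written into a 30 fps container):

def stretched_playback(sensor_fps, container_fps, real_seconds):
    # Stretched encoding: frames captured at sensor_fps are written as
    # if they were container_fps, so the clip plays back slowed down.
    speed = container_fps / sensor_fps            # fraction of real time
    duration = real_seconds * sensor_fps / container_fps
    return speed, duration

print(stretched_playback(120, 30, 2))   # (0.25, 8.0): quarter speed, 8 s clip
# Real-time encoding instead stores the clip at 120 fps with audio; the
# slowdown is applied later, only to user-selected portions.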
Development
The earliest smartphone known to feature a slow motion mode is the 2009 Samsung i8000 Omnia II, which can record at QVGA (320×240) at 120 fps (frames per second). Slow motion is not available on the Galaxy S1, Galaxy S2, Galaxy Note 1, and Galaxy S3 flagships.
In early 2012, the HTC One X allowed 768×432 pixel slow motion filming at an undocumented frame rate. The output footage has been measured as a third of real-time speed.
In late 2012, the Galaxy Note 2 brought back slow motion, with D1 (720 × 480) at 120 fps. In early 2013, the Galaxy S4 and HTC One M7 recorded at that frame rate with 800 × 450, followed by the Note 3 and iPhone 5s with 720p (1280 × 720) in late 2013, the latter of which retains audio and the original sensor frame rate, as with all later iPhones. In early 2014, the Sony Xperia Z2 and HTC One M8 adopted this resolution as well. In late 2014, the iPhone 6 doubled the frame rate to 240 fps, and in late 2015, the iPhone 6s added support for 1080p (1920 × 1080) at 120 frames per second. In early 2015, the Galaxy S6 became the first Samsung mobile phone to retain the sensor framerate and audio, and in early 2016, the Galaxy S7 became the first Samsung mobile phone with 240 fps recording, also at 720p.
In early 2015, the MT6795 chipset by MediaTek promised 1080p@480 fps video recording. The project's status remains indefinite.
Since early 2017, starting with the Sony Xperia XZs and XZ Premium, smartphones have been released with a slow motion mode that unsustainably records at framerates several times as high, by temporarily storing frames on the image sensor's internal burst memory. Such a recording lasts at most a few real-time seconds.
In late 2017, the iPhone 8 brought 1080p at 240 fps, as well as 2160p at 60 fps, followed by the Galaxy S9 in early 2018. In mid-2018, the OnePlus 6 brought 720p at 480 fps, sustainable for one minute.
In early 2021, the OnePlus 9 Pro became the first phone with 2160p at 120 fps.
HDR video
The first smartphones to record HDR video were the early 2013 Sony Xperia Z and mid-2013 Xperia Z Ultra, followed by the early 2014 Galaxy S5, all at 1080p.
Audio recording
Mobile phones with multiple microphones usually allow video recording with stereo audio for spaciality, with Samsung, Sony, and HTC initially implementing it in 2012 on their Samsung Galaxy S3, Sony Xperia S, and HTC One X. Apple implemented stereo audio starting with the 2018 iPhone Xs family and iPhone XR.
Front cameras
Photo
Emphasis has been put on the front camera since the mid-2010s, with front cameras reaching resolutions as high as those of typical rear cameras, such as on the 2015 LG G4 (8 megapixels), Sony Xperia C5 Ultra (13 megapixels), and 2016 Sony Xperia XA Ultra (16 megapixels, optically stabilized). The 2015 LG V10 brought a dual front camera system where the second camera has a wider angle for group photography. Samsung has implemented a front-camera sweep panorama (panorama selfie) feature since the Galaxy Note 4 to extend the field of view.
Video
In 2012, the Galaxy S3 and iPhone 5 brought 720p HD front video recording (at 30 fps). In early 2013, the Samsung Galaxy S4, HTC One M7 and Sony Xperia Z brought 1080p Full HD at that framerate, and in late 2014, the Galaxy Note 4 introduced 1440p video recording on the front camera. Apple adopted 1080p front camera video with the late 2016 iPhone 7.
In 2019, smartphones started adopting 2160p 4K video recording on the front camera, six years after rear camera 2160p commenced with the Galaxy Note 3.
Display advancements
In the early 2010s, larger smartphones with bigger diagonal screen sizes, dubbed "phablets", began to achieve popularity, with the 2011 Samsung Galaxy Note series gaining notably wide adoption. In 2013, Huawei launched the Huawei Mate series, sporting an HD (1280 x 720) IPS+ LCD display, which was considered to be quite large at the time.
Some companies began to release smartphones in 2013 incorporating flexible displays to create curved form factors, such as the Samsung Galaxy Round and LG G Flex.
By 2014, 1440p displays began to appear on high-end smartphones. In 2015, Sony released the Xperia Z5 Premium, featuring a 4K resolution display, although only images and videos could actually be rendered at that resolution (all other software was shown at 1080p).
New trends for smartphone displays began to emerge in 2017, with both LG and Samsung releasing flagship smartphones (LG G6 and Galaxy S8), utilizing displays with taller aspect ratios than the common 16:9 ratio, and a high screen-to-body ratio, also known as a "bezel-less design". These designs allow the display to have a larger diagonal measurement, but with a slimmer width than 16:9 displays with an equivalent screen size.
Another trend popularized in 2017 was displays containing tab-like cut-outs at the top-centre—colloquially known as a "notch"—to contain the front-facing camera, and sometimes other sensors typically located along the top bezel of a device. These designs allow for "edge-to-edge" displays that take up nearly the entire height of the device, with little to no bezel along the top, and sometimes a minimal bottom bezel as well. This design characteristic appeared almost simultaneously on the Sharp Aquos S2 and the Essential Phone, which featured small circular tabs for their cameras, followed just a month later by the iPhone X, which used a wider tab to contain a camera and facial scanning system known as Face ID. The 2015 LG V10 had a precursor to the concept, with a portion of the screen wrapped around the camera area in the top-left corner, and the resulting area marketed as a "second" display that could be used for various supplemental features.
Other variations of the practice later emerged, such as a "hole-punch" camera (such as those of the Honor View 20, and Samsung's Galaxy A8s and Galaxy S10)—eschewing the tabbed "notch" for a circular or rounded-rectangular cut-out within the screen instead, while Oppo released the first "all-screen" phones with no notches at all, including one with a mechanical front camera that pops up from the top of the device (Find X), and a 2019 prototype for a front-facing camera that can be embedded and hidden below the display, using a special partially-translucent screen structure that allows light to reach the image sensor below the panel. The first implementation was the ZTE Axon 20 5G, with a 32 MP sensor manufactured by Visionox.
Displays supporting refresh rates higher than 60 Hz (such as 90 Hz or 120 Hz) also began to appear on smartphones in 2017; initially confined to "gaming" smartphones such as the Razer Phone (2017) and Asus ROG Phone (2018), they later became more common on flagship phones such as the Pixel 4 (2019) and Samsung Galaxy S21 series (2021). Higher refresh rates allow for smoother motion and lower input latency, but often at the cost of battery life. As such, the device may offer a means to disable high refresh rates, or be configured to automatically reduce the refresh rate when there is low on-screen motion.
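A back-of-envelope sketch of why higher refresh rates feel smoother: the interval between frames (and hence the display's contribution to best-case input latency) shrinks proportionally.

for hz in (60, 90, 120):
    print(f"{hz:>3} Hz -> {1000 / hz:.1f} ms per frame")
# 60 Hz -> 16.7 ms, 90 Hz -> 11.1 ms, 120 Hz -> 8.3 ms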
Multi-tasking
Early implementations of multiple simultaneous tasks on a smartphone display are the picture-in-picture video playback mode ("pop-up play") and the "live video list" with playing video thumbnails on the 2012 Samsung Galaxy S3, the former of which was later delivered to the 2011 Samsung Galaxy Note through a software update. Later that year, a split-screen mode was implemented on the Galaxy Note 2 and later retrofitted to the Galaxy S3 through the "premium suite upgrade".
The earliest implementation of desktop and laptop-like windowing was on the 2013 Samsung Galaxy Note 3.
Foldable smartphones
Smartphones utilizing flexible displays were theorized as possible once manufacturing costs and production processes were feasible. In November 2018, the startup company Royole unveiled the first commercially available foldable smartphone, the Royole FlexPai. Also that month, Samsung presented a prototype phone featuring an "Infinity Flex Display" at its developers conference, with a smaller, outer display on its "cover", and a larger, tablet-sized display when opened. Samsung stated that it also had to develop a new polymer material to coat the display as opposed to glass. Samsung officially announced the Galaxy Fold, based on the previously demonstrated prototype, in February 2019 for an originally-scheduled release in late-April. Due to various durability issues with the display and hinge systems encountered by early reviewers, the release of the Galaxy Fold was delayed to September to allow for design changes.
In November 2019, Motorola unveiled a variation of the concept with its re-imagining of the Razr, using a horizontally-folding display to create a clamshell form factor inspired by its previous feature phone range of the same name. Samsung would unveil a similar device known as the Galaxy Z Flip the following February.
Other developments in the 2010s
The first smartphone with a fingerprint reader was the Motorola Atrix 4G in 2011. In September 2013, the iPhone 5S was unveiled as the first smartphone on a major U.S. carrier since the Atrix to feature this technology, and once again the iPhone popularized the concept. One of the barriers to fingerprint reading amongst consumers was security concerns; however, Apple was able to address these concerns by encrypting the fingerprint data onto the A7 processor inside the phone, as well as ensuring this information could not be accessed by third-party applications and is not stored in iCloud or on Apple servers.
In 2012, Samsung introduced the Galaxy S3 (GT-i9300) with retrofittable wireless charging and pop-up video playback, as well as a quad-core 4G LTE variant (GT-i9305).
In 2013, Fairphone launched its first "socially ethical" smartphone at the London Design Festival to address concerns regarding the sourcing of materials in the manufacturing followed by Shiftphone in 2015. In late 2013, QSAlpha commenced production of a smartphone designed entirely around security, encryption and identity protection.
In October 2013, Motorola Mobility announced Project Ara, a concept for a modular smartphone platform that would allow users to customize and upgrade their phones with add-on modules that attached magnetically to a frame. Ara was retained by Google following its sale of Motorola Mobility to Lenovo, but was shelved in 2016. That year, LG and Motorola both unveiled smartphones featuring a limited form of modularity for accessories; the LG G5 allowed accessories to be installed via the removal of its battery compartment, while the Moto Z utilizes accessories attached magnetically to the rear of the device.
Microsoft, expanding upon the concept of Motorola's short-lived "Webtop", unveiled functionality for its Windows 10 operating system for phones that allows supported devices to be docked for use with a PC-styled desktop environment.
Samsung and LG used to be the "last standing" manufacturers to offer flagship devices with user-replaceable batteries. But in 2015, Samsung succumbed to the minimalism trend set by Apple, introducing the Galaxy S6 without a user-replaceable battery. In addition, Samsung was criticised for pruning long-standing features such as MHL, MicroUSB 3.0, water resistance and MicroSD card support, of which the latter two returned in 2016 with the Galaxy S7 and S7 Edge.
The global median for smartphone ownership was 43%. Statista forecast that 2.87 billion people would own smartphones in 2020.
Within the same decade, the rapid deployment of LTE cellular networks and the general availability of smartphones increased the popularity of streaming television services and the corresponding mobile TV apps.
Major technologies that began to trend in 2016 included a focus on virtual reality and augmented reality experiences catered towards smartphones, the newly introduced USB-C connector, and improving LTE technologies.
In 2016, adjustable screen resolution known from desktop operating systems was introduced to smartphones for power saving, whereas variable screen refresh rates were popularized in 2020.
In 2018, the first smartphones featuring fingerprint readers embedded within OLED displays were announced, followed in 2019 by an implementation using an ultrasonic sensor on the Samsung Galaxy S10.
In 2019, the majority of smartphones released have more than one camera, are waterproof with IP67 and IP68 ratings, and unlock using facial recognition or fingerprint scanners.
Designs first implemented by Apple have been replicated by other vendors several times. These include a sealed body that does not allow replacing the battery, the lack of a physical audio connector (since the iPhone 7 from 2016), a screen with a cut-out area at the top for the earphone and front-facing camera and sensors (colloquially known as a "notch"; since the iPhone X from 2017), the exclusion of a charging wall adapter from the scope of delivery (since the iPhone 12 from 2020), and a camera user interface with a circular and usually solid-colour shutter button and a camera mode selector using perpendicular text and separate camera modes for photo and video (since iOS 7 from 2013).
Other developments in the 2020s
In 2019, the first smartphones featuring high-speed 5G network capability were announced, with support becoming widespread in the years that followed.
Since 2020, smartphones have decreasingly been shipped with rudimentary accessories like a power adapter and headphones, which had historically almost invariably been within the scope of delivery. This trend was initiated with Apple's iPhone 12, followed by Samsung and Xiaomi on the Galaxy S21 and Mi 11 respectively, months after both had mocked the same move in advertisements. The reason cited is a reduced environmental footprint, though reaching the raised charging rates supported by newer models demands a new charger, shipped in separate packaging with its own environmental footprint.
With the development of the PinePhone and Librem 5 in the 2020s, there are intensified efforts to make open source GNU/Linux for smartphones a major alternative to iOS and Android. Moreover, associated software enabled convergence (beyond convergent and hybrid apps) by allowing the smartphones to be used like a desktop computer when connected to a keyboard, mouse and monitor.
In the early 2020s, manufacturers began to integrate satellite connectivity into smartphones for use in remote areas where local terrestrial communication infrastructures, such as landline and cellular networks, are not available. Due to antenna limitations in conventional phones, satellite connectivity was in the early stages of implementation limited to satellite messaging and satellite emergency services.
Hardware
A typical smartphone contains a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, which in turn contain billions of tiny MOS field-effect transistors (MOSFETs). These typically include the following:
Application processor (CMOS system-on-a-chip)
Flash memory (floating-gate MOS memory)
Cellular modem (baseband RF CMOS)
RF transceiver (RF CMOS)
Phone camera image sensor (CMOS image sensor)
Power management integrated circuit (power MOSFETs)
Display driver (LCD or LED driver)
Wireless communication chips (Wi-Fi, Bluetooth, GPS receiver)
Sound chip (audio codec and power amplifier)
Gyroscope
Capacitive touchscreen controller (ASIC and DSP)
RF power amplifier (LDMOS)
Some are also equipped with an FM radio receiver, a hardware notification LED, and an infrared transmitter for use as a remote control. A few models have additional sensors such as a thermometer for measuring ambient temperature, a hygrometer for humidity, and a sensor for ultraviolet ray measurement.
A few smartphones designed around specific purposes are equipped with uncommon hardware such as a projector (Samsung Beam i8520 and Samsung Galaxy Beam i8530), optical zoom lenses (Samsung Galaxy S4 Zoom and Samsung Galaxy K Zoom), thermal camera, and even PMR446 (walkie-talkie radio) transceiver.
Central processing unit
Smartphones have central processing units (CPUs), similar to those in computers, but optimised to operate in low power environments. In smartphones, the CPU is typically integrated in a CMOS (complementary metal–oxide–semiconductor) system-on-a-chip (SoC) application processor.
The performance of a mobile CPU depends not only on the clock rate (generally given in multiples of hertz) but also on the memory hierarchy. Because of these challenges, the performance of mobile phone CPUs is often more appropriately given by scores derived from various standardized tests measuring the real effective performance in commonly used applications.
Buttons
Smartphones are typically equipped with a power button and volume buttons; on some models, the two volume buttons are unified into a single rocker. Some are equipped with a dedicated camera shutter button. Units for outdoor use may be equipped with an "SOS" emergency call button and a "PTT" (push-to-talk) button. The presence of physical front-side buttons such as the home and navigation buttons has decreased throughout the 2010s, as they have increasingly been replaced by capacitive touch sensors and simulated (on-screen) buttons.
As with classic mobile phones, early smartphones such as the Samsung Omnia II were equipped with buttons for accepting and declining phone calls. Due to the advancements of functionality besides phone calls, these have increasingly been replaced by navigation buttons such as "menu" (also known as "options"), "back", and "tasks". Some early 2010s smartphones such as the HTC Desire were additionally equipped with a "Search" button (🔍) for quick access to a web search engine or apps' internal search feature.
Since 2013, smartphone home buttons have integrated fingerprint scanners, starting with the iPhone 5s and the Samsung Galaxy S5.
Functions may be assigned to button combinations. For example, screenshots can usually be taken with the home and power buttons: a short press on iOS and a one-second hold on Android, the two most popular mobile operating systems. On smartphones with no physical home button, the volume-down button is usually pressed together with the power button instead. Some smartphones offer screenshot (and possibly screencast) shortcuts in the navigation button bar or the power button menu.
Display
One of the main characteristics of smartphones is the screen. Depending on the device's design, the screen fills most or nearly all of the space on the device's front surface. Many smartphone displays have an aspect ratio of 16:9, but taller aspect ratios became more common in 2017, along with efforts to eliminate bezels by extending the display surface as close to the edges as possible.
Screen sizes
Screen sizes are measured in diagonal inches. Phones with screens larger than 5.2 inches are often called "phablets". Smartphones with screens over 4.5 inches are commonly difficult to use with a single hand, since most thumbs cannot reach the entire screen surface; such phones may need to be shifted around in the hand, held in one hand and manipulated by the other, or used in place with both hands. Due to design advances, some modern smartphones with large screens and "edge-to-edge" designs have compact builds that improve their ergonomics, while the shift to taller aspect ratios has resulted in phones with larger screen sizes that maintain the ergonomics associated with smaller 16:9 displays.
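The ergonomic effect of taller aspect ratios follows from simple rectangle geometry: for a given diagonal, a taller ratio narrows the phone. A short illustrative calculation (the diagonal sizes below are arbitrary example values):

```python
import math

def screen_dimensions(diagonal_in: float, ratio_long: float, ratio_short: float):
    """Long and short side (inches) of a rectangular screen, derived
    from its diagonal and aspect ratio via the Pythagorean theorem."""
    unit = diagonal_in / math.hypot(ratio_long, ratio_short)
    return ratio_long * unit, ratio_short * unit

long_169, short_169 = screen_dimensions(5.2, 16, 9)    # classic 16:9 phone
long_185, short_185 = screen_dimensions(5.8, 18.5, 9)  # taller 18.5:9 phone
# The taller phone gains diagonal and area while its short side (the
# width in portrait use) stays almost unchanged, preserving one-handed reach.
print(f'16:9, 5.2 in:   {long_169:.2f} x {short_169:.2f} in')
print(f'18.5:9, 5.8 in: {long_185:.2f} x {short_185:.2f} in')
```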
Panel types
Liquid-crystal displays (LCDs) and organic light-emitting diode (OLED) displays are the most common. Some displays are integrated with pressure-sensitive digitizers, such as those developed by Wacom and Samsung, and Apple's Force Touch system. A few phones, such as the YotaPhone prototype, are equipped with a low-power electronic paper rear display, as used in e-book readers.
Alternative input methods
Some devices are equipped with additional input methods, such as a stylus for higher-precision input and hover detection, or a self-capacitance touchscreen layer for floating-finger detection. The latter has been implemented on a few phones, such as the Samsung Galaxy S4, Note 3, S5, and Alpha, and the Sony Xperia Sola; the Galaxy Note 3 is so far the only smartphone with both.
Hovering can enable preview tooltips, such as on a video player's seek bar, in text messages, and for quick contacts on the dial pad, as well as lock screen animations and the simulation of a hovering mouse cursor on web sites.
Some styluses also support hovering and are equipped with a button for quick access to relevant tools, such as digital post-it notes or the highlighting of text and elements when dragging while pressed, resembling drag selection with a computer mouse. Some series, such as the Samsung Galaxy Note series and the LG G Stylus series, have an integrated tray in which to store the stylus.
A few devices, such as the iPhone 6s through iPhone XS and the Huawei Mate S, are equipped with a pressure-sensitive touch screen. The pressure reading can be used to simulate a gas pedal in video games, open preview windows and shortcut menus, control the typing cursor, and even act as a weight scale, though Apple rejected weight-scale apps from the App Store.
Some early 2010s HTC smartphones such as the HTC Desire (Bravo) and HTC Legend are equipped with an optical track pad for scrolling and selection.
Notification light
Many smartphones, with the exception of Apple iPhones, are equipped with low-power light-emitting diodes beside the screen that can notify the user about incoming messages, missed calls, and low battery levels, and that facilitate locating the phone in darkness, all with marginal power consumption.
To distinguish between the sources of notifications, the colour combination and blinking pattern can vary. Usually three diodes in red, green, and blue (RGB) are able to create a multitude of colour combinations.
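As an illustration of such colour and pattern coding, a hypothetical mapping of notification sources to RGB values and blink intervals might look like the following sketch; actual LED interfaces vary by vendor and are not standardized:

```python
# Hypothetical mapping of notification sources to RGB LED patterns;
# real LED interfaces differ per vendor and are shown here only to
# illustrate how three diodes encode many distinguishable signals.
LED_PATTERNS = {
    "missed_call": {"rgb": (255, 0, 0),   "blink_ms": 1000},  # red, slow blink
    "new_message": {"rgb": (0, 0, 255),   "blink_ms": 500},   # blue, fast blink
    "low_battery": {"rgb": (255, 128, 0), "blink_ms": 2000},  # orange, very slow
    "charging":    {"rgb": (0, 255, 0),   "blink_ms": 0},     # green, steady
}

def pattern_for(event: str) -> dict:
    """Return the LED pattern for an event, defaulting to steady white."""
    return LED_PATTERNS.get(event, {"rgb": (255, 255, 255), "blink_ms": 0})
```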
Sensors
Smartphones are equipped with a multitude of sensors to enable system features and third-party applications.
Common sensors
Accelerometers and gyroscopes enable automatic control of screen rotation. Uses by third-party software include bubble level simulation. An ambient light sensor allows for automatic screen brightness and contrast adjustment, and an RGB sensor enables the adaptation of screen colour.
Many mobile phones are also equipped with a barometer to measure air pressure, such as Samsung phones since the 2012 Galaxy S3 and Apple phones since the 2014 iPhone 6. The barometer allows estimating altitude and detecting changes in it.
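Altitude estimation from a barometer reading is conventionally done with the international barometric formula; a minimal sketch, assuming a standard sea-level reference pressure of 1013.25 hPa:

```python
def pressure_altitude_m(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
    """Estimate altitude from a barometer reading using the
    international barometric formula."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(round(pressure_altitude_m(1013.25)))  # 0 m at the reference pressure
print(round(pressure_altitude_m(954.6)))    # roughly 500 m
```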
A magnetometer can act as a digital compass by measuring Earth's magnetic field.
Rare sensors
Since the 2014 Galaxy S5 and Galaxy Note 4, Samsung has equipped its flagship smartphones with a heart rate sensor, which assists in fitness-related uses and can act as a shutter key for the front-facing camera.
So far, only the 2013 Samsung Galaxy S4 and Note 3 have been equipped with an ambient temperature sensor and a humidity sensor, and only the Note 4 with an ultraviolet radiation sensor, which could warn the user about excessive exposure.
A rear infrared laser beam for distance measurement can enable time-of-flight camera functionality with accelerated autofocus, as implemented on select LG mobile phones starting with LG G3 and LG V10.
Because such sensors still occur rarely in smartphones, little software has yet been developed to make use of them.
Storage
While eMMC (embedded MultiMediaCard) flash storage was long the most common in mobile phones, its successor, UFS (Universal Flash Storage), with higher transfer rates, emerged throughout the 2010s in upper-class devices.
Capacity
While the internal storage capacity of mobile phones was near-stagnant during the first half of the 2010s, it increased more steeply during the second half; Samsung, for example, increased the internal storage options of its flagship-class units from 32 GB to 512 GB in only two years, between 2016 and 2018.
Memory cards
The data storage of some mobile phones can be expanded using microSD memory cards, whose capacity multiplied throughout the 2010s. Benefits over USB On-The-Go storage and cloud storage include offline availability and privacy; not reserving or protruding from the charging port; no connection instability or latency; no dependence on voluminous data plans; and preservation of the limited rewrite cycles of the device's permanent internal storage. Large amounts of data can be moved immediately between devices by swapping memory cards, large-scale data backups can be created offline, and data can be read externally should the smartphone become inoperable.
In case of technical defects that make the device unusable or unbootable as a result of liquid damage, fall damage, screen damage, bending damage, malware, or bogus system updates, data stored on the memory card can likely be rescued externally, while data on the inaccessible internal storage would be lost. A memory card can usually be re-used immediately in a different memory-card-enabled device, with no need for prior file transfers.
Some dual-SIM mobile phones are equipped with a hybrid slot, where one of the two slots can be occupied by either a SIM card or a memory card. Some models, typically of higher end, are equipped with three slots including one dedicated memory card slot, for simultaneous dual-SIM and memory card usage.
Physical location
The location of SIM and memory card slots varies among devices; they may be accessibly located behind the back cover, or else behind the battery, the latter of which prevents hot swapping.
Mobile phones with non-removable rear cover typically house SIM and memory cards in a small tray on the handset's frame, ejected by inserting a needle tool into a pinhole.
Some earlier mid-range phones, such as the 2011 Samsung Galaxy Fit and Ace, have a sideways memory card slot on the frame covered by a cap that can be opened without a tool.
File transfer
Originally, mass storage access to computers was commonly enabled through USB. Over time, mass storage access was removed, leaving the Media Transfer Protocol (MTP) as the protocol for USB file transfer, due to its non-exclusive access: the computer can access the storage without locking it away from the mobile phone's software for the duration of the connection, and no common file system support is necessary, as communication is done through an abstraction layer.
However, unlike mass storage, the Media Transfer Protocol lacks parallelism: only a single transfer can run at a time, and other transfer requests must wait for it to finish. This, for example, prevents browsing photos or playing back videos on the device during an active file transfer. Some programs and devices lack MTP support, and MTP does not support direct or random access to files: any file is downloaded from the device in full before it is opened.
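The serialization constraint can be pictured as a single worker draining a queue of transfer requests; the following is a conceptual sketch of that behaviour, not the actual MTP wire protocol:

```python
import queue
import threading

# One worker drains a queue of transfer requests, so each transfer
# must wait for the previous one to finish, just as MTP serializes
# transfers. Conceptual sketch only, not the actual MTP wire protocol.
transfers = queue.Queue()

def mtp_worker():
    while True:
        name, size_mb = transfers.get()
        print(f"transferring {name} ({size_mb} MB)...")  # blocks the session
        transfers.task_done()

threading.Thread(target=mtp_worker, daemon=True).start()
transfers.put(("video.mp4", 1200))  # a large transfer started first...
transfers.put(("photo.jpg", 4))     # ...makes even a tiny request wait
transfers.join()
```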
Sound
Some audio-quality-enhancing features, such as Voice over LTE and HD Voice, have appeared and are often available on newer smartphones. Sound quality can remain a problem due to the design of the phone, the quality of the cellular network, and compression algorithms used in long-distance calls. Audio quality can be improved using a VoIP application over Wi-Fi. Smartphones have small speakers that let the user talk on speakerphone without holding the phone to the ear, and that can be used to listen to music or speech or to watch videos with an audio component; however, to conserve space, integrated speakers may be small and of restricted sound quality.
Some mobile phones, such as the HTC One M8 and the Sony Xperia Z2, are equipped with stereophonic speakers to create spatial sound when the phone is held horizontally.
Audio connector
The 3.5 mm headphone receptacle (colloquially, the "headphone jack") allows the immediate use of passive headphones, as well as connection to other external auxiliary audio appliances. Among devices equipped with the connector, it is more commonly located at the bottom (charging-port side) than at the top of the device.
The connector began to decline among newly released phones from all major vendors in 2016, starting with its omission from the Apple iPhone 7. An adapter occupying the charging port can retrofit the connection.
Battery-powered wireless Bluetooth headphones are an alternative. They tend to be costlier, however, because they need internal hardware such as a Bluetooth transceiver and a battery with a charging controller, and a Bluetooth connection must be established before each use.
Battery
Smartphones typically feature lithium-ion or lithium-polymer batteries due to their high energy densities.
Batteries chemically wear down through repeated charging and discharging in ordinary usage, losing both energy capacity and output power, which leads to loss of processing speed and eventually to system outages. Battery capacity may fall to 80% after a few hundred recharges, and the decline accelerates with time.
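Capacity fade can be illustrated with a toy model in which each full charge/discharge cycle removes a small fraction of the remaining capacity; the fade rate below is an assumed illustrative value, not a measured figure for any particular chemistry:

```python
def remaining_capacity(cycles: int, fade_per_cycle: float = 0.0005) -> float:
    """Toy capacity-fade model: each full charge/discharge cycle
    removes a small fraction of the remaining capacity. The fade
    rate is an assumed illustrative value; real degradation depends
    on chemistry, temperature, and depth of discharge."""
    return (1.0 - fade_per_cycle) ** cycles

print(f"{remaining_capacity(450):.0%}")  # about 80% after ~450 cycles
```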
Some mobile phones are designed so the end user can replace a worn-out battery, usually by opening the back cover. While such a design was initially used in most mobile phones, including most non-Apple touchscreen phones, it was largely supplanted throughout the 2010s by permanently built-in, non-replaceable batteries, a design practice criticized as planned obsolescence.
Charging
Because of limits on the electrical current that the copper wires of existing USB cables can handle, charging protocols that use elevated voltages, such as Qualcomm Quick Charge and MediaTek Pump Express, were developed to increase power throughput for faster charging, maximizing usage time without restricting ergonomics and minimizing the time a device must stay attached to a power source.
The smartphone's integrated charge controller (IC) requests the elevated voltage from a supported charger. "VOOC" by Oppo, also marketed as "Dash Charge", took the opposite approach and increased the current, eliminating some of the heat produced by internally regulating the incoming voltage down to the battery's charging terminal voltage; however, it is incompatible with ordinary USB cables, as it requires the thicker copper wires of high-current cables. Later, USB Power Delivery (USB-PD) was developed to standardize the negotiation of charging parameters up to 100 watts, but it is only supported on cables with USB-C at both ends, owing to the connector's dedicated PD channels.
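The voltage-versus-current trade-off follows from P = V·I and the resistive loss I²R in the cable: delivering the same power at a lower voltage requires more current and therefore dissipates more heat in the wires. A small illustration, with an assumed cable resistance:

```python
def cable_heat_w(power_w: float, voltage_v: float, cable_ohms: float = 0.2) -> float:
    """Resistive loss in the cable for a given charging power and
    voltage: I = P / V, loss = I**2 * R. The cable resistance is an
    assumed illustrative value."""
    current_a = power_w / voltage_v
    return current_a ** 2 * cable_ohms

# The same 20 W delivered two ways:
print(f"{cable_heat_w(20, 9):.2f} W lost")  # elevated voltage: 9 V, ~2.2 A
print(f"{cable_heat_w(20, 5):.2f} W lost")  # elevated current: 5 V, 4 A --
                                            # more I^2*R loss, hence thicker wires
```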
While charging rates have increased (15 watts in 2014, 20 watts in 2016, and 45 watts in 2018), power throughput may be throttled down significantly while the device is in use.
Wireless charging has been widely adopted, allowing intermittent recharging without wearing down the charging port through frequent reconnection; Qi is the most common standard, followed by Powermat. Because of the lower efficiency of wireless power transmission, charging rates are below those of wired charging, and more heat is produced at similar charging rates.
By the end of 2017, smartphone battery life has become generally adequate; however, earlier smartphone battery life was poor due to the weak batteries that could not handle the significant power requirements of the smartphones' computer systems and color screens.
Smartphone users purchase additional chargers for use outside the home, at work, and in cars, and buy portable external "battery packs", which include generic models connected to the smartphone with a cable and custom-made models that "piggyback" onto the smartphone's case. In 2016, Samsung had to recall millions of Galaxy Note 7 smartphones due to an explosive battery issue. For consumer convenience, wireless charging stations have been introduced in some hotels, bars, and other public spaces.
Power management
A technique to minimize power consumption is panel self-refresh: the image to be shown is not sent continuously from the processor to the display's integrated controller (IC), but only when the on-screen information changes. The display's controller memorizes the last screen contents and refreshes the panel by itself. Introduced around 2014, this technology has reduced power consumption by a few hundred milliwatts.
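Conceptually, panel self-refresh amounts to sending a frame over the link only when it has changed; the sketch below illustrates that idea and is not any vendor's actual display-driver code:

```python
# Conceptual sketch of panel self-refresh: the processor sends a frame
# only when the content changed; otherwise the display's controller
# re-scans its own local copy. Not any vendor's actual driver code.
class DisplayLink:
    def __init__(self):
        self.panel_buffer = None  # frame memorized by the display IC

    def vsync(self, new_frame, changed: bool):
        if changed:
            self.panel_buffer = new_frame  # costly SoC-to-display transfer
        # else: the display IC refreshes from panel_buffer on its own,
        # leaving the link idle and saving a few hundred milliwatts
        return self.panel_buffer
```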
Cameras
Cameras have become standard features of smartphones. Phone cameras are now a highly competitive area of differentiation between models, with advertising campaigns commonly focused on the quality or capabilities of a device's main cameras.
Images are usually saved in the JPEG file format; some high-end phones since the mid-2010s also have RAW imaging capability.
Space constraints
Typically smartphones have at least one main rear-facing camera and a lower-resolution front-facing camera for "selfies" and video chat. Owing to the limited depth available in smartphones for image sensors and optics, rear-facing cameras are often housed in a "bump" thicker than the rest of the phone. Because increasingly thin phones offer abundant horizontal space while lacking the depth that dedicated cameras use for better lenses, there is additionally a trend for manufacturers to include multiple rear cameras, each optimized for a different purpose (telephoto, wide angle, etc.).
Viewed from the back, rear cameras are commonly located at the top centre or the top left corner. A cornered location has the benefit of not requiring other hardware to be packed around the camera module, while improving ergonomics, as the lens is less likely to be covered when the phone is held horizontally.
Modern advanced smartphones have cameras with optical image stabilisation (OIS), larger sensors, bright lenses, and even optical zoom plus RAW images. HDR, "Bokeh mode" with multi lenses and multi-shot night modes are now also familiar. Many new smartphone camera features are being enabled via computational photography image processing and multiple specialized lenses rather than larger sensors and lenses, due to the constrained space available inside phones that are being made as slim as possible.
Dedicated camera button
Some mobile phones such as the Samsung i8000 Omnia 2, some Nokia Lumias and some Sony Xperias are equipped with a physical camera shutter button.
Those with two pressure levels resemble the point-and-shoot operation of dedicated compact cameras. The camera button can also serve as a shortcut to launch the camera software quickly and ergonomically, as it is easier to reach inside a pocket than the power button.
Back cover materials
Back covers of smartphones are typically made of polycarbonate, aluminium, or glass. Polycarbonate back covers may be glossy or matte, and possibly textured, like dotted on the Galaxy S5 or leathered on the Galaxy Note 3 and Note 4.
While polycarbonate back covers may be perceived as less "premium" by fashion- and trend-oriented users, they have utilitarian strengths and technical benefits: durability and shock absorption; greater elasticity against permanent bending than metal; inability to shatter like glass, which makes a removable design easier; better manufacturing cost efficiency; and, unlike metal, no blockage of radio signals or wireless power.
Accessories
A wide range of accessories are sold for smartphones, including cases, memory cards, screen protectors, chargers, wireless power stations, USB On-The-Go adapters (for connecting USB drives or, in some cases, an HDMI cable to an external monitor), MHL adapters, add-on batteries, power banks, headphones, combined headphone-microphones (which, for example, allow a person to privately conduct calls on the device without holding it to the ear), and Bluetooth-enabled powered speakers that enable users to listen to media from their smartphones wirelessly.
Cases range from relatively inexpensive rubber or soft plastic cases which provide moderate protection from bumps and good protection from scratches to more expensive, heavy-duty cases that combine a rubber padding with a hard outer shell. Some cases have a "book"-like form, with a cover that the user opens to use the device; when the cover is closed, it protects the screen. Some "book"-like cases have additional pockets for credit cards, thus enabling people to use them as wallets.
Accessories include products sold by the manufacturer of the smartphone and compatible products made by other manufacturers.
However, some companies, like Apple, stopped including chargers with smartphones in order to "reduce carbon footprint", causing many customers to pay extra for charging adapters.
Software
Mobile operating systems
A mobile operating system (or mobile OS) is an operating system for phones, tablets, smartwatches, or other mobile devices. Globally, Android and iOS are the two most used mobile operating systems by usage share, with Android having been the best-selling OS on all devices since 2013.
Mobile operating systems combine features of a personal computer operating system with other features useful for mobile or handheld use, usually including most of the following, which are considered essential in modern mobile systems: a touchscreen, cellular connectivity, Bluetooth, Wi-Fi (with Wi-Fi Protected Access), Global Positioning System (GPS) navigation, video- and single-frame picture cameras, speech recognition, a voice recorder, a music player, near-field communication, and an infrared blaster. In Q1 2018, over 383 million smartphones were sold, with 85.9 percent running Android, 14.1 percent running iOS, and a negligible number running other OSes. Android alone is more popular than the popular desktop operating system Windows, and in general, smartphone use (even without tablets) exceeds desktop use. Other well-known mobile operating systems include Flyme OS and HarmonyOS.
Mobile devices with mobile communications abilities (e.g., smartphones) contain two mobile operating systems: the main user-facing software platform is supplemented by a second, low-level, proprietary real-time operating system that operates the radio and other hardware. Research has shown that these low-level systems may contain a range of security vulnerabilities permitting malicious base stations to gain high levels of control over the mobile device.
Mobile apps
A mobile app is a computer program designed to run on a mobile device, such as a smartphone. The term "app" is a short-form of the term "software application".
Application stores
The introduction of Apple's App Store for the iPhone and iPod Touch in July 2008 popularized manufacturer-hosted online distribution for third-party applications (software and computer programs) focused on a single platform. There are a huge variety of apps, including video games, music products, and business tools. Up until that point, smartphone application distribution depended on third-party sources providing applications for multiple platforms, such as GetJar, Handango, Handmark, and PocketGear. Following the success of the App Store, other smartphone manufacturers launched application stores, such as Google's Android Market (later renamed the Google Play Store) and RIM's BlackBerry App World, as well as Android-focused app stores like Aptoide, Cafe Bazaar, F-Droid, GetJar, and Opera Mobile Store. In February 2014, 93% of mobile developers were targeting smartphones first for mobile app development.
List of current smartphone brands
Asus
Gionee
Google Pixel
Hisense
Honor
HTC
Huawei
Infinix
iPhone
iQOO
Itel
Lava
Lenovo
LG
Meizu
Motorola
Nokia
Nothing
Nubia
OnePlus
Oppo
POCO
Realme
Redmi
Samsung Galaxy
Sharp
Sony Xperia
TCL
Tecno
Umidigi
Vivo
Xiaomi
ZTE
Sales
Since 1996, smartphone shipments had seen positive growth. In November 2011, 27% of all photographs created were taken with camera-equipped smartphones. In September 2012, a study concluded that 4 out of 5 smartphone owners use the device to shop online. Global smartphone sales surpassed the sales figures for feature phones in early 2013. Worldwide shipments of smartphones topped 1 billion units in 2013, up 38% from 2012's 725 million, and comprised a 55% share of the mobile phone market, up from 42% in 2012. In Q1 2016, shipments dropped by 3 percent year on year for the first time, a situation caused by the maturing China market. A report by NPD showed that fewer than 10% of US citizens have spent $1,000 or more on a smartphone, as such phones are too expensive for most people without introducing particularly innovative features, and as Huawei, Oppo, and Xiaomi introduced products with similar feature sets for lower prices. In 2019, smartphone sales declined by 3.2%, the largest drop in smartphone history, while China and India were credited with driving most smartphone sales worldwide. It has been predicted that widespread adoption of 5G will help drive new smartphone sales.
By manufacturer
In 2011, Samsung had the highest shipment market share worldwide, followed by Apple. In 2013, Samsung had 31.3% market share, a slight increase from 30.3% in 2012, while Apple was at 15.3%, a decrease from 18.7% in 2012. Huawei, LG, and Lenovo were at about 5% each, significantly better than their 2012 figures, while others had about 40%, the same as the previous year's figure. Only Apple lost market share, although its shipment volume still increased by 12.9%; the rest had significant increases in shipment volumes of 36 to 92%.
In Q1 2014, Samsung had a 31% share and Apple had 16%. In Q4 2014, Apple had a 20.4% share and Samsung had 19.9%. In Q2 2016, Samsung had a 22.3% share and Apple had 12.9%. In Q1 2017, IDC reported that Samsung was first placed, with 80 million units, followed by Apple with 50.8 million, Huawei with 34.6 million, Oppo with 25.5 million and Vivo with 22.7 million.
Samsung's mobile business is half the size of Apple's by revenue; Apple's business increased very rapidly in the years 2013 to 2017. Realme, a brand owned by Oppo, has been the fastest-growing phone brand worldwide since Q2 2019. In China, Huawei and Honor, a brand owned by Huawei, together hold 46% of market share and posted 66% annual growth, amid growing Chinese nationalism. In 2019, Samsung had a 74% market share of 5G smartphones in South Korea.
In the first quarter of 2024, global smartphone shipments rose by 7.8% to 289.4 million units. Samsung, with a 20.8% market share, overtook Apple to become the leading smartphone manufacturer. Apple's smartphone shipments dropped 10%. Xiaomi secured the third spot with a 14.1% market share.
By operating system
Use
Contemporary use and convergence
The rise in popularity of touchscreen smartphones and mobile apps distributed via app stores, along with rapidly advancing network, mobile processor, and storage technologies, led to a convergence where separate mobile phones, organizers, and portable media players were replaced by a smartphone as the single device most people carried. Advances in digital camera sensors and on-device image processing software more gradually led to smartphones replacing simpler cameras for photographs and video recording. The built-in GPS capabilities and mapping apps on smartphones largely replaced stand-alone satellite navigation devices, and paper maps became less common. Mobile gaming on smartphones greatly grew in popularity, allowing many people to use them in place of handheld game consoles, and some companies tried creating game console/phone hybrids based on phone hardware and software. People frequently have chosen not to get fixed-line telephone service in favor of smartphones. Music streaming apps and services have grown rapidly in popularity, serving the same use as listening to music stations on terrestrial or satellite radio. Streaming video services are easily accessed via smartphone apps and can be used in place of watching television. People have often stopped wearing wristwatches in favor of checking the time on their smartphones, and many use the clock features on their phones in place of alarm clocks. Smartphones can also serve as digital note-taking, text-editing, and memorandum devices, whose computerization facilitates searching of entries.
Additionally, in many less technologically developed regions smartphones are people's first and only means of Internet access due to their portability, with personal computers being relatively uncommon outside of business use. The cameras on smartphones can be used to photograph documents and send them via email or messaging in place of using fax (facsimile) machines. Payment apps and services on smartphones allow people to make less use of wallets, purses, credit and debit cards, and cash. Mobile banking apps can allow people to deposit checks simply by photographing them, eliminating the need to take the physical check to an ATM or teller. Guide book apps can take the place of paper travel and restaurant/business guides, museum brochures, and dedicated audio guide equipment.
Mobile banking and payment
In many countries, mobile phones are used to provide mobile banking services, which may include the ability to transfer cash payments by secure SMS text message. Kenya's M-PESA mobile banking service, for example, allows customers of the mobile phone operator Safaricom to hold cash balances which are recorded on their SIM cards. Cash can be deposited or withdrawn from M-PESA accounts at Safaricom retail outlets located throughout the country and can be transferred electronically from person to person and used to pay bills to companies.
Branchless banking has been successful in South Africa and the Philippines. A pilot project in Bali was launched in 2011 by the International Finance Corporation and an Indonesian bank, Bank Mandiri.
Another application of mobile banking technology is Zidisha, a US-based nonprofit micro-lending platform that allows residents of developing countries to raise small business loans from Web users worldwide. Zidisha uses mobile banking for loan disbursements and repayments, transferring funds from lenders in the United States to borrowers in rural Africa who have mobile phones and can use the Internet.
Mobile payments were first trialled in Finland in 1998 when two Coca-Cola vending machines in Espoo were enabled to work with SMS payments. Eventually, the idea spread and in 1999, the Philippines launched the country's first commercial mobile payments systems with mobile operators Globe and Smart.
Some mobile phones can make mobile payments via direct mobile billing schemes, or through contactless payments if the phone and the point of sale support near-field communication (NFC). Enabling contactless payments through NFC-equipped mobile phones requires the co-operation of manufacturers, network operators, and retail merchants.
Facsimile
Some apps allow for sending and receiving fax (facsimile) over a smartphone, including fax data (composed of raster bi-level graphics) generated directly and digitally from document and image file formats.
Films
Films are increasingly made using smartphones and tablets, leading to the rise of dedicated film festivals for such films, including the SmartFone Flick Fest in Sydney, Australia; Dublin Smartphone Film Festival; the International Mobil Film Festival based in San Diego; the Spanish festival Cinephone – Festival Internacional de Cine con Smartphone; the African Smartphone International Film Festival; Toronto Smartphone Film Festival; New York Mobile Film Festival; and others.
Criticism and issues
Social impacts
Manufacture
Cobalt is needed in order to manufacture smartphones' rechargeable batteries. Workers, including children, suffer injuries, amputations, and death as the result of the hazardous working conditions and mine tunnel collapses in the Democratic Republic of the Congo during artisanal mining of cobalt. In 2019 a lawsuit was filed against Apple and other tech companies for the use of child labor in mining cobalt; in 2024 the court ruled that the companies were not liable. Apple announced it would convert to using recycled cobalt by 2025.
Use
In 2012, a University of Southern California study found that unprotected adolescent sexual activity was more common among owners of smartphones.
A study conducted by the Rensselaer Polytechnic Institute's (RPI) Lighting Research Center (LRC) concluded that smartphones, or any backlit devices, can seriously affect sleep cycles.
Some persons might become psychologically attached to smartphones, resulting in anxiety when separated from the devices.
A "smombie" (a combination of "smartphone" and "zombie") is a walking person using a smartphone and not paying attention as they walk, possibly risking an accident in the process, an increasing social phenomenon. The issue of slow-moving smartphone users led to the temporary creation of a "mobile lane" for walking in Chongqing, China. The issue of distracted smartphone users led the city of Augsburg, Germany, to embed pedestrian traffic lights in the pavement.
While driving
Mobile phone use while driving—including calling, text messaging, playing media, web browsing, gaming, using mapping apps or operating other phone features—is common but controversial, since it is widely considered dangerous due to what is known as distracted driving. Being distracted while operating a motor vehicle has been shown to increase the risk of accidents. In September 2010, the US National Highway Traffic Safety Administration (NHTSA) reported that 995 people were killed by drivers distracted by phones. In March 2011 a US insurance company, State Farm Insurance, announced the results of a study which showed 19% of drivers surveyed accessed the Internet on a smartphone while driving. Many jurisdictions prohibit the use of mobile phones while driving. In Egypt, Israel, Japan, Portugal and Singapore, both handheld and hands-free calling on a mobile phone (which uses a speakerphone) is banned. In other countries, including the UK and France, and in many US states, calling is only banned on handheld phones, while hands-free calling is permitted.
A 2011 study reported that over 90% of college students surveyed text (initiate, reply or read) while driving.
The scientific literature on the danger of driving while sending a text message from a mobile phone, or texting while driving, is limited. A simulation study at the University of Utah found a sixfold increase in distraction-related accidents when texting. As smartphones grew more complex, it became harder for law enforcement officials to distinguish one usage from another in drivers using their devices. This is more apparent in countries that ban both handheld and hands-free usage than in those that ban handheld use only, as officials cannot easily tell which function of the phone is being used simply by looking at the driver. Drivers can thus be stopped for using their device illegally for a call when, in fact, they were using it legally, for example through the phone's incorporated controls for the car stereo, GPS, or satnav.
A 2010 study reviewed the incidence of phone use while cycling and its effects on behavior and safety. In 2013 a national survey in the US reported the number of drivers who reported using their phones to access the Internet while driving had risen to nearly one of four. A study conducted by the University of Vienna examined approaches for reducing inappropriate and problematic use of mobile phones, such as using phones while driving.
Accidents involving a driver distracted by a phone call have begun to be prosecuted as negligence, similar to speeding. In the United Kingdom, from 27 February 2007, motorists caught using a handheld phone while driving have three penalty points added to their licence in addition to a fine of £60; this increase was introduced to try to stem the rise in drivers ignoring the law. Japan prohibits all use of phones while driving, including use of hands-free devices. New Zealand has banned handheld phone use since 1 November 2009. Many states in the United States have banned text messaging on phones while driving. Illinois became the 17th American state to enforce this law; by July 2010, 30 states had banned texting while driving, with Kentucky becoming the most recent addition on July 15.
Public Health Law Research maintains a list of distracted driving laws in the United States. This database provides a comprehensive view of the provisions of laws restricting the use of mobile devices while driving for all 50 states and the District of Columbia between 1992, when the first law was passed, and December 1, 2010. The dataset contains information on 22 dichotomous, continuous, or categorical variables including, for example, activities regulated (e.g., texting versus talking, hands-free versus handheld calls, web browsing, gaming), targeted populations, and exemptions.
Legal
A "patent war" between Samsung and Apple started when the latter claimed that the original Galaxy S Android phone copied the interfaceand possibly the hardwareof Apple's iOS for the iPhone 3GS. There was also smartphone patents licensing and litigation involving Sony Mobile, Google, Apple Inc., Samsung, Microsoft, Nokia, Motorola, HTC, Huawei and ZTE, among others. The conflict is part of the wider "patent wars" between multinational technology and software corporations. To secure and increase market share, companies granted a patent can sue to prevent competitors from using the methods the patent covers. Since the 2010s the number of lawsuits, counter-suits, and trade complaints based on patents and designs in the market for smartphones, and devices based on smartphone OSes such as Android and iOS, has increased significantly. Initial suits, countersuits, rulings, license agreements, and other major events began in 2009 as the smartphone market stated to grow more rapidly by 2012.
Medical
With the rise in the number of mobile medical apps on the market, government regulatory agencies raised concerns about the safety of such applications. These concerns were transformed into regulation initiatives worldwide, aimed at safeguarding users from untrusted medical advice. According to medical experts' findings in recent years, excessive smartphone use in society may lead to headaches, sleep disorders, and insufficient sleep, while severe smartphone addiction may lead to physical health problems such as hunched posture, slackened muscles, and unbalanced nutrition.
Impacts on cognition and mental health
There is debate about the beneficial and detrimental impacts of smartphones and smartphone use on cognition and mental health.
Security
Smartphone malware is easily distributed through insecure app stores. Often, malware is hidden in pirated versions of legitimate apps, which are then distributed through third-party app stores. Malware risk also comes from what is known as an "update attack", where a legitimate application is later changed to include a malware component, which users then install when they are notified that the app has been updated. Furthermore, one out of three robberies in 2012 in the United States involved the theft of a mobile phone. An online petition has urged smartphone makers to install kill switches in their devices. As of 2014, Apple's "Find My iPhone" and Google's "Android Device Manager" could locate, disable, and wipe the data from phones that had been lost or stolen. With BlackBerry Protect in OS version 10.3.2, devices can be rendered unrecoverable even to BlackBerry's own operating system recovery tools if they are incorrectly authenticated or dissociated from their account.
Leaked documents from 2013 to 2016, codenamed Vault 7, detail the capabilities of the United States Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones (including iOS and Android). In 2021, journalists and researchers reported the discovery of spyware called Pegasus, developed and distributed by a private company, which can infect iOS and Android smartphones, often partly via use of 0-day exploits, without the need for any user interaction or significant clues to the user, and can then be used to exfiltrate data, track user locations, capture film through the camera, and activate the microphone at any time. Analysis of data traffic by popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by this pre-installed software.
Guidelines for mobile device security were issued by NIST and many other organizations. For conducting a private, in-person meeting, at least one site recommends that the user switch the smartphone off and disconnect the battery.
Sleep
Using smartphones late at night can disturb sleep, due to the blue light and brightly lit screen, which affects melatonin levels and sleep cycles. In an effort to alleviate these issues, "Night Mode" functionality to change the color temperature of a screen to a warmer hue based on the time of day to reduce the amount of blue light generated became available through several apps for Android and the f.lux software for jailbroken iPhones. iOS 9.3 integrated a similar, system-level feature known as "Night Shift." Several Android device manufacturers bypassed Google's initial reluctance to make Night Mode a standard feature in Android and included software for it on their hardware under varying names, before Android Oreo added it to the OS for compatible devices.
It has also been theorized that for some users, addiction to use of their phones, especially before they go to bed, can result in "ego depletion." Many people also use their phones as alarm clocks, which can also lead to loss of sleep.
Replacement of dedicated digital cameras
As the 2010s commenced, sales figures of dedicated compact cameras decreased sharply, as mobile phone cameras were increasingly perceived as a sufficient substitute.
Increases in computing power in mobile phones enabled fast image processing and high-resolution filming, with 1080p Full HD being achieved in 2011 and the barrier to 2160p 4K being breached in 2013.
However, due to design and space limitations, smartphones lack several features found even on low-budget compact cameras: a hot-swappable memory card and battery for nearly uninterrupted operation; physical buttons and knobs for focusing, zooming, and capturing; a bolt-thread tripod mount; a capacitor-charged xenon flash that exceeds the brightness of smartphones' LED flashlights; and an ergonomic grip for steadier handheld shooting, which enables longer exposure times. Since dedicated cameras can be more spacious, they can house larger image sensors and feature optical zooming.
Since the late 2010s, smartphone manufacturers have bypassed the lack of optical zoom to a limited extent by incorporating additional rear cameras with fixed magnification levels.
Lifespan
In mobile phones released since the second half of the 2010s, operational life span is commonly limited by built-in batteries that are not designed to be interchangeable. A battery's life expectancy depends on the usage intensity of the powered device: more activity (longer usage) and more energy-demanding tasks wear the battery out earlier.
Lithium-ion and lithium-polymer batteries, which commonly power portable electronics, additionally wear down more from fuller charge and deeper discharge cycles, and from being left unused in a depleted state for an extended time, during which self-discharge may lead to a harmful depth of discharge.
Manufacturers have prevented some smartphones from operating after repairs by associating components' unique serial numbers with the device, so that it refuses to operate or disables some functionality in case of a mismatch, as would occur after a replacement. Locking of the serial number was first documented in 2015 on the iPhone 6, which would become inoperable after a detected replacement of the "home" button. Later, some functionality was restricted on Apple and Samsung smartphones when a battery replacement not authorized by the vendor was detected.
| Technology | Computer hardware | null |
167120 | https://en.wikipedia.org/wiki/Gravel | Gravel | Gravel is a loose aggregation of rock fragments. Gravel occurs naturally on Earth as a result of sedimentary and erosive geological processes; it is also produced in large quantities commercially as crushed stone.
Gravel is classified by particle size range and includes size classes from granule- to boulder-sized fragments. In the Udden-Wentworth scale gravel is categorized into granular gravel (2–4 mm) and pebble gravel (4–64 mm). ISO 14688 grades gravels as fine, medium, and coarse, with fine gravel ranging from 2 to 6.3 mm and coarse gravel from 20 to 63 mm. One cubic metre of gravel typically weighs about 1,800 kilograms (a cubic yard about 3,000 pounds).
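Geologists often express these size classes on the Krumbein phi scale, phi = -log2(d / 1 mm), under which the gravel boundaries fall at whole phi values. A small sketch of the phi conversion and the Udden-Wentworth gravel classes as just described:

```python
import math

def phi(diameter_mm: float) -> float:
    """Krumbein phi scale: phi = -log2(d / 1 mm)."""
    return -math.log2(diameter_mm)

def wentworth_class(diameter_mm: float) -> str:
    """Size class under the Udden-Wentworth scale (gravel range shown)."""
    if diameter_mm >= 256:
        return "boulder"
    if diameter_mm >= 64:
        return "cobble"
    if diameter_mm >= 4:
        return "pebble gravel"
    if diameter_mm >= 2:
        return "granular gravel"
    return "sand or finer"

print(wentworth_class(3.0), round(phi(3.0), 2))  # granular gravel, phi ~ -1.58
```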
Gravel is an important commercial product, with a number of applications. Almost half of all gravel production is used as aggregate for concrete. Much of the rest is used for road construction, either in the road base or as the road surface (with or without asphalt or other binders.) Naturally occurring porous gravel deposits have a high hydraulic conductivity, making them important aquifers.
Definition and properties
Colloquially, the term gravel is often used to describe a mixture of different size pieces of stone mixed with sand and possibly some clay. The American construction industry distinguishes between gravel (a natural material) and crushed stone (produced artificially by mechanical crushing of rock.)
The technical definition of gravel varies by region and by area of application. Many geologists define gravel simply as loose, rounded rock particles over 2 mm in diameter, without specifying an upper size limit. Gravel is sometimes distinguished from rubble, which is loose rock particles in the same size range but angular in shape. The Udden-Wentworth scale, widely used by geologists in the US, defines granular gravel as particles with a size from 2 to 4 mm and pebble gravel as particles with a size from 4 to 64 mm. This corresponds to all particles with sizes between coarse sand and cobbles.
The U.S. Department of Agriculture and the Soil Science Society of America, like the German (Atterberg) scale, define gravel by their own particle-size ranges. The U.S. Army Corps of Engineers defines gravel as particles under 3 in (76.2 mm) in size that are retained by a number 4 mesh, which has a mesh spacing of about 4.76 mm. ISO 14688 for soil engineering grades gravels as fine (2–6.3 mm), medium (6.3–20 mm), and coarse (20–63 mm).
The bulk density of gravel varies with its composition and packing. Natural gravel has a high hydraulic conductivity, sometimes exceeding 1 cm/s.
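The practical significance of high hydraulic conductivity can be seen through Darcy's law, q = K * i, which gives the specific discharge through a porous medium. A brief illustrative comparison (the gradient and the clay conductivity are assumed, typical figures, not measurements from the text):

```python
def darcy_flux_m_s(conductivity_m_s: float, hydraulic_gradient: float) -> float:
    """Darcy's law: specific discharge q = K * i, with K the hydraulic
    conductivity and i the dimensionless hydraulic gradient."""
    return conductivity_m_s * hydraulic_gradient

# At the same gentle gradient of 0.001:
print(darcy_flux_m_s(1e-2, 0.001))  # clean gravel, K ~ 1 cm/s
print(darcy_flux_m_s(1e-9, 0.001))  # clay, K ~ 1e-9 m/s: ~10 million times slower
```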
Origin
Most gravel is derived from the disintegration of bedrock as it weathers. Quartz is the most common mineral found in gravel, as it is hard, chemically inert, and lacks cleavage planes along which the rock easily splits. Most gravel particles consist of multiple mineral grains, since few rocks have mineral grains coarser than a few millimetres in size. Exceptions include quartz veins, pegmatites, deep intrusions, and high-grade metamorphic rock. The rock fragments are rapidly rounded as they are transported by rivers, often within a few tens of kilometers of their source outcrops.
Gravel is deposited as gravel blankets or bars in stream channels; in alluvial fans; in near-shore marine settings, where the gravel is supplied by streams or by erosion along the coast; and in the deltas of swift-flowing streams. The upper Mississippi embayment contains extensive chert gravels thought to have originated a relatively short distance from the periphery of the embayment.
It has been suggested that wind-formed (aeolian) gravel "megaripples" in Argentina have counterparts on the planet Mars.
Production and uses
Gravel is a major basic raw material in construction. Sand is not usually distinguished from gravel in official statistics, but crushed stone is treated as a separate category. In 2020, sand and gravel together made up 23% of all industrial mineral production in the U.S., with a total value of about $12.6 billion. Some 960 million tons of construction sand and gravel were produced. This greatly exceeds production of industrial sand and gravel (68 million tons), which is mostly sand rather than gravel.
It is estimated that almost half of construction sand and gravel is used as aggregate for concrete. Other important uses include in road construction, as road base or in blacktop; as construction fill; and in myriad minor uses.
Gravel is widely and plentifully distributed, mostly as river deposits, river flood plains, and glacial deposits, so that environmental considerations and quality dictate whether alternatives, such as crushed stone, are more economical. Crushed stone is already displacing natural gravel in the eastern United States, and recycled gravel is also becoming increasingly important.
Etymology
The word gravel comes from the Old French gravele or gravelle.
Types
Different varieties of gravel are distinguished by their composition, origin, and use cases. Types of gravel include:
Bank gravel: naturally deposited gravel intermixed with sand or clay, found in and next to rivers and streams. Also known as "bank run" or "river run".
Bench gravel: a bed of gravel located on the side of a valley above the present stream bottom, indicating the former location of the stream bed when it was at a higher level. The term is most commonly used in Alaska and the Yukon Territory.
Crushed stone: rock crushed and graded by screens and then mixed to a blend of stones and fines. It is widely used as a surfacing for roads and driveways, sometimes with tar applied over it. Crushed stone may be made from granite, limestone, dolomite, and other rocks. Also known as "crusher run", DGA (dense grade aggregate), QP (quarry process), and shoulder stone. Crushed stone is distinguished from gravel by the U.S. Geological Survey.
Fine gravel: gravel consisting of particles at the small end of the gravel size range (2–6.3 mm under ISO 14688).
Lag gravel: a surface accumulation of coarse gravel produced by the removal of finer particles.
Pay gravel: also known as "pay dirt"; a nickname for gravel with a high concentration of gold and other precious metals. The metals are recovered through gold panning.
Pea gravel: also known as "pea shingle"; clean gravel similar in size to garden peas, used for concrete surfaces, walkways, driveways, and as a substrate in home aquariums.
Piedmont gravel: a coarse gravel carried down from high places by mountain streams and deposited on relatively flat ground, where the water runs more slowly.
Plateau gravel: a layer of gravel on a plateau or other region above the height at which stream-terrace gravel is usually found.
Shingle: coarse, loose, well-rounded, waterworn sediment, specifically alluvial and beach sediment, largely composed of smooth, spheroidal or flattened pebbles, cobbles, and sometimes small boulders.
Relationship to plant life
In locales where gravelly soil is predominant, plant life is generally more sparse. This is due to the inferior ability of gravels to retain moisture, as well as the corresponding paucity of mineral nutrients, since finer soils that contain such minerals are present in smaller amounts.
In the geologic record
Sediments containing over 30% gravel that become lithified into solid rock are termed conglomerate. Conglomerates are widely distributed in sedimentary rock of all ages, but usually as a minor component, making up less than 1% of all sedimentary rock. Alluvial fans likely contain the largest accumulations of gravel in the geologic record. These include conglomerates of the Triassic basins of eastern North America and the New Red Sandstone of south Devon.
| Physical sciences | Petrology | null |
167166 | https://en.wikipedia.org/wiki/Organ%20transplantation | Organ transplantation | Organ transplantation is a medical procedure in which an organ is removed from one body and placed in the body of a recipient, to replace a damaged or missing organ. The donor and recipient may be at the same location, or organs may be transported from a donor site to another location. Organs and/or tissues that are transplanted within the same person's body are called autografts. Transplants performed between two subjects of the same species are called allografts. Allografts can be from either a living or a cadaveric source.
Organs that have been successfully transplanted include the heart, kidneys, liver, lungs, pancreas, intestine, thymus and uterus. Tissues include bones, tendons (both referred to as musculoskeletal grafts), corneae, skin, heart valves, nerves and veins. Worldwide, the kidneys are the most commonly transplanted organs, followed by the liver and then the heart. Corneae and musculoskeletal grafts are the most commonly transplanted tissues; these outnumber organ transplants by more than tenfold.
Organ donors may be living, brain dead, or dead via circulatory death. Tissue may be recovered from donors who die of circulatory death, as well as of brain death – up to 24 hours past the cessation of heartbeat. Unlike organs, most tissues (with the exception of corneas) can be preserved and stored for up to five years, meaning they can be "banked". Transplantation raises a number of bioethical issues, including the definition of death, when and how consent should be given for an organ to be transplanted, and payment for organs for transplantation. Other ethical issues include transplantation tourism (medical tourism) and more broadly the socio-economic context in which organ procurement or transplantation may occur. A particular problem is organ trafficking. There is also the ethical issue of not holding out false hope to patients.
Transplantation medicine is one of the most challenging and complex areas of modern medicine. Some of the key areas for medical management are the problems of transplant rejection, during which the body has an immune response to the transplanted organ, possibly leading to transplant failure and the need to immediately remove the organ from the recipient. When possible, transplant rejection can be reduced through serotyping to determine the most appropriate donor-recipient match and through the use of immunosuppressant drugs.
Types of transplant
Autograft
Autografts are the transplant of tissue to the same person. Sometimes this is done with surplus tissue, tissue that can regenerate, or tissues more desperately needed elsewhere (examples include skin grafts, vein extraction for CABG, etc.). Sometimes an autograft is done to remove the tissue and then treat it or the person before returning it (examples include stem cell autograft and storing blood in advance of surgery). In a rotationplasty, a distal joint is used to replace a more proximal one; typically a foot or ankle joint is used to replace a knee joint. The person's foot is severed and reversed, the knee removed, and the tibia joined with the femur.
Allograft and allotransplantation
An allograft is a transplant of an organ or tissue between two genetically non-identical members of the same species. Most human tissue and organ transplants are allografts. Due to the genetic difference between the organ and the recipient, the recipient's immune system will identify the organ as foreign and attempt to destroy it, causing transplant rejection. The risk of transplant rejection can be estimated by measuring the panel-reactive antibody level.
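The panel-reactive antibody (PRA) level is simply the percentage of a representative donor panel against which the candidate's serum shows antibody reactivity; higher values indicate a harder-to-match recipient. A minimal sketch of the calculation, with hypothetical screening results:

```python
def panel_reactive_antibody(reactions: list) -> float:
    """Panel-reactive antibody (PRA) as a percentage: the share of a
    representative donor panel against which the candidate's serum
    shows antibody reactivity. A higher PRA means a harder match."""
    return 100.0 * sum(reactions) / len(reactions)

# Hypothetical screening: 15 positive reactions out of a 60-member panel.
print(panel_reactive_antibody([True] * 15 + [False] * 45))  # 25.0
```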
Isograft
An isograft is a subset of allograft in which organs or tissues are transplanted from a donor to a genetically identical recipient (such as an identical twin). Isografts are differentiated from other types of transplants because while they are anatomically identical to allografts, they do not trigger an immune response.
Xenograft and xenotransplantation
A xenograft is a transplant of organs or tissue from one species to another. An example is a porcine heart valve transplant, which is quite common and successful. Another example is an attempted piscine–primate (fish to non-human primate) transplant of pancreatic islets. The latter research study was intended to pave the way for potential human use if successful. However, xenotransplantation is often an extremely dangerous type of transplant because of the increased risk of functional incompatibility, rejection, and disease carried in the tissue. In the opposite direction, attempts are being made to devise a way to transplant human fetal hearts and kidneys into animals for future transplantation into human patients, to address the shortage of donor organs.
Domino transplants
In people with cystic fibrosis (CF), where both lungs need to be replaced, it is a technically easier operation with a higher rate of success to replace both the heart and lungs of the recipient with those of the donor. As the recipient's original heart is usually healthy, it can then be transplanted into a second recipient in need of a heart transplant, thus making the person with CF a living heart donor.
In a 2016 case at Stanford Medical Center, a woman who needed a heart-lung transplant had cystic fibrosis, which had led to one lung expanding and the other shrinking, displacing her heart. The second patient, who in turn received her heart, was a woman with right ventricular dysplasia, which had led to a dangerously abnormal rhythm. The dual operations required three surgical teams, including one to remove the heart and lungs from a recently deceased initial donor. The two living recipients did well and had an opportunity to meet six weeks after their simultaneous operations.
Another example of this situation occurs with a special form of liver transplant in which the recipient has familial amyloid polyneuropathy, a disease where the liver slowly produces a protein that damages other organs. The recipient's liver can then be transplanted into an older person for whom the effects of the disease will not necessarily contribute significantly to mortality.
This term also refers to a series of living donor transplants in which one donor donates to the highest recipient on the waiting list and the transplant center utilizes that donation to facilitate multiple transplants. These other transplants are otherwise impossible due to blood type or antibody barriers to transplantation. The "Good Samaritan" kidney is transplanted into one of the other recipients, whose donor in turn donates his or her kidney to an unrelated recipient. This method allows all organ recipients to get a transplant even if their living donor is not a match for them. This further benefits people below any of these recipients on waiting lists, as they move closer to the top of the list for a deceased-donor organ. Johns Hopkins Hospital in Baltimore and Northwestern University's Northwestern Memorial Hospital have received significant attention for pioneering transplants of this kind. In February 2012, the last link in a record 60-person domino chain of 30 kidney transplants was completed.
In May 2023, New York Presbyterian Morgan Stanley Children's Hospital performed the first domino heart transplantation in a baby, eventually saving two baby girls.
ABO-incompatible transplants
Because very young children (generally under 12 months, but often as old as 24 months) do not have a well-developed immune system, it is possible for them to receive organs from otherwise incompatible donors. This is known as ABO-incompatible (ABOi) transplantation. Graft survival and mortality rates are approximately the same between ABOi and ABO-compatible (ABOc) recipients. While the focus has been on infant heart transplants, the principles generally apply to other forms of solid organ transplantation.
The most important factors are that the recipient not have produced isohemagglutinins, and that they have low levels of T cell-independent antigens. United Network for Organ Sharing (UNOS) regulations allow for ABOi transplantation in children under two years of age if isohemagglutinin titers are 1:4 or below, and if there is no matching ABOc recipient. Studies have shown that the period under which a recipient may undergo ABOi transplantation may be prolonged by exposure to nonself A and B antigens. Furthermore, should the recipient (for example, type B-positive with a type AB-positive graft) require eventual retransplantation, the recipient may receive a new organ of either blood type.
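The eligibility rule described above amounts to a three-condition check. The following is a minimal illustrative sketch; the function name, argument encoding, and data layout are hypothetical, and real allocation decisions involve many additional clinical factors:

def abo_incompatible_eligible(age_months: int,
                              isohemagglutinin_titer: int,
                              abo_compatible_match_available: bool) -> bool:
    # Hypothetical sketch of the UNOS ABOi rule described above.
    # A titer of 1:N is passed as the integer N (so a titer of 1:4 becomes 4).
    return (age_months < 24                          # child under two years of age
            and isohemagglutinin_titer <= 4          # titer of 1:4 or below
            and not abo_compatible_match_available)  # no matching ABOc recipient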
Limited success has been achieved in ABO-incompatible heart transplants in adults, though this requires that the adult recipients have low levels of anti-A or anti-B antibodies. Renal transplantation is more successful, with similar long-term graft survival rates to ABOc transplants.
Transplantation in obese individuals
Until recently, people with obesity were not considered appropriate candidates for renal transplantation. In 2009, physicians at the University of Illinois Medical Center performed the first robotic renal transplantation in an obese recipient and have continued to transplant people with a body mass index over 35 using robotic surgery. As of January 2014, over 100 people who would otherwise have been turned down because of their weight had successfully been transplanted.
Impact of Human Herpesvirus 6 (HHV-6) Reactivation on Pediatric Liver Transplantation
Human herpesvirus 6 (HHV-6) reactivation is a notable concern in pediatric liver transplantation, potentially affecting both graft and recipient health. HHV-6 is prevalent in a substantial portion of the population, and liver transplant recipients with inherited chromosomally integrated HHV-6 (iciHHV-6) face heightened risks of complications such as graft-versus-host disease and allograft rejection. Recent case studies underscore the significance of HHV-6 reactivation, demonstrating its ability to infect liver grafts and affect recipient outcomes. Clinical management involves early detection, targeted antiviral therapy, and vigilant post-transplant monitoring; future research aims to optimize preventive measures and therapeutic interventions that mitigate the impact of HHV-6 reactivation on pediatric liver transplant outcomes.
Organs and tissues transplanted
Eye
Eyeball (First successful transplantation of a non-functional eye was performed in 2024)
Chest
Heart (deceased-donor only; porcine xenograft attempted)
Lung (deceased-donor and living-related lung transplantation)
Thymus
Abdomen
Kidney (deceased-donor and living-donor; porcine xenograft attempted)
Liver (deceased-donor, which enables donation of a whole liver; and living-donor, where each donor can provide up to 70% of a liver)
Pancreas (deceased-donor only; a very severe type of diabetes ensues if a live person's entire pancreas is removed)
Intestine (deceased-donor and living-donor; normally refers to the small intestine)
Stomach (deceased-donor only)
Uterus (deceased-donor only)
Testis (deceased-donor and living-donor)
Penis (deceased-donor only)
Tissues, cells and fluids
Hand (deceased-donor only), see first recipient Clint Hallam
Cornea (deceased-donor only), see the ophthalmologist Eduard Zirm
Skin, including face replant (autograft) and face transplant (extremely rare)
Islets of Langerhans (pancreas islet cells) (deceased-donor and living-donor)
Bone marrow or adult stem cell (living-donor and autograft)
Blood transfusion, whole blood or fractionated blood products (living-donor and autograft)
Blood vessels (autograft and deceased-donor)
Heart valve (deceased-donor, living-donor and xenograft [porcine/bovine])
Bone (deceased-donor and living-donor)
Indications for transplantation
Kidney transplantation is becoming increasingly common and is the preferred treatment for end-stage renal failure.
Liver transplantation is the only curative therapy for end-stage liver disease, and the liver is the second most frequently transplanted solid organ.
Pancreatic transplantation is a complex surgical procedure performed in patients with severe chronic diabetes, often in association with renal transplantation.
Heart transplantation is increasingly performed in patients with end-stage heart failure, most often related to ischemic and non-ischemic cardiomyopathies.
Complications
The main complications are procedural problems, infection, acute rejection, cardiac allograft vasculopathy and malignancy.
Non-vascular and vascular complications can occur in the initial post-transplant phase and at later stages. Overall, postoperative complications occur in approximately 12% to 25% of kidney transplant patients.
Following a transplant, recipients will be given lab draws, ultrasounds, and other tests to see if the transplanted organ is being accepted.
Types of donor
Organ donors may be living or may have died of brain death or circulatory death. Most deceased donors are those who have been pronounced brain dead. Brain death means the cessation of brain function, typically after an injury (either traumatic or pathological) to the brain, or after blood circulation to the brain has been cut off (drowning, suffocation, etc.). Breathing is maintained via artificial sources, which, in turn, maintains the heartbeat. Once brain death has been declared, the person can be considered for organ donation. Criteria for brain death vary. Because less than 3% of all deaths in the US are the result of brain death, the overwhelming majority of deaths are ineligible for organ donation, resulting in severe shortages. Patients who have been pronounced brain dead are nevertheless among the most common and ideal donors, since these donors are often young and healthy, yielding high-quality organs.
Organ donation is possible after cardiac death in some situations, primarily when the person is severely brain-injured and not expected to survive without artificial breathing and mechanical support. Independent of any decision to donate, a person's next-of-kin may decide to end artificial support. If the person is expected to expire within a short period of time after support is withdrawn, arrangements can be made to withdraw that support in an operating room to allow quick recovery of the organs after circulatory death has occurred.
Tissues may be recovered from donors who die of either brain or circulatory death. In general, tissues may be recovered from donors up to 24 hours past the cessation of heartbeat. In contrast to organs, most tissues (with the exception of corneas) can be preserved and stored for up to five years, meaning they can be "banked." Also, more than 60 grafts may be obtained from a single tissue donor. Because of these three factors (the ability to recover tissue from a non-heart-beating donor, the ability to bank tissue, and the number of grafts available from each donor), tissue transplants are much more common than organ transplants. The American Association of Tissue Banks estimates that more than one million tissue transplants take place in the United States each year.
Living donor
In living donors, the donor remains alive and donates a renewable tissue, cell, or fluid (e.g., blood, skin), or donates an organ or part of an organ in which the remaining organ can regenerate or take on the workload of the rest of the organ (primarily single kidney donation, partial donation of liver, lung lobe, small bowel). Regenerative medicine may one day allow for laboratory-grown organs, using a person's own cells via stem cells, or healthy cells extracted from the failing organs.
Deceased donor
Deceased donors (formerly cadaveric) are people who have been declared brain-dead and whose organs are kept viable by ventilators or other mechanical means until they can be excised for transplantation. Apart from brainstem-dead donors, who have formed the majority of deceased donors for the last 20 years, there is increasing use of donors after circulatory death (formerly non-heart-beating donors) to increase the potential pool of donors as demand for transplants continues to grow. Prior to the legal recognition of brain death in the 1980s, all deceased organ donors had died of circulatory death. These organs have inferior outcomes to organs from brain-dead donors. For instance, patients who underwent liver transplantation using donation-after-circulatory-death allografts have been shown to have significantly lower graft survival than those receiving donation-after-brain-death allografts, due to biliary complications and primary nonfunction. However, given the scarcity of suitable organs and the number of people who die waiting, any potentially suitable organ must be considered. Jurisdictions with medically assisted suicide may co-ordinate organ donations from that source.
Allocation of organs
In most countries there is a shortage of suitable organs for transplantation. Countries often have formal systems in place to manage the process of determining who is an organ donor and in what order organ recipients receive available organs.
The overwhelming majority of deceased-donor organs in the United States are allocated by federal contract to the Organ Procurement and Transplantation Network, held since it was created by the Organ Transplant Act of 1984 by the United Network for Organ Sharing, or UNOS. (UNOS does not handle donor cornea tissue; corneal donor tissue is usually handled by multiple eye banks with guidance from the Eye Bank Association of America (EBAA) and the Food and Drug Administration (FDA).) Individual regional organ procurement organizations, all members of the Organ Procurement and Transplantation Network, are responsible for the identification of suitable donors and collection of the donated organs. UNOS then allocates organs based on the method considered most fair by the leadership in the field. The allocation methodology varies somewhat by organ, and changes periodically. For example, liver allocation is based partially on MELD score (Model for End-Stage Liver Disease), an empirical score based on lab values indicative of the sickness of the person from liver disease. In 1984, the National Organ Transplant Act (NOTA) was passed; it gave way to the Organ Procurement and Transplantation Network, which maintains the organ registry and ensures equitable allocation of organs. The Scientific Registry of Transplant Recipients was also established to conduct ongoing studies into the evaluation and clinical status of organ transplants. In 2000, the Children's Health Act was passed, requiring NOTA to consider special issues around pediatric patients and organ allocation.
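For illustration, the original UNOS MELD calculation can be sketched in a few lines. This is a simplified rendering of the published formula, not the current allocation rule (policy has since been refined, for example with sodium-adjusted variants):

import math

def meld_score(creatinine_mg_dl: float, bilirubin_mg_dl: float, inr: float,
               on_dialysis: bool = False) -> int:
    # Sketch of the original UNOS MELD formula. Lab values below 1.0 are
    # floored at 1.0; creatinine is capped at 4.0 mg/dL (and set to 4.0 for
    # patients on dialysis); the final score is bounded to the range 6-40.
    cr = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    raw = 10 * (0.957 * math.log(cr) + 0.378 * math.log(bili)
                + 1.120 * math.log(inr) + 0.643)
    return max(6, min(40, round(raw)))

A sicker patient (higher bilirubin, INR, and creatinine) thus receives a higher score and, all else being equal, higher priority for a liver.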
An example of "line jumping" occurred in 2003 at Duke University when doctors attempted to correct an initially incorrect transplant. An American teenager received a heart-lung donation with the wrong blood type for her. She then received a second transplant even though she was then in such poor physical shape that she normally would not be considered a good candidate for a transplant.
In an April 2008 article in The Guardian, Steven Tsui, the head of the transplant team at Papworth Hospital in the UK, is quoted as raising the ethical issue of not holding out false hope. He stated, "Conventionally we would say if people's life expectancy was a year or less we would consider them a candidate for a heart transplant. But we also have to manage expectations. If we know that in an average year we will do 30 heart transplants, there is no point putting 60 people on our waiting list, because we know half of them will die and it's not right to give them false hope."
Experiencing somewhat increased popularity, but still very rare, is directed or targeted donation, in which the family of a deceased donor (often honoring the wishes of the deceased) requests an organ be given to a specific person, subverting the allocation system. In the United States, there are various lengths of waiting times due to the different availabilities of organs in different UNOS regions. In other countries such as the UK, only medical factors and the position on the waiting list can affect who receives the organ.
One of the more publicized cases of this type was the 1994 Chester and Patti Szuber transplant. This was the first time that a parent had received a heart donated by one of their own children. Although the decision to accept the heart from his recently killed child was not easy, the Szuber family agreed that giving Patti's heart to her father would have been something that she would have wanted.
Access to organ transplantation is one reason for the growth of medical tourism.
Reasons for donation and ethical issues
Living related donors
Living related donors donate to family members or friends in whom they have an emotional investment. The risk of surgery is offset by the psychological benefit of not losing someone related to them, or not seeing them suffer the ill effects of waiting on a list.
Paired exchange
A "paired-exchange" is a technique of matching willing living donors to compatible recipients using serotyping. For example, a spouse may be willing to donate a kidney to their partner but cannot since there is not a biological match. The willing spouse's kidney is donated to a matching recipient who also has an incompatible but willing spouse. The second donor must match the first recipient to complete the pair exchange. Typically the surgeries are scheduled simultaneously in case one of the donors decides to back out and the couples are kept anonymous from each other until after the transplant. Paired-donor exchange, led by work in the New England Program for Kidney Exchange as well as at Johns Hopkins University and the Ohio organ procurement organizations, may more efficiently allocate organs and lead to more transplants.
Paired exchange programs were popularized in the New England Journal of Medicine article "Ethics of a paired-kidney-exchange program" in 1997 by L.F. Ross. It was also proposed by Felix T. Rapaport in 1986 as part of his initial proposals for live-donor transplants in "The case for a living emotionally related international kidney donor exchange registry" in Transplantation Proceedings. A paired exchange is the simplest case of a much larger exchange registry program in which willing donors are matched with any number of compatible recipients. Transplant exchange programs had been suggested as early as 1970: "A cooperative kidney typing and exchange program."
The first paired exchange transplant in the US was in 2001 at Johns Hopkins Hospital. The first complex multihospital kidney exchange involving 12 people was performed in February 2009 by The Johns Hopkins Hospital, Barnes-Jewish Hospital in St. Louis and Integris Baptist Medical Center in Oklahoma City. Another 12-person multihospital kidney exchange was performed four weeks later by Saint Barnabas Medical Center in Livingston, New Jersey, Newark Beth Israel Medical Center and New York-Presbyterian Hospital. Surgical teams led by Johns Hopkins continue to pioneer this field with more complex chains of exchange, such as an eight-way multihospital kidney exchange. In December 2009, a 13-organ, 13-recipient matched kidney exchange took place, coordinated through Georgetown University Hospital and Washington Hospital Center, Washington, DC.
Good Samaritan
Good Samaritan or "altruistic" donation is giving a donation to someone that has no prior affiliation with the donor. The idea of altruistic donation is to give with no interest of personal gain, it is out of pure selflessness. On the other hand, the current allocation system does not assess a donor's motive, so altruistic donation is not a requirement. Some people choose to do this out of a personal need to donate. Some donate to the next person on the list; others use some method of choosing a recipient based on criteria important to them. Websites are being developed that facilitate such donation. Over half of the members of the Jesus Christians, an Australian religious group, have donated kidneys in such a fashion.
Financial compensation
Monetary compensation for organ donors, in the form of reimbursement for out-of-pocket expenses, has been legalised in Australia and, strictly for kidney transplants, in Singapore (minimal reimbursement is offered for other forms of organ harvesting there). Kidney disease organizations in both countries have expressed their support.
In compensated donation, donors get money or other compensation in exchange for their organs. This practice is common in some parts of the world, whether legal or not, and is one of the many factors driving medical tourism.
In the illegal black market, donors may not get sufficient post-operative care, the price of a kidney may be above $160,000, middlemen take most of the money, the operation is more dangerous to both the donor and the receiver, and the receiver often gets hepatitis or HIV. In the legal markets of Iran, the price of a kidney is $2,000 to $4,000.
In the article "Introducing Incentives in the Market for Live and Cadaveric Organ Donations", Gary Becker and Julio Elias argued that a free market could help solve the problem of scarcity in organ transplants. Their economic modeling estimated the price of human kidneys ($15,000) and human livers ($32,000).
In the United States, the National Organ Transplant Act of 1984 made organ sales illegal. In the United Kingdom, the Human Organ Transplants Act 1989 first made organ sales illegal and has been superseded by the Human Tissue Act 2004. In 2007, two major European conferences recommended against the sale of organs. The recent development of websites and personal advertisements for organs among listed candidates has raised the stakes in the selling of organs and has also sparked significant ethical debates over directed donation, "good-Samaritan" donation, and the current US organ allocation policy. Bioethicist Jacob M. Appel has argued that organ solicitation on billboards and the internet may actually increase the overall supply of organs.
In an experimental survey, Elias, Lacetera and Macis (2019) find that preferences for compensation for kidney donors have strong moral foundations; participants in the experiment especially reject direct payments by patients, which they find would violate principles of fairness.
Countries take different approaches to organ donation, such as opt-out consent systems and advertising campaigns encouraging people to donate. Even where such laws have been implemented, donation is not forced upon anyone; it remains an individual decision.
Two books, Kidney for Sale By Owner by Mark Cherry (Georgetown University Press, 2005) and Stakes and Kidneys: Why Markets in Human Body Parts are Morally Imperative by James Stacey Taylor (Ashgate Press, 2005), advocate using markets to increase the supply of organs available for transplantation.
In a 2004 journal article, economist Alex Tabarrok argues that allowing organ sales and eliminating organ donor lists will increase supply, lower costs and diminish social anxiety towards organ markets.
Iran has had a legal market for kidneys since 1988. The donor is paid approximately US$1200 by the government and also usually receives additional funds from either the recipient or local charities. The Economist and the Ayn Rand Institute approve and advocate a legal market elsewhere. They argued that if 0.06% of Americans between 19 and 65 were to sell one kidney, the national waiting list would disappear (which, the Economist wrote, happened in Iran). The Economist argued that donating kidneys is no more risky than surrogate motherhood, which can be done legally for pay in most countries.
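The 0.06% figure can be sanity-checked with rough arithmetic; the population number below is an assumed round figure used only for illustration, not an official statistic:

# Plausibility check of the Economist's 0.06% claim (assumed population figure).
us_adults_19_to_65 = 190_000_000            # rough illustrative estimate
potential_sellers = us_adults_19_to_65 * 0.0006
print(int(potential_sellers))               # ~114,000, on the order of the US kidney waiting list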
In Pakistan, 40 percent to 50 percent of the residents of some villages have only one kidney because they have sold the other for a transplant into a wealthy person, probably from another country, said Dr. Farhat Moazam of Pakistan, at a World Health Organization conference. Pakistani donors are offered $2,500 for a kidney but receive only about half of that because middlemen take so much. In Chennai, southern India, poor fishermen and their families sold kidneys after their livelihoods were destroyed by the Indian Ocean tsunami on 26 December 2004. About 100 people, mostly women, sold their kidneys for 40,000–60,000 rupees ($900–1,350). Thilakavathy Agatheesh, 30, who sold a kidney in May 2005 for 40,000 rupees said, "I used to earn some money selling fish but now the post-surgery stomach cramps prevent me from going to work." Most kidney sellers say that selling their kidney was a mistake.
In Cyprus in 2010, police closed a fertility clinic under charges of trafficking in human eggs. The Petra Clinic, as it was known locally, brought in women from Ukraine and Russia for egg harvesting and sold the genetic material to foreign fertility tourists. This sort of reproductive trafficking violates laws in the European Union. In 2010, Scott Carney, reporting for the Pulitzer Center on Crisis Reporting and the magazine Fast Company, explored illicit fertility networks in Spain, the United States and Israel.
Forced donation
There have been concerns that certain authorities are harvesting organs from people deemed undesirable, such as prison populations. The World Medical Association stated that prisoners and other individuals in custody are not in a position to give consent freely, and therefore their organs must not be used for transplantation.
According to former Chinese Deputy Minister of Health Huang Jiefu, the practice of transplanting organs from executed prisoners is still occurring. World Journal reported that Huang had admitted approximately 95% of all organs used for transplantation were from executed prisoners. The lack of a public organ donation program in China is used as a justification for this practice. In July 2006, the Kilgour-Matas report stated, "the source of 41,500 transplants for the six-year period 2000 to 2005 is unexplained" and "we believe that there has been and continues today to be large scale organ seizures from unwilling Falun Gong practitioners". Investigative journalist Ethan Gutmann estimates 65,000 Falun Gong practitioners were killed for their organs from 2000 to 2008. However, 2016 reports covering the 15-year period since the persecution of Falun Gong began put the death toll at 150,000 to 1.5 million. In December 2006, after not getting assurances from the Chinese government about allegations relating to Chinese prisoners, the two major organ transplant hospitals in Queensland, Australia stopped transplantation training for Chinese surgeons and banned joint research programs into organ transplantation with China.
In May 2008, two United Nations Special Rapporteurs reiterated their requests for "the Chinese government to fully explain the allegation of taking vital organs from Falun Gong practitioners and the source of organs for the sudden increase in organ transplants that has been going on in China since the year 2000". People in other parts of the world are responding to this availability of organs, and a number of individuals (including US and Japanese citizens) have elected to travel to China or India as medical tourists to receive organ transplants which may have been sourced in what might elsewhere be considered an unethical manner.
Organ transplantation by region
Some estimates of the number of transplants performed in various regions of the world have been derived from the Global Burden of Disease Study.
According to the Council of Europe, Spain through the Spanish Transplant Organization shows the highest worldwide rate of 35.1 donors per million population in 2005 and 33.8 in 2006. In 2011, it was 35.3.
In addition to citizens waiting for organ transplants in the US and other developed nations, there are long waiting lists in the rest of the world. More than 2 million people need organ transplants in China, 50,000 are waiting in Latin America (90% of whom are waiting for kidneys), and thousands more wait in the less documented continent of Africa. Donor bases vary in developing nations.
In Latin America the donor rate is 40–100 per million per year, similar to that of developed countries. However, in Uruguay, Cuba, and Chile, 90% of organ transplants came from cadaveric donors. Cadaveric donors represent 35% of donors in Saudi Arabia.
There is a continuous effort to increase the utilization of cadaveric donors in Asia. However, the popularity of living single-kidney donors in India yields a cadaveric donor prevalence of less than 1 per million population. India has a very low donation rate compared to the world average, despite the fact that it ranks third among the countries with the largest transplantation activity.
Traditionally, Muslims believe body desecration in life or death to be forbidden, and thus many reject organ transplants. However, most Muslim authorities nowadays accept the practice if another life will be saved. For example, in Singapore, whose cosmopolitan populace includes Muslims, a special governing body, the Majlis Ugama Islam Singapura, was formed to look after the interests of Singapore's Muslim community over issues that include burial arrangements.
Organ transplantation in Singapore is generally overseen by the National Organ Transplant Unit of the Ministry of Health (Singapore). Owing to a diversity of mindsets and religious viewpoints, Muslims on the island are generally not expected to donate their organs even upon death; nonetheless, youth in Singapore are educated on the Human Organ Transplant Act at the age of 18, around the age of military conscription. The Organ Donor Registry maintains two types of information: firstly, people of Singapore who donate their organs or bodies for transplantation, research or education upon their death, under the Medical (Therapy, Education and Research) Act (MTERA); and secondly, people who object to the removal of kidneys, liver, heart and corneas upon death for the purpose of transplantation, under the Human Organ Transplant Act (HOTA). The Live On social awareness movement was also formed to educate Singaporeans on organ donation.
Organ transplantation in China has taken place since the 1960s, and China has one of the largest transplant programmes in the world, peaking at over 13,000 transplants a year by 2004. Organ donation, however, is against Chinese tradition and culture, and involuntary organ donation is illegal under Chinese law. China's transplant programme attracted the attention of international news media in the 1990s due to ethical concerns about the organs and tissue removed from the corpses of executed criminals being commercially traded. In 2006, allegations emerged that about 41,500 organs had been sourced from Falun Gong practitioners in China since 2000.
With regard to organ transplantation in Israel, there is a severe organ shortage due to religious objections by some rabbis who oppose all organ donations and others who advocate that a rabbi participate in all decision making regarding a particular donor. One-third of all heart transplants performed on Israelis are done in China; others are done in Europe. Dr. Jacob Lavee, head of the heart-transplant unit, Sheba Medical Center, Tel Aviv, believes that "transplant tourism" is unethical and Israeli insurers should not pay for it. The organization HODS (Halachic Organ Donor Society) is working to increase knowledge and participation in organ donation among Jews throughout the world.
Transplantation rates also differ based on race, sex, and income. A study done with people beginning long term dialysis showed that the sociodemographic barriers to renal transplantation present themselves even before patients are on the transplant list. For example, different groups express definite interest and complete pretransplant workup at different rates. Previous efforts to create fair transplantation policies had focused on people currently on the transplantation waiting list.
In the United States, nearly 35,000 organ transplants were done in 2017, a 3.4 percent increase over 2016. About 18 percent of these were from living donors – people who gave one kidney or a part of their liver to someone else. But 115,000 Americans remain on waiting lists for organ transplants. By September 2022, the US had reached one million organ transplants overall.
History
Successful human allotransplants have a relatively long history; the operative skills existed long before the necessities for post-operative survival were discovered. Rejection and the side effects of preventing rejection (especially infection and nephropathy) were, are, and may always be the key problem.
Several apocryphal accounts of transplants exist well prior to the scientific understanding and advancements that would have been necessary for them to have actually occurred. The Chinese physician Pien Chi'ao reportedly exchanged hearts between a man of strong spirit but weak will and a man of weak spirit but strong will in an attempt to achieve balance in each man. Roman Catholic accounts report the 3rd-century saints Damian and Cosmas as replacing the gangrenous or cancerous leg of the Roman deacon Justinian with the leg of a recently deceased Ethiopian. Most accounts have the saints performing the transplant in the 4th century, many decades after their deaths; some accounts have them only instructing living surgeons who performed the procedure.
The more likely accounts of early transplants deal with skin transplantation. The first reasonable account is of the Indian surgeon Sushruta in the 2nd century BC, who used autografted skin transplantation in nose reconstruction, a rhinoplasty. Success or failure of these procedures is not well documented. Centuries later, the Italian surgeon Gasparo Tagliacozzi performed successful skin autografts; he also failed consistently with allografts, offering the first suggestion of rejection centuries before that mechanism could possibly be understood. He attributed it to the "force and power of individuality" in his 1596 work De Curtorum Chirurgia per Insitionem.
The first successful corneal allograft transplant was performed in 1837 in a gazelle model; the first successful human corneal transplant, a keratoplastic operation, was performed by Eduard Zirm at Olomouc Eye Clinic, now in the Czech Republic, in 1905.
The first transplant in the modern sense – the implantation of organ tissue in order to replace an organ function – was a thyroid transplant in 1883. It was performed by the Swiss surgeon and later Nobel laureate Theodor Kocher. In the preceding decades, Kocher had perfected the removal of excess thyroid tissue in cases of goiter to such an extent that he was able to remove the whole organ without the person dying from the operation. Kocher carried out the total removal of the organ in some cases as a measure to prevent recurrent goiter. By 1883, the surgeon had noticed that complete removal of the organ led to a complex of particular symptoms that we today associate with a lack of thyroid hormone. Kocher reversed these symptoms by implanting thyroid tissue into these people, thus performing the first organ transplant. In the following years, Kocher and other surgeons used thyroid transplantation also to treat thyroid deficiency that appeared spontaneously, without a preceding organ removal.
Thyroid transplantation became the model for a whole new therapeutic strategy: organ transplantation. After the example of the thyroid, other organs were transplanted in the decades around 1900. Some of these transplants were done in animals for purposes of research, where organ removal and transplantation became a successful strategy for investigating the function of organs. Kocher was awarded his Nobel Prize in 1909 for the discovery of the function of the thyroid gland. At the same time, organs were also transplanted for treating diseases in humans. The thyroid gland became the model for transplants of adrenal and parathyroid glands, pancreas, ovary, testicles and kidney. By 1900, the idea that one can successfully treat internal diseases by replacing a failed organ through transplantation had been generally accepted. Pioneering work in the surgical technique of transplantation was done in the early 1900s by the French surgeon Alexis Carrel, with Charles Guthrie, on the transplantation of arteries and veins. Their skillful anastomosis operations and new suturing techniques laid the groundwork for later transplant surgery and won Carrel the 1912 Nobel Prize in Physiology or Medicine. From 1902, Carrel performed transplant experiments on dogs. Surgically successful in moving kidneys, hearts, and spleens, he was one of the first to identify the problem of rejection, which remained insurmountable for decades. The discovery of transplant immunity by the German surgeon Georg Schöne, various strategies of matching donor and recipient, and the use of different agents for immune suppression did not result in substantial improvement, so that organ transplantation was largely abandoned after World War I.
In 1954, the first successful transplant of any organ was performed at the Brigham & Women's Hospital in Boston. The surgery was done by the American surgeon Dr. Joseph Murray, who received the Nobel Prize in Medicine for his work. The success of this transplant was due largely to the family relation between the recipient, Richard Herrick of Maine, and his donor and identical twin brother, Ronald. Richard Herrick was in the Navy and became severely ill with acute renal failure. His brother Ronald donated his kidney to Richard, and Richard lived on for another eight years. Prior to this case, transplant recipients did not survive for more than thirty days. Because the twins were genetically identical, there was no need for anti-rejection medications, which were not yet known; the case shed light on the cause of rejection and the possibility of anti-rejection medicine.
Major steps in skin transplantation occurred during the First World War, notably in the work of Harold Gillies at Aldershot, United Kingdom. Among his advances was the tubed pedicle graft, which maintained a flesh connection from the donor site until the graft established its own blood flow. Gillies' assistant, Archibald McIndoe, carried on the work into the Second World War as reconstructive surgery. In 1962, the first successful replantation surgery was performed – re-attaching a severed limb and restoring (limited) function and feeling.
Transplant of a single gonad (testis) from a living donor was carried out in early July 1926 in Zaječar, Serbia, by a Russian émigré surgeon Dr. Peter Vasil'evič Kolesnikov. The donor was a convicted murderer, one Ilija Krajan, whose death sentence was commuted to 20 years imprisonment, and he was led to believe that it was done because he had donated his testis to an elderly medical doctor. Both the donor and the receiver survived, but charges were brought in a court of law by the public prosecutor against Dr. Kolesnikov, not for performing the operation, but for lying to the donor.
The first attempted human deceased-donor transplant was performed by the Ukrainian surgeon Yurii Voronoy in the 1930s, but it failed due to ischemia. Joseph Murray and J. Hartwell Harrison performed the first successful transplant, a kidney transplant between identical twins, in 1954; no immunosuppression was necessary for genetically identical individuals.
In the late 1940s British surgeon Peter Medawar, working for the National Institute for Medical Research, improved the understanding of rejection. Identifying the immune reactions in 1951, Medawar suggested that immunosuppressive drugs could be used. Cortisone had been recently discovered and the more effective azathioprine was identified in 1959, but it was not until the discovery of cyclosporine in 1970 that transplant surgery found a sufficiently powerful immunosuppressive.
In June 1963, James Hardy at the University of Mississippi Medical Center in Jackson, Mississippi performed a successful deceased-donor lung transplant into a patient with emphysema and lung cancer. The patient, John Russell, survived for eighteen days before dying of kidney failure.
Thomas Starzl of Denver attempted a liver transplant in the same year, but he was not successful until 1967.
In the early 1960s, and prior to long-term dialysis becoming available, Keith Reemtsma and his colleagues at Tulane University in New Orleans attempted transplants of chimpanzee kidneys into 13 human patients. Most of these patients lived only one to two months. However, in 1964, a 23-year-old woman lived for nine months and even returned to her job as a school teacher until she suddenly collapsed and died. It was assumed that she died from an acute electrolyte disturbance. At autopsy, the kidneys had not been rejected, nor was there any other obvious cause of death. One source states this patient died from pneumonia. Tom Starzl and his team in Colorado used baboon kidneys in six human patients who lived one or two months, but with no longer-term survivors. Others in the United States and France had limited experience.
The heart was a major prize for transplant surgeons. But over and above rejection issues, the heart deteriorates within minutes of death, so any operation would have to be performed at great speed. The development of the heart-lung machine was also needed. Lung pioneer James Hardy was prepared to attempt a human heart transplant in 1964, but when a premature failure of comatose Boyd Rush's heart caught Hardy with no human donor, he used a chimpanzee heart, which beat in his patient's chest for approximately one hour and then failed. The first partial success was achieved on 3 December 1967, when Christiaan Barnard of Cape Town, South Africa, performed the world's first human-to-human heart transplant with patient Louis Washkansky as the recipient. Washkansky survived for eighteen days amid what many saw as a distasteful publicity circus. The media interest prompted a spate of heart transplants. Over a hundred were performed in 1968–1969, but almost all the people died within 60 days. Barnard's second patient, Philip Blaiberg, lived for 19 months.
It was the advent of cyclosporine that altered transplants from research surgery to life-saving treatment. In 1968 surgical pioneer Denton Cooley performed 17 transplants, including the first heart-lung transplant. Fourteen of his patients were dead within six months. By 1984 two-thirds of all heart transplant patients survived for five years or more. With organ transplants becoming commonplace, limited only by donors, surgeons moved on to riskier fields, including multiple-organ transplants on humans and whole-body transplant research on animals. On 9 March 1981, the first successful heart-lung transplant took place at Stanford University Hospital. The head surgeon, Bruce Reitz, credited the patient's recovery to cyclosporine.
As the rising success rate of transplants and modern immunosuppression make transplants more common, the need for more organs has become critical. Transplants from living donors, especially relatives, have become increasingly common. Additionally, there is substantive research into xenotransplantation, or transgenic organs; although these forms of transplant are not yet being used in humans, clinical trials involving the use of specific cell types have been conducted with promising results, such as using porcine islets of Langerhans to treat type 1 diabetes. However, there are still many problems that would need to be solved before they would be feasible options in people requiring transplants.
Recently, researchers have been looking into means of reducing the general burden of immunosuppression. Common approaches include avoidance of steroids, reduced exposure to calcineurin inhibitors, increased vaccination coverage for vaccine-preventable diseases, and other means of weaning drugs based on patient outcome and function. While short-term outcomes appear promising, long-term outcomes are still unknown; in general, reduced immunosuppression increases the risk of rejection and decreases the risk of infection. The risk of early rejection is increased if corticosteroid immunosuppression is avoided or withdrawn after renal transplantation.
Many other new drugs are under development for transplantation.
The emerging field of regenerative medicine promises to solve the problem of organ transplant rejection by regrowing organs in the lab, using a person's own cells (stem cells or healthy cells extracted from the donor site).
Timeline of transplants
1869: First skin autograft-transplantation by Carl Bunger, who documented the first modern successful skin graft on a person. Bunger repaired a nose destroyed by syphilis by grafting flesh from the inner thigh to the nose, in a method reminiscent of Sushruta's.
1905: First successful cornea transplant by Eduard Zirm (Czech Republic)
1908: First skin allograft-transplantation of skin from a donor to a recipient (Switzerland)
1931: First uterus transplantation (Lili Elbe).
1950: First successful kidney transplant by Dr. Richard H. Lawler (Chicago, US)
1954: First living related kidney transplant (identical twins) (US)
1954: Brazil's first successful corneal transplant, the first liver (Brazil)
1955: First heart valve allograft into descending aorta (Canada)
1963: First successful lung transplant by James D. Hardy with patient living 18 days (US)
1964: James D. Hardy attempts heart transplant using chimpanzee heart (US)
1964: Human patient lived nine months with chimpanzee kidneys, twelve other human patients only lived one to two months, Keith Reemtsma and team (New Orleans, US)
1965: Spain's first successful kidney transplant at Hospital Clinic de Barcelona, Catalonia, Spain, by a surgeon team led by Josep Maria Gil-Vernet and Antoni Caralps. The patient, a woman, lived a very long life after the procedure.
1965: Australia's first successful (living) kidney transplant (Queen Elizabeth Hospital, SA, Australia)
1966: First successful pancreas transplant by Richard C. Lillehei and William Kelly (Minnesota, US)
1967: First successful liver transplant by Thomas Starzl (Denver, US)
1967: First successful heart transplant by Christiaan Barnard (Cape Town, South Africa)
1978: Use of ciclosporin in clinical renal transplants
1981: Use of monoclonal antibodies to lymphocytes in organ grafting
1981: First successful heart/lung transplant by Bruce Reitz (Stanford, US)
1983: First successful lung lobe transplant by Joel Cooper at the Toronto General Hospital (Toronto, Canada)
1984: First successful double organ transplant by Thomas Starzl and Henry T. Bahnson (Pittsburgh, US)
1986: First successful double-lung transplant (Ann Harrison) by Joel Cooper at the Toronto General Hospital (Toronto, Canada)
1990: First successful adult segmental living-related liver transplant by Mehmet Haberal (Ankara, Turkey)
1992: First successful combined liver-kidney transplantation from a living-related donor by Mehmet Haberal (Ankara, Turkey)
1995: First successful laparoscopic live-donor nephrectomy by Lloyd Ratner and Louis Kavoussi (Baltimore, US)
1997: First successful allogeneic vascularized transplantation of a fresh and perfused human knee joint by Gunther O. Hofmann
1997: Illinois' first living donor kidney-pancreas transplant and first robotic living donor pancreatectomy in the US (University of Illinois Medical Center)
1998: First successful live-donor partial pancreas transplant by David Sutherland (Minnesota, US)
1998: First successful hand transplant by Dr. Jean-Michel Dubernard (Lyon, France)
1998: United States' first adult-to-adult living donor liver transplant (University of Illinois Medical Center)
1999: First successful tissue engineered bladder transplanted by Anthony Atala (Boston Children's Hospital, US)
2000: First robotic donor nephrectomy for a living-donor kidney transplant in the world (University of Illinois Medical Center)
2004: First liver and small bowel transplants from the same living donor into the same recipient in the world (University of Illinois Medical Center)
2005: First successful ovarian transplant by Dr. P. N. Mhatre (Wadia Hospital, Mumbai, India)
2005: First successful partial face transplant (France)
2005: First robotic hepatectomy in the United States (University of Illinois Medical Center)
2006: Illinois' first paired donation for ABO-incompatible kidney transplant (University of Illinois Medical Center)
2006: First jaw transplant to combine donor jaw with bone marrow from the patient, by Eric M. Genden (Mount Sinai Hospital, New York City, US)
2006: First successful human penis transplant (later reversed after 15 days due to 44-year-old recipient's wife's psychological rejection) (Guangzhou, China)
2008: First successful complete full double arm transplant by Edgar Biemer, Christoph Höhnke and Manfred Stangl (Technical University of Munich, Germany)
2008: First baby born from a transplanted ovary. The transplant was carried out by Dr Sherman Silber at the Infertility Centre of St Louis in Missouri. The donor was her twin sister.
2008: First transplant of a human windpipe using a patient's own stem cells, by Paolo Macchiarini (Barcelona, Spain)
2008: First successful transplantation of near total area (80%) of face, (including palate, nose, cheeks, and eyelid) by Maria Siemionow (Cleveland Clinic, US)
2009: World's first robotic kidney transplant in an obese patient (University of Illinois Medical Center)
2010: First full facial transplant by Dr. Joan Pere Barret and team (Hospital Universitari Vall d'Hebron on 26 July 2010, in Barcelona, Spain)
2011: First double leg transplant by Dr. Cavadas and team (Valencia's Hospital, La Fe, Spain)
2012: First simultaneous robotic bariatric surgery (sleeve gastrectomy) and kidney transplantation (University of Illinois at Chicago)
2012: First robotic alloparathyroid transplant (University of Illinois Chicago)
2013: First successful entire face transplantation as an urgent life-saving surgery at Maria Skłodowska-Curie Institute of Oncology branch in Gliwice, Poland.
2014: First successful uterine transplant resulting in live birth (Sweden)
2014: First successful penis transplant. (South Africa)
2014: First neonatal organ transplant. (UK)
2018: Skin gun invented, which takes a small amount of healthy skin, grows it in a lab, and sprays it onto burnt skin; the skin then heals in days instead of months and does not scar.
2019: First drone delivery of a donated kidney, which was then successfully transplanted into a patient. (US)
2021: First transplant of both arms and shoulders performed on an Icelandic patient at the Édouard Herriot Hospital. (FR)
2022: First successful heart transplant from a pig to a human patient. (US) The recipient later died as the pig's heart was infected with porcine cytomegalovirus.
Society and culture
Success rates
Since 2000, there have been approximately 2,200 lung transplants performed each year worldwide. From 2000 to 2006, the median survival period for lung transplant patients has been 5.5 years.
Comparative costs
In China, a kidney transplant operation runs for around $70,000, liver for $160,000, and heart for $120,000.
Safety
In the United States, tissue transplants are regulated by the US Food and Drug Administration (FDA) which sets strict regulations on the safety of the transplants, primarily aimed at the prevention of the spread of communicable disease. Regulations include criteria for donor screening and testing as well as strict regulations on the processing and distribution of tissue grafts. Organ transplants are not regulated by the FDA. It is essential that the HLA complexes of both the donor and recipient be as closely matched as possible to prevent graft rejection.
In November 2007, the CDC reported the first-ever case of HIV and Hepatitis C being simultaneously transferred through an organ transplant. The donor was a 38-year-old male, considered "high-risk" by donation organizations, and his organs transmitted HIV and Hepatitis C to four organ recipients. Experts say that the reason the diseases did not show up on screening tests is probably because they were contracted within three weeks before the donor's death, so antibodies would not have existed in high enough numbers to detect. The crisis has caused many to call for more sensitive screening tests, which could pick up antibodies sooner. Currently, the screens cannot detect the small number of antibodies produced in HIV infections within the last 90 days or Hepatitis C infections within the last 18–21 days before a donation is made.
Nucleic acid testing is now being done by many organ procurement organizations and is able to detect HIV and hepatitis C directly within seven to ten days of exposure to the virus.
Transplant laws
Both developing and developed countries have forged various policies to try to increase the safety and availability of organ transplants to their citizens. However, whilst potential recipients in developing countries may mirror their more developed counterparts in desperation, potential donors in developing countries do not. The Indian government has had difficulty tracking the flourishing organ black market in the country, but in recent times it has amended its organ transplant law to make punishment more stringent for commercial dealings in organs. It has also included new clauses in the law to support deceased organ donation, such as making it mandatory to request organ donation in cases of brain death. Other countries victimized by the illegal organ trade have also implemented legislative reactions. Moldova has made international adoption illegal for fear of organ traffickers. China has made the selling of organs illegal as of July 2006 and claims that all prisoner organ donors have filed consent. However, doctors in other countries, such as the United Kingdom, have accused China of abusing its high capital punishment rate. Despite these efforts, illegal organ trafficking continues to thrive; it can be attributed to corruption in healthcare systems, which has been traced as high up as the doctors themselves in China and Ukraine, and to the blind eye that economically strained governments and health care programs must sometimes turn to organ trafficking. Some organs are also shipped to Uganda and the Netherlands.
Starting on 1 May 2007, doctors involved in the commercial trade of organs face fines and suspensions in China. Only a few certified hospitals are allowed to perform organ transplants, in order to curb illegal transplants. Harvesting organs without a donor's consent was also deemed a crime.
On 27 June 2008, an Indonesian, Sulaiman Damanik, 26, pleaded guilty in a Singapore court to selling his kidney to CK Tang's executive chair, Tang Wee Sung, 55, for 150 million rupiah (S$22,200). The Transplant Ethics Committee must approve living donor kidney transplants. Organ trading is banned in Singapore and in many other countries to prevent the exploitation of "poor and socially disadvantaged donors who are unable to make informed choices and suffer potential medical risks." Toni, 27, the other accused, donated a kidney to an Indonesian patient in March, alleging he was the patient's adopted son, and was paid 186 million rupiah (US$20,200). Upon sentencing, each faced 12 months in jail or a 10,000 Singapore dollar (US$7,600) fine.
In an article appearing in the April 2004 issue of Econ Journal Watch, economist Alex Tabarrok examined the impact of direct consent laws on transplant organ availability. Tabarrok found that social pressures resisting the use of transplant organs decreased over time as the opportunity for individual decisions increased. Tabarrok concluded his study by suggesting that the gradual elimination of organ donation restrictions and a move to a free market in organ sales would increase the supply of organs and encourage broader social acceptance of organ donation as a practice.
In the United States, 24 states have no law preventing discrimination against potential organ recipients based on cognitive ability, including children. A 2008 study found that, of the transplant centers surveyed in those states, 85 percent considered disability when deciding the transplant list and 44 percent would deny an organ transplant to a child with a neurodevelopmental disability.
Ethical concerns
The existence and distribution of organ transplantation procedures in developing countries, while almost always beneficial to those receiving them, raise many ethical concerns. Both the source and method of obtaining the organ to transplant are major ethical issues to consider, as well as the notion of distributive justice. The World Health Organization argues that transplantations promote health, but the notion of "transplantation tourism" has the potential to violate human rights or exploit the poor, to have unintended health consequences, and to provide unequal access to services, all of which ultimately may cause harm. Regardless of the "gift of life", in the context of developing countries, this might be coercive. The practice of coercion could be considered exploitative of the poor population, violating basic human rights according to Articles 3 and 4 of the Universal Declaration of Human Rights. There is also a powerful opposing view, that trade in organs, if properly and effectively regulated to ensure that the seller is fully informed of all the consequences of donation, is a mutually beneficial transaction between two consenting adults, and that prohibiting it would itself be a violation of Articles 3 and 29 of the Universal Declaration of Human Rights.
Even within developed countries there is concern that enthusiasm for increasing the supply of organs may trample on respect for the right to life. The question is made even more complicated by the fact that the "irreversibility" criterion for legal death cannot be adequately defined and can easily change with changing technology.
Artificial organ transplantation
Surgeons, notably Paolo Macchiarini, in Sweden performed the first implantation of a synthetic trachea in July 2011, for a 36-year-old patient who had cancer. Stem cells taken from the patient's hip were treated with growth factors and incubated on a plastic replica of his natural trachea.
According to information uncovered by the Swedish documentary "Dokument Inifrån: Experimenten" (Swedish: "Documents from the Inside: The Experiments"), the patient, Andemariam, went on to develop an increasingly severe and eventually bloody cough and died, intubated, in the hospital. At that point, as determined by autopsy, 90% of the synthetic windpipe had come loose. He allegedly made several trips to see Macchiarini for his complications, and at one point had surgery again to have his synthetic windpipe replaced, but Macchiarini was notoriously difficult to get an appointment with. According to the autopsy, the old synthetic windpipe did not appear to have been replaced.
Macchiarini's academic credentials have been called into question, and he has been accused of research misconduct.
Left ventricular assist devices are often used as a "bridge" to provide additional time while a patient waits for a transplant. For example, former US vice-president Dick Cheney had such a device implanted in 2010 and then received a heart transplant 20 months later, in 2012. In 2012, about 3,000 ventricular assist devices were inserted in the United States, compared to approximately 2,500 heart transplants. The use of airbags in cars, as well as greater use of helmets by bicyclists and skiers, has reduced the number of people with fatal head injuries, which are a common source of donor hearts.
Research
An early-stage medical laboratory and research company, Organovo, designs and develops functional, three-dimensional human tissue for medical research and therapeutic applications. The company utilizes its NovoGen MMX Bioprinter for 3D bioprinting. Organovo anticipates that the bioprinting of human tissues will accelerate the preclinical drug testing and discovery process, enabling treatments to be created more quickly and at lower cost. Additionally, Organovo has long-term expectations that this technology could be suitable for surgical therapy and transplantation.
A further area of active research is concerned with improving and assessing organs during their preservation. Various techniques have emerged which show great promise, most of which involve perfusing the organ under either hypothermic (4–10 °C) or normothermic (37 °C) conditions. All of these add additional cost and logistical complexity to the organ retrieval, preservation and transplant process, but early results suggest it may well be worth it. Hypothermic perfusion is in clinical use for transplantation of kidneys and liver whilst normothermic perfusion has been used effectively in the heart, lung, liver and, less so, in the kidney.
Another area of research being explored is the use of genetically engineered animals for transplants. Similar to human organ donors, scientists have developed a genetically engineered pig with the aim of reducing human patients' rejection of pig organs. This is currently at the basic research stage, but it shows great promise in alleviating the long waiting lists for organ transplants, as the number of people in need of transplants outweighs the number of organs donated. Trials are being conducted, but pig organ transplants will not enter a clinical trial phase until the potential disease transfer from pigs to humans can be safely and satisfactorily managed (Isola & Gordon, 1991).
Negative effects of transplantation
The National Library of Medicine published a six-part study on quality of life after transplantation. The chapters, in order, are as follows:
1. Introduction, which introduces the study.
2. Solid Organ Transplantation in the United States and the Experiences of Organ Recipients and Their Caregivers, which explains what life is like after receiving a transplant.
3. Organ Transplantation and Disability in Adults, which explores quality of life for adults subsequent to transplantation.
4. Organ Transplantation and Disability in Children and Adolescents, which explores quality of life for minors subsequent to transplantation.
5. Treatments, Technologies, and Interventions Affecting Function After Transplantation, which explores ways to treat complications after transplantation.
6. Future Outlook for Organ Transplantation and Disability, which concludes the study.
In the fourth chapter, about pediatric transplantation, Nitika Gupta, Eyal Shemesh, George Mazariegos, Dorry Segev, and other researchers discuss outcomes in young transplant recipients. The number of pediatric intestine transplants is declining as treatments for intestinal disease improve, and pediatric liver transplants outnumber intestine transplants. Adolescent kidney recipients are more likely to be diagnosed with ADHD and other mental disorders such as depression and anxiety following transplantation, to live with their parents and experience unemployment as adults, and to have poor grades in school. One in three adolescents has experienced nonadherence (refusal to follow advice from doctors), and adolescent recipients were also more likely to commit suicide and abuse substances. Dr. Clifford Chin offers his opinion that, rather than being a cure, heart transplantation creates a chronic illness with a plethora of adverse side effects, such as developmental delay, limited ability to participate in everyday activities, and impaired cognitive function, which may suggest arrested development; hepatologist Saeed Mohammad later explains how a lack of proper oxygen levels may affect intellectual ability following the transplant.
Saeed Mohammad also discusses the correlation between developmental milestones and pediatric transplantation in general. He considers pediatric transplant recipients to be chronically ill, even though their transplants cured their original illnesses. He explains that children who have received transplants are often underestimated, but also points out that immunosuppressive therapy can affect brain development.
Nitika Gupta, a pediatric hepatologist, explains that an estimated 750,000 eighteen-year-olds live with long-lasting health issues, equivalent to just under a quarter of that age group. Patients who received transplants between the ages of eleven and seventeen had lower survival rates than those who received organ transplants when they were under five years old, especially if they had their transplant at sixteen or seventeen. She also points out that teenagers' brains are still forming and developing, which can have critical effects on patients.
The researchers refer to pediatric transplant recipients as chronically ill, as having special needs, and as affected by chronic health conditions, even though transplantation is a medical operation rather than a diagnosable condition.
It was also revealed that in young liver transplant recipients, nonadherence was more common in girls, patients living in single-parent homes, and patients nineteen and older.
Myths
There are myths that transplantation, regardless of the organ, leads to infertility, obsessive-compulsive disorder, and avoidance. In reality, female recipients can often become pregnant, and most patients do not experience avoidant behavior.
| Biology and health sciences | Medical procedures | null |
167184 | https://en.wikipedia.org/wiki/Rapid%20eye%20movement%20sleep | Rapid eye movement sleep | Rapid eye movement sleep (REM sleep or REMS) is a unique phase of sleep in mammals (including humans) and birds, characterized by random rapid movement of the eyes, accompanied by low muscle tone throughout the body, and the propensity of the sleeper to dream vividly. The core body and brain temperatures increase during REM sleep, while skin temperature decreases to its lowest values.
The REM phase is also known as paradoxical sleep (PS) and sometimes desynchronized sleep or dreamy sleep, because of physiological similarities to waking states, including rapid, low-voltage desynchronized brain waves. Electrical and chemical activity regulating this phase seems to originate in the brain stem and is characterized most notably by an abundance of the neurotransmitter acetylcholine, combined with a nearly complete absence of the monoamine neurotransmitters histamine, serotonin, and norepinephrine. Experiences of REM sleep are not transferred to permanent memory due to the absence of norepinephrine.
REM sleep is physiologically different from the other phases of sleep, which are collectively referred to as non-REM sleep (NREM sleep, NREMS, synchronized sleep). The absence of visual and auditory stimulation (sensory deprivation) during REM sleep can cause hallucinations. REM and non-REM sleep alternate within one sleep cycle, which lasts about 90 minutes in adult humans. As sleep cycles continue, they shift towards a higher proportion of REM sleep. The transition to REM sleep brings marked physical changes, beginning with electrical bursts called "ponto-geniculo-occipital waves" (PGO waves) originating in the brain stem. REM sleep typically occurs about four times during seven hours of sleep. Organisms in REM sleep suspend central homeostasis, allowing large fluctuations in respiration, thermoregulation and circulation which do not occur in any other modes of sleeping or waking. The body abruptly loses muscle tone, a state known as REM atonia.
In 1953, Professor Nathaniel Kleitman and his student Eugene Aserinsky defined rapid eye movement and linked it to dreams. REM sleep was further described by researchers, including William Dement and Michel Jouvet. Many experiments have involved awakening test subjects whenever they begin to enter the REM phase, thereby producing a state known as REM deprivation. Subjects allowed to sleep normally again usually experience a modest REM rebound. Techniques of neurosurgery, chemical injection, electroencephalography, positron emission tomography, and reports of dreamers upon waking have all been used to study this phase of sleep.
Physiology
Electrical activity in the brain
REM sleep is called "paradoxical" because of its similarities to wakefulness. Although the body is paralyzed, the brain acts as if it is somewhat awake, with cerebral neurons firing with the same overall intensity as in wakefulness. Electroencephalography during REM sleep reveals fast, low-amplitude, desynchronized neural oscillations (brainwaves) that resemble the pattern seen during wakefulness and differ from the slow δ (delta) wave pattern of NREM deep sleep. An important element of this contrast is the 3–10 Hz theta rhythm in the hippocampus and 40–60 Hz gamma waves in the cortex; patterns of EEG activity similar to these rhythms are also observed during wakefulness. The cortical and thalamic neurons in the waking and REM sleeping brain are more depolarized (fire more readily) than in the NREM deep sleeping brain. Human theta wave activity predominates during REM sleep in both the hippocampus and the cortex.
During REM sleep, electrical connectivity among different parts of the brain manifests differently than during wakefulness. Frontal and posterior areas are less coherent in most frequencies, a fact which has been cited in relation to the chaotic experience of dreaming. However, the posterior areas are more coherent with each other, as are the right and left hemispheres of the brain, especially during lucid dreams.
Brain energy use in REM sleep, as measured by oxygen and glucose metabolism, equals or exceeds energy use in waking. The rate in non-REM sleep is 11–40% lower.
Brain stem
Neural activity during REM sleep seems to originate in the brain stem, especially the pontine tegmentum and locus coeruleus. REM sleep is punctuated and immediately preceded by PGO (ponto-geniculo-occipital) waves, bursts of electrical activity originating in the brain stem. (PGO waves have long been measured directly in cats but not in humans because of constraints on experimentation; however, comparable effects have been observed in humans during "phasic" events which occur during REM sleep, and the existence of similar PGO waves is thus inferred.) These waves occur in clusters about every 6 seconds for 1–2 minutes during the transition from deep to paradoxical sleep. They exhibit their highest amplitude upon moving into the visual cortex and are a cause of the "rapid eye movements" in paradoxical sleep. Other muscles may also contract under the influence of these waves.
Forebrain
Research in the 1990s using positron emission tomography (PET) confirmed the role of the brain stem and suggested that, within the forebrain, the limbic and paralimbic systems showed more activation than other areas. The areas activated during REM sleep are approximately inverse to those activated during non-REM sleep and display greater activity than in quiet waking. The "anterior paralimbic REM activation area" (APRA) includes areas linked with emotion, memory, fear and sex, and may thus relate to the experience of dreaming during REMS. More recent PET research has indicated that the distribution of brain activity during REM sleep varies in correspondence with the type of activity seen in the prior period of wakefulness.
The superior frontal gyrus, medial frontal areas, intraparietal sulcus, and superior parietal cortex, areas involved in sophisticated mental activity, are as active in REM sleep as in wakefulness. The amygdala is also active during REM sleep and may participate in generating the PGO waves; experimental suppression of the amygdala results in less REM sleep. The amygdala may also regulate cardiac function in lieu of the less active insular cortex.
Chemicals in the brain
Compared to slow-wave sleep, both waking and paradoxical sleep involve higher use of the neurotransmitter acetylcholine, which may cause the faster brainwaves. The monoamine neurotransmitters norepinephrine, serotonin and histamine are completely unavailable. Injections of acetylcholinesterase inhibitor, which effectively increases available acetylcholine, have been found to induce paradoxical sleep in humans and other animals already in slow-wave sleep. Carbachol, which mimics the effect of acetylcholine on neurons, has a similar influence. In waking humans, the same injections produce paradoxical sleep only if the monoamine neurotransmitters have already been depleted.
Two other neurotransmitters, orexin and gamma-Aminobutyric acid (GABA), seem to promote wakefulness, diminish during deep sleep, and inhibit paradoxical sleep.
Unlike the abrupt transitions in electrical patterns, the chemical changes in the brain show continuous periodic oscillation.
Models of REM regulation
According to the activation-synthesis hypothesis proposed by Robert McCarley and Allan Hobson in 1975–1977, control over REM sleep involves pathways of "REM-on" and "REM-off" neurons in the brain stem. REM-on neurons are primarily cholinergic (i.e., involve acetylcholine); REM-off neurons activate serotonin and noradrenaline, which among other functions suppress the REM-on neurons. McCarley and Hobson suggested that the REM-on neurons actually stimulate REM-off neurons, thereby serving as the mechanism for the cycling between REM and non-REM sleep. They used Lotka–Volterra equations to describe this cyclical inverse relationship. Kazuya Sakai and Michel Jouvet advanced a similar model in 1981. Whereas acetylcholine manifests in the cortex equally during wakefulness and REM, it appears in higher concentrations in the brain stem during REM. The withdrawal of orexin and GABA may cause the absence of the other excitatory neurotransmitters; researchers in recent years increasingly include GABA regulation in their models.
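The cyclical inverse relationship in the McCarley–Hobson model can be illustrated with a short numerical sketch. The Python snippet below integrates classic Lotka–Volterra equations of the kind the model borrowed; the coefficients, starting values, and episode threshold are illustrative assumptions, not values from the original 1975–1977 papers.

    # Minimal sketch of the reciprocal-interaction idea: x stands for the
    # activity of cholinergic "REM-on" neurons, y for aminergic "REM-off"
    # neurons. REM-on activity excites REM-off cells, which in turn suppress
    # REM-on cells, producing a self-sustaining oscillation.
    def simulate(x=0.3, y=0.3, a=1.0, b=1.5, c=1.0, d=1.5, dt=0.001, steps=90000):
        trace = []
        for i in range(steps):
            dx = a * x - b * x * y   # REM-on: self-excitation, inhibited by REM-off
            dy = d * x * y - c * y   # REM-off: excited by REM-on, decaying otherwise
            x += dx * dt
            y += dy * dt
            trace.append((i * dt, x, y))
        return trace

    trace = simulate()
    # Count how often REM-on activity rises through an arbitrary threshold,
    # i.e. how many model "REM episodes" occur in 90 time units.
    episodes = sum(1 for (_, x1, _), (_, x2, _) in zip(trace, trace[1:]) if x1 < 1.0 <= x2)
    print("model REM-on episodes:", episodes)

In the real model the coupling is between neuronal populations rather than prey and predators, but the qualitative behavior the equations produce, alternating bursts of REM-on and REM-off activity, is the same.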
Eye movements
Most of the eye movements in "rapid eye movement" sleep are in fact less rapid than those normally exhibited by waking humans. They are also shorter in duration and more likely to loop back to their starting point. About seven such loops take place over one minute of REM sleep. In slow-wave sleep, the eyes can drift apart; however, the eyes of the paradoxical sleeper move in tandem. These eye movements follow the ponto-geniculo-occipital waves originating in the brain stem. The eye movements themselves may relate to the sense of vision experienced in the dream, but a direct relationship remains to be clearly established. Congenitally blind people, who do not typically have visual imagery in their dreams, still move their eyes in REM sleep. An alternative explanation suggests that the functional purpose of REM sleep is for procedural memory processing, and the rapid eye movement is only a side effect of the brain processing the eye-related procedural memory.
Circulation, respiration, and thermoregulation
Generally speaking, the body suspends homeostasis during paradoxical sleep. Heart rate, cardiac pressure, cardiac output, arterial pressure, and breathing rate quickly become irregular when the body moves into REM sleep. In general, respiratory reflexes such as response to hypoxia diminish. Overall, the brain exerts less control over breathing; electrical stimulation of respiration-linked brain areas does not influence the lungs, as it does during non-REM sleep and in waking.
Erections of the penis (nocturnal penile tumescence or NPT) normally accompany REM sleep in rats and humans. If a male has erectile dysfunction (ED) while awake, but has NPT episodes during REM, it would suggest that the ED is from a psychological rather than a physiological cause. In females, erection of the clitoris (nocturnal clitoral tumescence or NCT) causes enlargement, with accompanying vaginal blood flow and transudation (i.e. lubrication). During a normal night of sleep, the penis and clitoris may be erect for a total time of from one hour to as long as three and a half hours during REM.
Body temperature is not well regulated during REM sleep, and thus organisms become more sensitive to temperatures outside their thermoneutral zone. Cats and other small furry mammals will shiver and breathe faster to regulate temperature during NREMS—but not during REMS. With the loss of muscle tone, animals lose the ability to regulate temperature through body movement. (However, even cats with pontine lesions preventing muscle atonia during REM did not regulate their temperature by shivering.) Neurons that typically activate in response to cold temperatures—triggers for neural thermoregulation—simply do not fire during REM sleep, as they do in NREM sleep and waking.
Consequently, hot or cold environmental temperatures can reduce the proportion of REM sleep, as well as amount of total sleep. In other words, if at the end of a phase of deep sleep, the organism's thermal indicators fall outside of a certain range, it will not enter paradoxical sleep lest deregulation allow temperature to drift further from the desirable value. This mechanism can be 'fooled' by artificially warming the brain.
Muscle
REM atonia, an almost complete paralysis of the body, is accomplished through the inhibition of motor neurons. When the body shifts into REM sleep, motor neurons throughout the body undergo a process called hyperpolarization: their already-negative membrane potential decreases by another 2–10 millivolts, thereby raising the threshold that a stimulus must overcome to excite them. Muscle inhibition may result from unavailability of monoamine neurotransmitters (restraining the abundance of acetylcholine in the brainstem) and perhaps from mechanisms used in waking muscle inhibition. The medulla oblongata, located between the pons and the spinal cord, seems to have the capacity for organism-wide muscle inhibition. Some localized twitching and reflexes can still occur. Pupils contract.
Lack of REM atonia causes REM behavior disorder, where those affected physically act out their dreams, or conversely "dream out their acts", under an alternative theory on the relationship between muscle impulses during REM and associated mental imagery (which would also apply to people without the condition, except that commands to their muscles are suppressed). This is different from conventional sleepwalking, which takes place during slow-wave sleep, not REM. Narcolepsy, by contrast, seems to involve excessive and unwanted REM atonia: cataplexy and excessive daytime sleepiness while awake, hypnagogic hallucinations before entering slow-wave sleep, or sleep paralysis while waking. Other psychiatric disorders including depression have been linked to disproportionate REM sleep. Patients with suspected sleep disorders are typically evaluated by polysomnogram.
Lesions of the pons to prevent atonia have induced functional "REM behavior disorder" in animals.
Psychology
Dreaming
Rapid eye movement sleep (REM) has since its discovery been closely associated with dreaming. Waking up sleepers during a REM phase is a common experimental method for obtaining dream reports; 80% of people can give some kind of dream report under these circumstances. Sleepers awakened from REM tend to give longer, more narrative descriptions of the dreams they were experiencing, and to estimate the duration of their dreams as longer. Lucid dreams are reported far more often in REM sleep. (In fact these could be considered a hybrid state combining essential elements of REM sleep and waking consciousness.) The mental events which occur during REM most commonly have dream hallmarks including narrative structure, convincingness (e.g., experiential resemblance to waking life), and incorporation of instinctual themes. Sometimes, they include elements of the dreamer's recent experience taken directly from episodic memory. By one estimate, 80% of dreams occur during REM.
Hobson and McCarley proposed that the PGO waves characteristic of "phasic" REM might supply the visual cortex and forebrain with electrical excitement which amplifies the hallucinatory aspects of dreaming. However, people woken up during sleep do not report significantly more bizarre dreams during phasic REMS, compared to tonic REMS. Another possible relationship between the two phenomena could be that the higher threshold for sensory interruption during REM sleep allows the brain to travel further along unrealistic and peculiar trains of thought.
Some dreaming can take place during non-REM sleep. "Light sleepers" can experience dreaming during stage 2 non-REM sleep, whereas "deep sleepers", upon awakening in the same stage, are more likely to report "thinking" but not "dreaming". Certain scientific efforts to assess the uniquely bizarre nature of dreams experienced while asleep were forced to conclude that waking thought could be just as bizarre, especially in conditions of sensory deprivation. Because of non-REM dreaming, some sleep researchers have strenuously contested the importance of connecting dreaming to the REM sleep phase. The prospect that well-known neurological aspects of REM do not themselves cause dreaming suggests the need to re-examine the neurobiology of dreaming per se. Some researchers (Dement, Hobson, Jouvet, for example) tend to resist the idea of disconnecting dreaming from REM sleep.
Effects of SSRIs
Previous research has shown that selective serotonin reuptake inhibitors (SSRIs) have an important effect on REM sleep neurobiology and dreaming. A study at Harvard Medical School in 2000 tested the effects of paroxetine and fluvoxamine on healthy young adult males and females for 31 days: a drug-free baseline week, 19 days on either paroxetine or fluvoxamine with morning and evening doses, and 5 days of absolute discontinuation. Results showed that SSRI treatment decreased the average dream recall frequency in comparison to baseline measurements, as a result of serotonergic REM suppression. Fluvoxamine increased the length of dream reports, the bizarreness of dreams, and the intensity of REM sleep. These effects were greatest during acute discontinuation compared to treatment and baseline days. However, the subjective intensity of dreaming increased and the propensity to enter REM sleep decreased during SSRI treatment compared to baseline and discontinuation days.
Creativity
After waking from REM sleep, the mind seems "hyperassociative"—more receptive to semantic priming effects. People awakened from REM have performed better on tasks like anagrams and creative problem solving.
Sleep aids the process by which creativity forms associative elements into new combinations that are useful or meet some requirement. This occurs in REM sleep rather than in NREM sleep. Rather than being due to memory processes, this has been attributed to changes during REM sleep in cholinergic and noradrenergic neuromodulation. High levels of acetylcholine in the hippocampus suppress feedback from hippocampus to the neocortex, while lower levels of acetylcholine and norepinephrine in the neocortex encourage the uncontrolled spread of associational activity within neocortical areas. This is in contrast to waking consciousness, where higher levels of norepinephrine and acetylcholine inhibit recurrent connections in the neocortex. REM sleep through this process adds creativity by allowing "neocortical structures to reorganise associative hierarchies, in which information from the hippocampus would be reinterpreted in relation to previous semantic representations or nodes."
Timing
In the ultradian sleep cycle, an organism alternates between deep sleep (slow, large, synchronized brain waves) and paradoxical sleep (faster, desynchronized waves). Sleep happens in the context of the larger circadian rhythm, which influences sleepiness and physiological factors based on timekeepers within the body. Sleep can be distributed throughout the day or clustered during one part of the rhythm: in nocturnal animals, during the day, and in diurnal animals, at night. The organism returns to homeostatic regulation almost immediately after REM sleep ends.
During a night of sleep, humans usually experience about four or five periods of REM sleep; they are shorter (~15 min) at the beginning of the night and longer (~25 min) toward the end. Many animals and some people tend to wake, or experience a period of very light sleep, for a short time immediately after a bout of REM. The relative amount of REM sleep varies considerably with age. A newborn baby spends more than 80% of total sleep time in REM.
REM sleep typically occupies 20–25% of total sleep in adult humans: about 90–120 minutes of a night's sleep. The first REM episode occurs about 70 minutes after falling asleep. Cycles of about 90 minutes each follow, with each cycle including a larger proportion of REM sleep. (The increased REM sleep later in the night is connected with the circadian rhythm and occurs even in people who did not sleep in the first part of the night.)
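These timing figures can be combined into a back-of-the-envelope hypnogram. The Python sketch below assumes, purely for illustration, that REM episodes lengthen linearly from about 15 to about 25 minutes across five 90-minute cycles, with the first episode about 70 minutes after sleep onset; the linear lengthening is a simplifying assumption, not data from a specific study.

    # Rough REM timeline from the figures quoted above.
    CYCLE_MIN = 90        # approximate length of one sleep cycle
    FIRST_REM_AT = 70     # minutes from sleep onset to the first REM episode
    N_CYCLES = 5

    total_rem = 0.0
    for n in range(N_CYCLES):
        start = FIRST_REM_AT + n * CYCLE_MIN
        length = 15 + n * (25 - 15) / (N_CYCLES - 1)  # 15 min early -> 25 min late
        total_rem += length
        print(f"REM episode {n + 1}: starts at {start} min, lasts {length:.1f} min")

    total_sleep = FIRST_REM_AT + (N_CYCLES - 1) * CYCLE_MIN + 25
    print(f"total REM: {total_rem:.0f} min, about {100 * total_rem / total_sleep:.0f}% "
          f"of {total_sleep} min asleep")

With these assumptions the script yields roughly 100 minutes of REM in about 7.5 hours of sleep, around 22%, consistent with the 20–25% range cited above.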
In the weeks after a human baby is born, as its nervous system matures, neural patterns in sleep begin to show a rhythm of REM and non-REM sleep. (In faster-developing mammals, this process occurs in utero.) Infants spend more time in REM sleep than adults. The proportion of REM sleep then decreases significantly in childhood. Older people tend to sleep less overall, but sleep in REM for about the same absolute time (and therefore spend a greater proportion of sleep in REM).
Rapid eye movement sleep can be subclassified into tonic and phasic modes. Tonic REM is characterized by theta rhythms in the brain; phasic REM is characterized by PGO waves and actual "rapid" eye movements. Processing of external stimuli is heavily inhibited during phasic REM, and recent evidence suggests that sleepers are more difficult to arouse from phasic REM than from slow-wave sleep.
Deprivation effects
Selective REMS deprivation causes a significant increase in the number of attempts to go into REM stage while asleep. On recovery nights, an individual will usually move to stage 3 and REM sleep more quickly and experience a REM rebound, which refers to an increase in the time spent in REM stage over normal levels. These findings are consistent with the idea that REM sleep is biologically necessary. However, the "rebound" REM sleep usually does not last fully as long as the estimated length of the missed REM periods.
After the deprivation is complete, mild psychological disturbances, such as anxiety, irritability, hallucinations, and difficulty concentrating, may develop, and appetite may increase. There are also positive consequences of REM deprivation: some symptoms of depression are found to be suppressed by REM deprivation, although aggression may increase and eating behavior may become disrupted. Higher norepinephrine levels are a possible cause of these results. Whether and how long-term REM deprivation has psychological effects remains a matter of controversy. Several reports have indicated that REM deprivation increases aggression and sexual behavior in laboratory test animals. Rats deprived of paradoxical sleep die in 4–6 weeks (twice the time before death in the case of total sleep deprivation). Mean body temperature falls continually during this period.
It has been suggested that acute REM sleep deprivation can improve certain types of depression—when depression appears to be related to an imbalance of certain neurotransmitters. Although sleep deprivation in general is unpleasant for most of the population, it has repeatedly been shown to alleviate depression, albeit temporarily. More than half the individuals who experience this relief report it to be rendered ineffective after sleeping the following night. Thus, researchers have devised methods such as altering the sleep schedule for a span of days following a REM deprivation period and combining sleep-schedule alterations with pharmacotherapy to prolong this effect. Antidepressants (including selective serotonin reuptake inhibitors, tricyclics, and monoamine oxidase inhibitors) and stimulants (such as amphetamine, methylphenidate and cocaine) interfere with REM sleep by stimulating the monoamine neurotransmitters which must be suppressed for REM sleep to occur. Administered at therapeutic doses, these drugs may stop REM sleep entirely for weeks or months. Withdrawal causes a REM rebound. Sleep deprivation stimulates hippocampal neurogenesis much as antidepressants do, but whether this effect is driven by REM sleep in particular is unknown.
In other animals
Although it manifests differently in different animals, REM sleep or something like it occurs in all land mammals—as well as in birds. The primary criteria used to identify REM are the change in electrical activity, measured by EEG, and loss of muscle tone, interspersed with bouts of twitching in phasic REM.
The amount of REM sleep and cycling varies among animals; predators experience more REM sleep than prey. Larger animals also tend to stay in REM for longer, possibly because higher thermal inertia of their brains and bodies allows them to tolerate longer suspension of thermoregulation. The period (full cycle of REM and non-REM) lasts for about 90 minutes in humans, 22 minutes in cats, and 12 minutes in rats. In utero, mammals spend more than half (50–80%) of a 24-hour day in REM sleep.
Sleeping reptiles do not seem to have PGO waves or the localized brain activation seen in mammalian REM. However, they do exhibit sleep cycles with phases of REM-like electrical activity measurable by EEG. A recent study found periodic eye movements in the central bearded dragon of Australia, leading its authors to speculate that the common ancestor of amniotes may therefore have manifested some precursor to REMS.
Observations of jumping spiders in their nocturnal resting position also suggest a REM sleep-like state characterized by bouts of twitching and retinal movements and hints of muscle atonia (legs curling up as a result of pressure loss caused by muscle atonia in the prosoma).
Sleep deprivation experiments on non-human animals can be set up differently than those on humans. The "flower pot" method involves placing a laboratory animal above water on a platform so small that it falls off upon losing muscle tone. The naturally rude awakening which results may elicit changes in the organism which necessarily exceed the simple absence of a sleep phase. This method also stops working after about 3 days as the subjects (typically rats) lose their will to avoid the water. Another method involves computer monitoring of brain waves, complete with automatic mechanized shaking of the cage when the test animal drifts into REM sleep.
Possible functions
Some researchers argue that the perpetuation of a complex brain process such as REM sleep indicates that it serves an important function for the survival of mammalian and avian species. It fulfills physiological needs vital for survival, to the extent that prolonged REM sleep deprivation leads to death in experimental animals. In both humans and experimental animals, REM sleep loss leads to several behavioral and physiological abnormalities. Loss of REM sleep has been noticed during various natural and experimental infections. Survivability of the experimental animals decreases when REM sleep is totally attenuated during infection; this leads to the possibility that the quality and quantity of REM sleep are generally essential for normal body physiology. Further, the existence of a "REM rebound" effect suggests the possibility of a biological need for REM sleep.
While the precise function of REM sleep is not well understood, several theories have been proposed.
Memory
Sleep in general aids memory. REM sleep may favor the preservation of certain types of memories: specifically, procedural memory, spatial memory, and emotional memory. In rats, REM sleep increases following intensive learning, especially several hours after, and sometimes for multiple nights. Experimental REM sleep deprivation has sometimes inhibited memory consolidation, especially regarding complex processes (e.g., how to escape from an elaborate maze). In humans, the best evidence for REM's improvement of memory pertains to learning of procedures—new ways of moving the body (such as trampoline jumping), and new techniques of problem solving. REM deprivation seemed to impair declarative (i.e., factual) memory only in more complex cases, such as memories of longer stories. REM sleep apparently counteracts attempts to suppress certain thoughts.
According to the dual-process hypothesis of sleep and memory, the two major phases of sleep correspond to different types of memory. "Night half" studies have tested this hypothesis with memory tasks either begun before sleep and assessed in the middle of the night, or begun in the middle of the night and assessed in the morning. Slow-wave sleep, part of non-REM sleep, appears to be important for declarative memory. Artificial enhancement of non-REM sleep improves the next-day recall of memorized pairs of words. Tucker et al. demonstrated that a daytime nap containing solely non-REM sleep enhances declarative memory—but not procedural memory. According to the sequential hypothesis, the two types of sleep work together to consolidate memory.
Sleep researcher Jerome Siegel has observed that extreme REM deprivation does not significantly interfere with memory. One case study of an individual who had little or no REM sleep due to a shrapnel injury to the brainstem did not find the individual's memory to be impaired. Antidepressants, which suppress REM sleep, show no evidence of impairing memory and may improve it.
Graeme Mitchison and Francis Crick proposed in 1983 that by virtue of its inherent spontaneous activity, the function of REM sleep "is to remove certain undesirable modes of interaction in networks of cells in the cerebral cortex"—a process they characterize as "unlearning". As a result, those memories which are relevant (whose underlying neuronal substrate is strong enough to withstand such spontaneous, chaotic activation) are further strengthened, whilst weaker, transient, "noise" memory traces disintegrate. Memory consolidation during paradoxical sleep is specifically correlated with the periods of rapid eye movement, which do not occur continuously. One explanation for this correlation is that the PGO electrical waves, which precede the eye movements, also influence memory. REM sleep could provide a unique opportunity for "unlearning" to occur in the basic neural networks involved in homeostasis, which are protected from this "synaptic downscaling" effect during deep sleep.
Neural ontogeny
REM sleep prevails most after birth, and diminishes with age. According to the "ontogenetic hypothesis", REM (also known in neonates as active sleep) aids the developing brain by providing the neural stimulation that newborns need to form mature neural connections. Sleep deprivation studies have shown that deprivation early in life can result in behavioral problems, permanent sleep disruption, and decreased brain mass. The strongest evidence for the ontogenetic hypothesis comes from experiments on REM deprivation, and from the development of the visual system in the lateral geniculate nucleus and primary visual cortex.
Defensive immobilization
Ioannis Tsoukalas of Stockholm University has hypothesized that REM sleep is an evolutionary transformation of a well-known defensive mechanism, the tonic immobility reflex. This reflex, also known as animal hypnosis or death feigning, functions as the last line of defense against an attacking predator and consists of the total immobilization of the animal so that it appears dead. Tsoukalas argues that the neurophysiology and phenomenology of this reaction shows striking similarities to REM sleep; for example, both reactions exhibit brainstem control, cholinergic neurotransmission, paralysis, hippocampal theta rhythm, and thermoregulatory changes.
Shift of gaze
According to "scanning hypothesis", the directional properties of REM sleep are related to a shift of gaze in dream imagery. Against this hypothesis is that such eye movements occur in those born blind and in fetuses in spite of lack of vision. Also, binocular REMs are non-conjugated (i.e., the two eyes do not point in the same direction at a time) and so lack a fixation point. In support of this theory, research finds that in goal-oriented dreams, eye gaze is directed towards the dream action, determined from correlations in the eye and body movements of REM sleep behavior disorder patients who enact their dreams.
Oxygen supply to cornea
Dr. David M. Maurice, an eye specialist and former adjunct professor at Columbia University, proposed that REM sleep was associated with oxygen supply to the cornea, and that the aqueous humor, the liquid between cornea and iris, is stagnant if not stirred. Among the supporting evidence, he calculated that if the aqueous humor were stagnant, oxygen from the iris would have to reach the cornea by diffusion through the aqueous humor, which would not be sufficient. According to the theory, when the organism is awake, eye movement (or a cool environmental temperature) enables the aqueous humor to circulate. When the organism is sleeping, REM provides the much-needed stirring of the aqueous humor. This theory is consistent with the observation that fetuses, as well as eye-sealed newborn animals, spend much time in REM sleep, and that during a normal night of sleep a person's REM sleep episodes become progressively longer deeper into the night. However, owls experience REM sleep even though their eyes are nearly immobile, and they do not move their heads more than in non-REM sleep.
Other theories
Another theory suggests that monoamine shutdown is required so that the monoamine receptors in the brain can recover to regain full sensitivity.
The sentinel hypothesis of REM sleep was put forward by Frederick Snyder in 1966. It is based upon the observation that REM sleep in several mammals (the rat, the hedgehog, the rabbit, and the rhesus monkey) is followed by a brief awakening. This does not occur for either cats or humans, although humans are more likely to wake from REM sleep than from NREM sleep. Snyder hypothesized that REM sleep activates an animal periodically to scan the environment for possible predators. This hypothesis does not explain the muscle paralysis of REM sleep; however, a logical analysis might suggest that the muscle paralysis exists to prevent the animal from fully waking up unnecessarily, allowing it to return easily to deeper sleep.
Jim Horne, a sleep researcher at Loughborough University, has suggested that REM in modern humans compensates for the reduced need for wakeful food foraging.
Other theories are that REM sleep warms the brain, stimulates and stabilizes the neural circuits that have not been activated during waking, or creates internal stimulation to aid development of the CNS; while some argue that REM lacks any purpose, and simply results from random brain activation.
Furthermore, eye movements are also theorized to play a role in certain psychotherapies such as eye movement desensitization and reprocessing (EMDR).
| Biology and health sciences | Ethology | Biology |
167248 | https://en.wikipedia.org/wiki/Recumbent%20bicycle | Recumbent bicycle | A recumbent bicycle is a bicycle that places the rider in a laid-back reclining position. Recumbents are available in a wide range of configurations, including: long to short wheelbase; large, small, or a mix of wheel sizes; overseat, underseat, or no-hands steering; and rear wheel or front wheel drive. A variant with three wheels is a recumbent tricycle.
Recumbents can be much faster than upright bicycles, but they were banned by the Union Cycliste Internationale (UCI) in 1934. Recumbent races and records are now overseen by the World Human Powered Vehicle Association (WHPVA) and the International Human Powered Vehicle Association (IHPVA).
Some recumbent riders may choose this type of design for ergonomic reasons: the rider's weight is distributed comfortably over a larger area, supported by back and buttocks. On a traditional upright bicycle, the body weight rests entirely on a small portion of the sitting bones, the feet, and the hands. Others may choose a recumbent because some models also have an aerodynamic advantage; the reclined, legs-forward position of the rider's body presents a smaller frontal profile.
Description
Recumbents can be categorized by their wheelbase, wheel sizes, steering system, faired or unfaired, and front-wheel or rear-wheel drive.
Wheelbase
Long-wheelbase (LWB) models have the pedals located between the front and rear wheels; short-wheelbase (SWB) models have the pedals in front of the front wheel; compact long-wheelbase (CLWB) models have the pedals either very close to the front wheel or above it. Within these categories are variations, intermediate types, and even convertible designs (LWB to CLWB); there is no "standard" recumbent.
Wheel sizes
The rear wheel of a recumbent is usually behind the rider and may be any size, ranging up to the 700c (or 27″ on some older models, as on upright road bikes of that time) of an upright racing cycle. The front wheel is commonly smaller than the rear, although a number of recumbents feature dual 26-inch (ISO 559), 650c (ISO 571), 700c (ISO 622), or even 29 × 4″ oversize all-terrain tires. Given the higher rolling resistance of a smaller front wheel, loss of steering and control is somewhat more likely when attempting sharp or quick changes of direction while crossing patches of loose dirt, sand, or pebbles. Larger-diameter wheels generally have lower rolling resistance but a higher profile, leading to higher air resistance. High-racer aficionados also claim that they are more stable, and although it is easier to balance a bicycle with a higher center of mass, the wide variety of recumbent designs makes such generalizations unreliable. Another advantage of both wheels being the same size is that the bike requires only one size of inner tube.
One common arrangement is an ISO 559 (26-inch) rear wheel and an ISO 406 or ISO 451 (20-inch) front wheel. The small front wheel and large rear wheel combination is used to keep the pedals and front wheel clear of each other, avoiding the problem on a short-wheelbase recumbent called "heel strike" (where the rider's heels catch the wheel in tight turns). A pivoting-boom front-wheel drive (PBFWD, also known as moving bottom bracket) configuration also overcomes heel strike, since the pedals and front wheel turn together; PBFWD bikes may therefore use two wheels of equal, larger size.
Steering
Steering for recumbent bikes can be generally categorized as
over-seat (OSS) or above seat steering (ASS);
under-seat (USS); or
center steering or pivot steering.
OSS/ASS is generally direct—the steerer acts on the front fork like a standard bicycle handlebar—but the bars themselves may extend well behind the front wheel (more like a tiller); alternatively the bars might have long rearward extensions (sometimes known as Superman or Kingcycle bars). Chopper-style bars are sometimes seen on LWB bikes.
USS is usually indirect—the bars link to the headset through a system of rods or cables and possibly a bell crank. Most tadpole trikes are USS.
Center steered or pivot steered recumbents, such as Flevobikes and Pythons, may have no handlebars at all.
In addition, some trikes such as the Sidewinder have used rear-wheel steering instead of the more common front-wheel steering. Rear-steered trikes can provide good maneuverability at low speeds, but have been reported to be potentially unstable at higher speeds.
Drive
Most recumbents have the cranks attached to a boom fixed to the frame, with a long drive chain for rear wheel drive. However, due to the proximity of the crank to the front wheel, front wheel drive (FWD) can be an option, and it allows for a much shorter chain. One style requires the chain to twist slightly to allow for steering.
Another style, pivoting-boom FWD (PBFWD), has the crankset connected to and moving with the front fork. In addition to the much shorter chain, the advantages of PBFWD are the use of a larger front wheel for lower rolling resistance without heel strike (the rider can pedal while turning) and the use of the upper body when sprinting or climbing. The main disadvantage of all FWD designs is wheelspin when climbing steep hills covered with loose gravel, wet grass, etc. This mainly affects off-road riders, and can be ameliorated by shifting the weight forward, applying steady pressure to the pedals, and using tires with more aggressive tread. Another disadvantage of PBFWD for some riders is a slightly longer learning curve due to adaptation to the pedal-steer effect (forces applied to the pedal can actually steer the bike). Beginner riders tend to swerve along a serpentine path until they adopt a balanced pedaling motion. After adaptation, a PBFWD recumbent can be ridden in as straight a line as any other bike, and can even be steered accurately with the feet only. Cruzbike is the only PBFWD recumbent currently in production, and features a traditional steering axis similar to most standard and recumbent bikes. Flevobike formerly produced a center-steered FWD bike similar to the Python Lowracer.
Yet another drive-train variation is on rowing cycles where the rider rows using arms and legs.
Fully suspended bikes
Modern recumbent bikes are increasingly being fitted with front and rear suspension systems for increased comfort and traction on rough surfaces. Coil, elastomer, and air-sprung suspension systems have all been used on recumbent bikes, with oil or air-damping in the forks and rear shock absorbers. The maturation of fully suspended conventional mountain bikes has aided the development of these designs, which often use many of the same parts, suitably modified for recumbent use.
Fairings
Some riders fit their bikes with aerodynamic devices called fairings. These can reduce aerodynamic drag and help keep the rider warmer and drier in cold and wet weather. Fairings are also available for upright bikes, but are much less common. Fully enclosed bikes and trikes are considered velomobiles.
Seats
The seats themselves are made either of mesh stretched tightly over a frame or of foam cushions over hard shells, which might be moulded or assembled from sheet materials. Hard-shell seats predominate in Europe, mesh seats in the USA.
Variations
Mountain bike recumbents
With the right equipment and design, recumbent bikes can be used for riding unpaved roads and offroad, just as with conventional mountain bikes. Because of their longer wheelbase and the manner in which the rider is confined to the seat, recumbents are not as easy to use on tight, curving unpaved singletrack. Large-diameter wheels, mountain gearing and off-road specific design have been used since 1999. Crank-forward designs that facilitate climbing out of the saddle, such as the RANS Dynamik, also can be used off-road.
Lowracers
Lowracers are a type of recumbent more common in Europe among racing enthusiasts. These typically have two 20″ wheels, or a 26″ wheel at the rear and a 20″ wheel at the front. The seat is positioned between the wheels rather than above them. The extreme reclined position, and the fact that the rider sits in line with the wheels rather than atop them, makes this type the most aerodynamic of unfaired recumbents.
Highracers
Highracers are distinguished by using two large wheels (usually ISO 559, 650c or 700c). This necessitates a higher bottom bracket than on a lowracer, so that the rider's legs are above the front wheel, and this in turn requires a higher seat. The seating position may otherwise be identical to that on a lowracer, allowing similar aerodynamics; "racer" in the name implies that this will often be the case, since these bikes strive for speed.
Highracers are generally more maneuverable than lowracers, since their higher center of mass makes them easier to balance at lower speeds. Given the same seating position, they may be faster than lowracers, since it is widely believed that rolling resistance is inversely proportional to wheel diameter. However, lowracer proponents reply that their design is faster due to aerodynamics, the reasoning being that the rider's body is in line with the wheels, reducing drag.
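The rolling-resistance argument can be made concrete with a small worked example. In the Python sketch below, the coefficient of rolling resistance (Crr) for the 26-inch wheel and the load carried by the front wheel are hypothetical round numbers chosen only to show the shape of the "inversely proportional to diameter" rule of thumb quoted above, not measured values.

    # Compare rolling drag for a 20" versus a 26" front wheel under the
    # rule of thumb that Crr scales inversely with wheel diameter.
    BASE_CRR_26 = 0.005    # assumed Crr for an ISO 559 (26") wheel
    FRONT_LOAD_N = 400     # assumed load on the front wheel, in newtons

    for diameter_in in (20, 26):
        crr = BASE_CRR_26 * 26 / diameter_in  # inverse scaling with diameter
        drag = crr * FRONT_LOAD_N             # F = Crr * N
        print(f'{diameter_in}" wheel: Crr = {crr:.4f}, rolling drag = {drag:.1f} N')

Under these assumptions the 20-inch wheel costs about 0.6 N of extra drag, so the lowracer-versus-highracer debate ultimately hinges on whether such rolling losses outweigh the aerodynamic gains of the lower seating position.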
Hip and elbow injuries are more common on highracers than on lowracers due to the greater height from which the rider can fall. However, the injuries are very rare and seldom serious.
Semi-recumbent and crank forward bicycles
Bicycles that use positions intermediate between a conventional upright and a recumbent are called semi-recumbent or crank forward designs. These generally are intended for casual use and have comfort and ease of use as primary objectives, with aerodynamics sacrificed for this purpose.
Tandem recumbents
Just as with upright bicycles, recumbents are built and marketed with more than one seat, thus combining the advantages of recumbents with those of tandem bicycles. In order to keep the wheelbase from being any longer than absolutely necessary, tandem recumbents often place the stoker's crankset under the captain's seat. A common configuration for two riders in the recumbent position is the sociable tandem, wherein the two riders ride side by side. There are also hybrid recumbent designs such as the Hase Pino Allround that utilize a recumbent stoker in the front, and an upright pilot in the rear.
Recumbent tricycles
Recumbent tricycles (trikes) are closely related to recumbent bicycles, but have three wheels instead of two. The three wheels can be arranged in two ways: delta trikes have one front wheel and two rear wheels, while tadpole trikes have two front wheels and one rear wheel.
Handcycles
In order to accommodate paraplegics and other individuals with little or no use of their legs, many manufacturers have designed and released hand-powered recumbent trikes, or handcycles. Handcycles are a regular sight at human powered vehicle (HPV) meetings and are beginning to be seen on the streets. They usually follow a delta design with front wheels driven by standard dérailleur gearing powered by hand cranks. Brake levers are usually mounted on the hand holds, which are usually set with no offset rather than the 180° of pedal cranks. The entire crank assembly and the front wheel turn together, allowing the rider to steer and crank simultaneously.
Although arms are weaker than legs, many hand cyclists are able to make use of the power of the whole upper body. A good hand cyclist can still achieve a respectable pace in competitions. Handcycles have also been used for touring, though few designers incorporate mudguards or luggage racks. Also, the gear ratios of standard handcycles tend to be less useful for long steep climbs.
Hand-and-foot recumbent tricycles
Recumbent cycles offer the possibility of combined hand and foot power inputs, and thus the potential for a full-body workout, as well as the option for persons with a weak or missing leg (or legs) to power a cycle. In one recumbent tricycle design, the user steers the two front wheels by shifting their weight and moves forward by driving the rear wheel. There are also hybrids between a handcycle, a recumbent bike, and a tricycle; these cycles enable pedaling with the legs despite a spinal cord injury.
Recumbent quadracycles
Recumbent four-wheel cycles have the same general advantages as tricycles. For quadracycles with only one seat, the stability of the fourth wheel offers only a marginal advantage over a tadpole recumbent tricycle, while more wheels introduce more weight and complexity. The fourth wheel is of most benefit to the single-seat rider when going off-road. When two, and sometimes four, riders want to ride together in a sociable configuration, the four-wheel recumbent cycle is a viable option.
Homebuilts
As with upright bikes, there is a subculture of recumbent builders who design and build home-built recumbents. Often these are assembled of parts from other bikes, particularly mountain bikes. The frame designs may be as simple as a long steel tube bent into the appropriate shape, or as elaborate as hand-built carbon fiber frames. For many builders, the engineering and construction of the bikes is as much of a challenge as riding them.
Folding
Several manufacturers offer folding recumbents to facilitate packing and travelling.
Couplers
It is possible to add couplers either during manufacturing or as a retrofit so that the frame can be disassembled into smaller pieces to facilitate packing and travel.
Stationary recumbents
As well as road-going recumbent bicycles with wheels, stationary versions also exist. These are often found in gyms but are also available for home use. Like a regular stationary exercise bike, these stay in one place and the user pedals against some kind of resistance mechanism such as a fan or alternator but in a recumbent position. These have the same comfort advantages as road-going recumbents. Stationary recumbents almost always have a fairly upright seat and the pedal crank is lower than the level of the seat. The seat is normally adjustable and is adjusted by sliding it along a rail.
Compared to uprights
There are striking differences between recumbents and upright bikes. Since recumbents vary widely, particular advantages and disadvantages may apply to different types to different degrees or not at all. (For example, balance is not a concern with tricycles.)
History
Recumbent bicycle designs date back to the middle of the 19th century. Several designs were patented around 1900, but these early designs were unsuccessful.
Early recumbents
Recumbent designs of both prone and supine varieties can be traced back to the earliest days of the bicycle. Before the shape of the bicycle settled down following Starley's safety bicycle, there was a good deal of experimentation with various arrangements, including designs which might be considered recumbent. Although these dated back to the 1860s, the first recorded illustration of a recumbent treated as a separate class of bicycle is considered to be in the magazine Fliegende Blätter of 10 September 1893. This year also saw what is considered the first genuine recumbent, the Fauteuil Vélociped. Patent applications for a number of recumbent designs were filed in the late years of the 19th century, and there were discussions in the cycling press of the relative merits of different layouts. The Challand designs of 1897 and the American Brown of 1901 are both recognisable as forerunners of today's recumbents.
The Mochet 'Vélo-Velocar' and 'Vélorizontal'
A four-wheeled, two-seater, pedal-propelled car called the 'Velocar' was built in the 1930s by French inventor and light car builder Charles Mochet. Velocars sold well to French buyers who could not afford a motor car, possibly because of the poor economy during the Great Depression. The four-wheeled Velocars were fast but did not corner well at high speed. Mochet then experimented with a three-wheel design and finally a mould-breaking two-wheel design based on Vélocar technology.
The early models of Mochet's 'La bicyclette de l'Avenir' (The bicycle of the Future), the 'Vélo-Vélocar', or 'V-V' as the factory referred to them, used a 40mm steel-tube, single-beam frame and 450 x 55 wheels with handlebars over the rider and steering torque transmitted by bevel gears. Various types of Mochet-designed derailleur gears were fitted, with a single gear for the track models. Gears were mid-mounted using primary and secondary chains. The back-rest was adjustable on more sporting models.
To demonstrate the speed of his recumbent bicycle, Mochet had the design ratified by the UCI and UVF and enlisted cyclist Francis Faure, a Category 2 racer, to ride it in races. Faure was highly successful, defeating many of Europe's top cyclists both on the track and in road races, and setting new world records at short distances. Another cyclist, Paul Morand, won the Paris-Limoges race in 1933 on one of Mochet's recumbents.
On 7 July 1933, at a Paris velodrome, Faure rode a modified Vélo-Velocar for one hour, beating an almost 20-year-old hour record held by Oscar Egg and attracting a great deal of attention.
When the Union Cycliste Internationale (UCI) met in February 1934, manufacturers of 'upright' bicycles lobbied to have Faure's one-hour record declared invalid. On 1 April 1934, the UCI published a new definition of a racing bicycle that specified how high the bottom bracket could be above the ground, how far it could be in front of the seat and how close it could be to the front wheel. The new definition effectively banned recumbents from UCI events for a combination of tradition, safety, and economic reasons.
Charles Mochet died a short time after the ban was enacted, still protesting against the UCI decision, and the firm continued to make recumbents under his widow and, later, Georges Mochet until at least 1941 for a limited number of customers. Their final versions were a single-chain design named the 'Vélorizontal', the final model using a 'Cyclo' four-speed gear.
After the UCI decision, Faure continued to race, and consistently beat upright bicycles with the Velocar. In 1938, Faure and Mochet's son, Georges, began adding fairings to the Velocar in hopes of bettering the world record of one hour for a bicycle with aerodynamic components. On 5 March 1938, Faure rode a faired Velocar 50.537 kilometers in an hour and became the first cyclist to travel more than 50 kilometers in an hour without the aid of a pace vehicle.
The UCI ban on recumbent bicycles and other aerodynamic improvements virtually stopped development of recumbents for four decades and remains in force. Although recumbent designs continued to crop up over the years they were mainly the work of lone enthusiasts and numbers remained insignificant until the 1970s. Georges Mochet died in 2008.
1970s resurgence and the IHPVA
While developments had been made in this fallow period by Paul Rinkowski and others, the modern recumbent movement was given a boost in 1969 when the Ground Hugger by Robert Riley was featured in Popular Mechanics. There was also the work of Chester Kyle and particularly David Gordon Wilson of MIT, two Americans who opposed the UCI restrictions and continued to work on fairings and recumbents. Kyle and his students had been experimenting with fairings for upright bicycles, which were also banned by the UCI. In 1974, they organized the first International Human Powered Speed Championships in Long Beach, California, from which the IHPVA grew. In 1975, the company of brothers John and Randy Schlitter, Rans, became the first U.S. company to produce recumbents.
In 1978, the Belgian Erik Abergen produced the "Vélérique", the first commercialized fully faired recumbent bicycle.
The Avatar 2000, an LWB bike very much like the current Easy Racers products, arrived in 1979. It was featured in the 1983 film Brainstorm, ridden by Christopher Walken, and in the popular cycling reference Richard's Bicycle Book by Richard Ballantine. From 1983 to 1991 Steven Roberts toured the U.S. in a modified Avatar, pulling a trailer with solar panels and a laptop, gaining press coverage and writing the book Computing Across America. A faired Avatar 2000 was the first two-wheeler to beat the European Vector three-wheeler in the streamliner races. For about ten years afterward, speed records were exchanged between Easy Racers, with Freddy Markham in the cockpit, and the Lightning Team; America's strength thus became the flying 200 meter sprint in the streamliner division. The oil crises of the 1970s sparked a resurgence in cycling that coincided with the arrival of these "new" designs.
A parallel but somewhat separate scene grew up in Europe, with the first European human power championships being held in 1983. The European scene was more dominated by competition than was the US, with the result that European bikes are more likely to be low SWB machines, while LWB are much more popular in the US (although there have been some notable European LWB bikes, such as the Peer Gynt).
In the 1980s
In 1984, Linear Recumbents of Iowa began producing bicycles. In 2002, Linear Manufacturing's assets were bought by Bicycle Man LLC and moved to New York. Since then, owner Peter Stull has worked with senior engineering students at Alfred University and with local engineers and machinists, using available technology, including computer-based finite element analysis (FEA), to improve the company's recumbent bikes.
In the UK, the most publicised recumbent cycle of the 1980s was the delta-configuration, sometimes electrically powered Sinclair C5. Although sold as an "electric car", the C5 could be characterised as a recumbent tricycle with electrical assistance.
A study by Bussolari and Nadel (1989) led them to pick a recumbent riding position for the Daedalus flight even though the English Channel crossing was accomplished in the Gossamer Albatross with an upright position. Drela in 1998 confirmed "that there was no significant difference in power output between recumbent and conventional bicycling."
In the 2000s
Three of the largest recumbent manufacturers in the US went out of business after the 1990s, including BikeE (August 2002), ATP-Vision (early 2004) and Burley Design Cooperative (September 2006).
Performance
Over long distances, recumbent bicycles outperform upright bicycles, as evidenced by their dominance in ultra-distance events like the 24 hours at Sebring. Official speed records for recumbents are governed by the rules of the International Human Powered Vehicle Association. A number of records are recognised, the fastest of which is the "flying 200 m", a distance of 200 m on level ground from a flying start with a maximum allowable tailwind of 1.66 m/s. The current record was set by Todd Reichert of Canada in a fully faired front-wheel-drive recumbent lowracer bicycle. The official record for an upright bicycle under IHPVA-legal conditions (but at sea level, not high altitude) was set by Jim Glover in 1986 on an English-made Moulton bicycle with a US-made hardshell fairing enclosing him and the bike.
The IHPVA hour record was set by Sam Whittingham on 19 July 2009. The latest known hour record is 92.432 kilometers (57.434 miles), set by Francesco Russo of Switzerland riding the Metastretto on the DEKRA Test Oval track in Klettwitz, Germany, on 26 June 2016.
The equivalent record for an upright bicycle was set by Victor Campenaerts in 2019. The UCI no longer considers the bike Chris Boardman rode for his 1996 record to be in compliance with its definition of an upright bicycle. Boardman's monocoque bike was designed by Mike Burrows, whose Windcheetah recumbent trike (see above) also holds the record from Land's End to John o' Groats, at 41 h 4 min 22 s with Andy Wilkinson riding.
In 2003, Rob English took on and beat the UK 4-man pursuit champions VC St Raphael in a 4000 m challenge race at Reading, beating them by a margin of 4 min 55.5 s to 5 min 6.87 s – and dropping one of the St Raphael riders along the way.
In 2009 Team RANS won the Race Across America (RAAM) on recumbents.
| Technology | Human-powered transport | null |
167258 | https://en.wikipedia.org/wiki/Dome | Dome | A dome is an architectural element similar to the hollow upper half of a sphere. There is significant overlap with the term cupola, which may also refer to a dome or a structure on top of a dome. The precise definition of a dome has been a matter of controversy and there are a wide variety of forms and specialized terms to describe them.
A dome can rest directly upon a rotunda wall, a drum, or a system of squinches or pendentives used to accommodate the transition in shape from a rectangular or square space to the round or polygonal base of the dome. The dome's apex may be closed or may be open in the form of an oculus, which may itself be covered with a roof lantern and cupola.
Domes have a long architectural lineage that extends back into prehistory. Domes were built in ancient Mesopotamia, and they have been found in Persian, Hellenistic, Roman, and Chinese architecture in the ancient world, as well as among a number of indigenous building traditions throughout the world. Dome structures were common in both Byzantine architecture and Sasanian architecture, which influenced that of the rest of Europe and Islam in the Middle Ages. The domes of European Renaissance architecture spread from Italy in the early modern period, while domes were frequently employed in Ottoman architecture at the same time. Baroque and Neoclassical architecture took inspiration from Roman domes.
Advancements in mathematics, materials, and production techniques resulted in new dome types. Domes have been constructed over the centuries from mud, snow, stone, wood, brick, concrete, metal, glass, and plastic. The symbolism associated with domes includes mortuary, celestial, and governmental traditions that have likewise altered over time. The domes of the modern world can be found over religious buildings, legislative chambers, sports stadiums, and a variety of functional structures.
Etymology
The English word "dome" ultimately derives from the ancient Greek and Latin domus ("house"), which, up through the Renaissance, labeled a revered house, such as a Domus Dei, or "House of God", regardless of the shape of its roof. This is reflected in the uses of the Italian word duomo, the German/Icelandic/Danish word dom ("cathedral"), and the English word dome as late as 1656, when it meant a "Town-House, Guild-Hall, State-House, and Meeting-House in a city." The French word dosme came to acquire the meaning of a cupola vault, specifically, by 1660. This French definition gradually became the standard usage of the English dome in the eighteenth century as many of the most impressive Houses of God were built with monumental domes, and in response to the scientific need for more technical terms.
Definitions
Across the ancient world, curved-roof structures that would today be called domes had a number of different names reflecting a variety of shapes, traditions, and symbolic associations. The shapes were derived from traditions of pre-historic shelters made from various impermanent pliable materials and were only later reproduced as vaulting in more durable materials. The hemispherical shape often associated with domes today derives from Greek geometry and Roman standardization, but other shapes persisted, including a pointed and bulbous tradition inherited by some early Islamic mosques.
Modern academic study of the topic has been controversial and confused by inconsistent definitions, such as those for cloister vaults and domical vaults. Dictionary definitions of the term "dome" are often general and imprecise. Generally speaking, it "is non-specific, a blanket-word to describe an hemispherical or similar spanning element." Published definitions include: hemispherical roofs alone; revolved arches; and vaults on a circular base alone, circular or polygonal base, circular, elliptical, or polygonal base, or an undefined area. Definitions specifying vertical sections include: semicircular, pointed, or bulbous; semicircular, segmental or pointed; semicircular, segmental, pointed, or bulbous; semicircular, segmental, elliptical, or bulbous; and high profile, hemispherical, or flattened.
Sometimes called "false" domes, corbel domes achieve their shape by extending each horizontal layer of stones inward slightly farther than the lower one until they meet at the top. A "false" dome may also refer to a wooden dome. The Italian use of the term finto, meaning "false", can be traced back to the 17th century in the use of vaulting made of reed mats and gypsum mortar. "True" domes are said to be those whose structure is in a state of compression, with constituent elements of wedge-shaped voussoirs, the joints of which align with a central point. The validity of this is unclear, as domes built underground with corbelled stone layers are in compression from the surrounding earth.
The precise definition of "pendentive" has also been a source of academic contention, such as whether or not corbelling is permitted under the definition and whether or not the lower portions of a sail vault should be considered pendentives. Domes with pendentives can be divided into two kinds: simple and compound. In the case of the simple dome, the pendentives are part of the same sphere as the dome itself; however, such domes are rare. In the case of the more common compound dome, the pendentives are part of the surface of a larger sphere below that of the dome itself and form a circular base for either the dome or a drum section.
The fields of engineering and architecture have lacked common language for domes, with engineering focused on structural behavior and architecture focused on form and symbolism. Additionally, new materials and structural systems in the 20th century have allowed for large dome-shaped structures that deviate from the traditional compressive structural behavior of masonry domes. Popular usage of the term has expanded to mean "almost any long-span roofing system".
Elements
The word "cupola" is another word for "dome", and is usually used for a small dome upon a roof or turret. "Cupola" has also been used to describe the inner side of a dome. The top of a dome is the "crown". The inner side of a dome is called the "intrados" and the outer side is called the "extrados". As with arches, the "springing" of a dome is the base level from which the dome rises and the "haunch" is the part that lies roughly halfway between the base and the top. Domes can be supported by an elliptical or circular wall called a "drum". If this structure extends to ground level, the round building may be called a "rotunda". Drums are also called "tholobates" and may or may not contain windows. A "tambour" or "lantern" is the equivalent structure over a dome's oculus, supporting a cupola.
When the base of the dome does not match the plan of the supporting walls beneath it (for example, a dome's circular base over a square bay), techniques are employed to bridge the two. One technique is to use corbelling, progressively projecting horizontal layers from the top of the supporting wall to the base of the dome, such as the corbelled triangles often used in Seljuk and Ottoman architecture. The simplest technique is to use diagonal lintels across the corners of the walls to create an octagonal base. Another is to use arches to span the corners, which can support more weight. A variety of these techniques use what are called "squinches". A squinch can be a single arch or a set of multiple projecting nested arches placed diagonally over an internal corner. Squinch forms also include trumpet arches, niche heads (or half-domes), trumpet arches with "anteposed" arches, and muqarnas arches. Squinches transfer the weight of a dome across the gaps created by the corners and into the walls. Pendentives are triangular sections of a sphere, like concave spandrels between arches, and transition from the corners of a square bay to the circular base of a dome. The curvature of the pendentives is that of a sphere with a diameter equal to the diagonal of the square bay. Pendentives concentrate the weight of a dome into the corners of the bay.
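The geometry of that last statement can be made concrete with a short calculation (a sketch; the side length s and sphere radius R are symbols introduced here for illustration, not from the text). For a square bay of side s,

\[
d = s\sqrt{2}, \qquad R = \frac{d}{2} = \frac{s}{\sqrt{2}},
\]

so the pendentives lie on a sphere of radius s/√2. The horizontal section of that sphere at height s/2 above the springing is a circle of radius \(\sqrt{R^2 - (s/2)^2} = s/2\), which is exactly the circle inscribed in the square bay: the circular base on which the dome or drum can rest.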
Materials
The earliest domes in the Middle East were built with mud-brick and, eventually, with baked brick and stone. Domes of wood allowed for wide spans due to the relatively light and flexible nature of the material and were the normal method for domed churches by the 7th century, although most domes were built with the other less flexible materials. Wooden domes were protected from the weather by roofing, such as copper or lead sheeting. Domes of cut stone were more expensive and never as large, and timber was used for large spans where brick was unavailable.
Roman concrete used an aggregate of stone with a powerful mortar. The aggregate transitioned over the centuries to pieces of fired clay, then to Roman bricks. By the sixth century, bricks with large amounts of mortar were the principal vaulting materials. Pozzolana appears to have been used only in central Italy. Brick domes were the favored choice for large-space monumental coverings until the Industrial Age, due to their convenience and dependability. Ties and chains of iron or wood could be used to resist stresses.
In the Middle East and Central Asia, domes and drums constructed from mud brick and baked brick were sometimes covered with brittle ceramic tiles on the exterior to protect against rain and snow.
The new building materials of the 19th century and a better understanding of the forces within structures from the 20th century opened up new possibilities. Iron and steel beams, steel cables, and pre-stressed concrete eliminated the need for external buttressing and enabled much thinner domes. Whereas earlier masonry domes may have had a radius to thickness ratio of 50, the ratio for modern domes can be in excess of 800. The lighter weight of these domes not only permitted far greater spans, but also allowed for the creation of large movable domes over modern sports stadiums.
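To put those ratios in concrete terms (the 25-meter radius below is an assumed example, not a figure from the text):

\[
t_{\text{masonry}} \approx \frac{25\ \text{m}}{50} = 0.5\ \text{m}, \qquad
t_{\text{modern}} \approx \frac{25\ \text{m}}{800} \approx 0.03\ \text{m},
\]

so a modern shell of the same span could be roughly three centimeters thick where a masonry dome would need half a meter.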
Experimental rammed earth domes were made as part of work on sustainable architecture at the University of Kassel in 1983.
Shapes and internal forces
A masonry dome produces thrusts downward and outward. Its internal forces are thought of in terms of two kinds of forces at right angles to one another: meridional forces (like the meridians, or lines of longitude, on a globe) are compressive only and increase towards the base, while hoop forces (like the lines of latitude on a globe) are in compression at the top and in tension at the base, with the transition in a hemispherical dome occurring at an angle of 51.8 degrees from the top. The thrusts generated by a dome are directly proportional to the weight of its materials. Grounded hemispherical domes generate significant horizontal thrusts at their haunches.
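A minimal sketch of where the 51.8-degree figure comes from, using standard membrane theory for a thin spherical shell of radius a under self-weight q per unit surface area (the symbols a, q, and φ are introduced here; φ is measured from the crown):

\[
N_\varphi = -\frac{aq}{1+\cos\varphi}, \qquad
N_\theta = aq\left(\frac{1}{1+\cos\varphi} - \cos\varphi\right).
\]

The meridional resultant \(N_\varphi\) is compressive at every level and grows toward the base, while the hoop resultant \(N_\theta\) changes from compression to tension where \(1/(1+\cos\varphi) = \cos\varphi\), that is, \(\cos^2\varphi + \cos\varphi - 1 = 0\), giving \(\cos\varphi = (\sqrt{5}-1)/2\) and \(\varphi \approx 51.8^\circ\).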
The outward thrusts in the lower portion of a hemispherical masonry dome can be counteracted with the use of chains incorporated around the circumference or with external buttressing, although cracking along the meridians is natural. For small or tall domes with less horizontal thrust, the thickness of the supporting arches or walls can be enough to resist deformation, which is why drums tend to be much thicker than the domes they support.
Unlike voussoir arches, which require support for each element until the keystone is in place, domes are stable during construction as each level is made a complete and self-supporting ring. The upper portion of a masonry dome is always in compression and is supported laterally, so it does not collapse except as a whole unit and a range of deviations from the ideal in this shallow upper cap are equally stable. Because voussoir domes have lateral support, they can be made much thinner than corresponding arches of the same span. For example, a hemispherical dome can be 2.5 times thinner than a semicircular arch, and a dome with the profile of an equilateral arch can be thinner still.
The optimal shape for a masonry dome of equal thickness provides for perfect compression, with none of the tension or bending forces against which masonry is weak. For a particular material, the optimal dome geometry is called the funicular surface, the comparable shape in three dimensions to a catenary curve for a two-dimensional arch. Adding a weight to the top of a pointed dome, such as the heavy cupola at the top of Florence Cathedral, changes the optimal shape to more closely match the actual pointed shape of the dome. The pointed profiles of many Gothic domes more closely approximate the optimal dome shape than do hemispheres, which were favored by Roman and Byzantine architects due to the circle being considered the most perfect of forms.
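The two-dimensional analogue mentioned above has a simple closed form. Under the usual idealization of a uniform weight w per unit arc length and horizontal thrust H (notation introduced here for illustration), the catenary is

\[
y = a \cosh\frac{x}{a}, \qquad a = \frac{H}{w}.
\]

An arch shaped as an inverted catenary carries its own weight in pure axial compression; the funicular surface of a dome plays the same role in three dimensions but generally has no such closed form, since it depends on the shell's load distribution.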
Symbolism
According to E. Baldwin Smith, from the late Stone Age the dome-shaped tomb was used as a reproduction of the ancestral, god-given shelter made permanent as a venerated home of the dead. The instinctive desire to do this resulted in widespread domical mortuary traditions across the ancient world, from the stupas of India to the tholos tombs of Iberia. By Hellenistic and Roman times, the domical tholos had become the customary cemetery symbol.
Domes and tent-canopies were also associated with the heavens in Ancient Persia and the Hellenistic-Roman world. A dome over a square base reflected the geometric symbolism of those shapes. The circle represented perfection, eternity, and the heavens. The square represented the earth. An octagon was intermediate between the two. The distinct symbolism of the heavenly or cosmic tent stemming from the royal audience tents of Achaemenid and Indian rulers was adopted by Roman rulers in imitation of Alexander the Great, becoming the imperial baldachin. This probably began with Nero, whose "Golden House" also made the dome a feature of palace architecture.
The dual sepulchral and heavenly symbolism was adopted by early Christians in both the use of domes in architecture and in the ciborium, a domical canopy like the baldachin used as a ritual covering for relics or the church altar. The celestial symbolism of the dome, however, was the preeminent one by the Christian era. In the early centuries of Islam, domes were closely associated with royalty. A dome built in front of the mihrab of a mosque, for example, was at least initially meant to emphasize the place of a prince during royal ceremonies. Over time such domes became primarily focal points for decoration or the direction of prayer. The use of domes in mausoleums can likewise reflect royal patronage or be seen as representing the honor and prestige that domes symbolized, rather than having any specific funerary meaning. The wide variety of dome forms in medieval Islam reflected dynastic, religious, and social differences as much as practical building considerations.
Acoustics
Because domes are concave from below, they can reflect sound and create echoes. A dome may have a "whispering gallery" at its base that at certain places transmits distinct sound to other distant places in the gallery. The half-domes over the apses of Byzantine churches helped to project the chants of the clergy. Although this can complement music, it may make speech less intelligible, leading Francesco Giorgi in 1535 to recommend vaulted ceilings for the choir areas of a church, but a flat ceiling filled with as many coffers as possible for where preaching would occur.
Cavities in the form of jars built into the inner surface of a dome may serve to compensate for this interference by diffusing sound in all directions, eliminating echoes while creating a "divine effect in the atmosphere of worship." This technique was written about by Vitruvius in his Ten Books on Architecture, which describes bronze and earthenware resonators. The material, shape, contents, and placement of these cavity resonators determine the effect they have: reinforcing certain frequencies or absorbing them.
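In modern acoustical terms, such jars behave as Helmholtz resonators. A standard first-order estimate of the frequency a jar reinforces or absorbs (symbols introduced here, not from the text) is

\[
f \approx \frac{c}{2\pi}\sqrt{\frac{A}{V\,L_{\text{eff}}}},
\]

where c is the speed of sound, A the cross-sectional area of the jar's neck, V the cavity volume, and \(L_{\text{eff}}\) the effective neck length. Changing the jar's shape or contents changes V and A, and so shifts the frequency it targets, consistent with the passage above.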
Types
Beehive dome
Also called a corbelled dome, cribbed dome, or false dome, these are different from a 'true dome' in that they consist of purely horizontal layers. As the layers get higher, each is slightly cantilevered, or corbeled, toward the center until meeting at the top. A monumental example is the Mycenaean Treasury of Atreus from the late Bronze Age.
Braced dome
A single or double layer space frame in the form of a dome, a braced dome is a generic term that includes ribbed, Schwedler, three-way grid, lamella or Kiewitt, lattice, and geodesic domes. The different terms reflect different arrangements in the surface members. Braced domes often have a very low weight and are usually used to cover spans of up to 150 meters. Often prefabricated, their component members can either lie on the dome's surface of revolution, or be straight lengths with the connecting points or nodes lying upon the surface of revolution. Single-layer structures are called frame or skeleton types and double-layer structures are truss types, which are used for large spans. When the covering also forms part of the structural system, it is called a stressed skin type. The formed surface type consists of sheets joined at bent edges to form the structure.
Cloister vault
Also called domical vaults (a term sometimes also applied to sail vaults), polygonal domes, coved domes, gored domes, segmental domes (a term sometimes also used for saucer domes), paneled vaults, or pavilion vaults, these are domes that maintain a polygonal shape in their horizontal cross section. The component curved surfaces of these vaults are called severies, webs, or cells. The earliest known examples date to the first century BC, such as the Tabularium of Rome from 78 BC. Others include the Baths of Antoninus in Carthage (145–160) and the Palatine Chapel at Aachen (13th – 14th century). The most famous example is the Renaissance octagonal dome of Filippo Brunelleschi over the Florence Cathedral. Thomas Jefferson, the third president of the United States, installed an octagonal dome above the West front of his plantation house, Monticello.
Compound dome
Also called domes on pendentives or pendentive domes (a term also applied to sail vaults), compound domes have pendentives that support a smaller diameter dome immediately above them, as in the Hagia Sophia, or a drum and dome, as in many Renaissance and post-Renaissance domes, with both forms resulting in greater height.
Crossed-arch dome
One of the earliest types of ribbed vault, the first known examples are found in the Great Mosque of Córdoba in the 10th century. Rather than meeting in the center of the dome, the ribs characteristically intersect one another off-center, forming an empty polygonal space in the center. Geometry is a key element of the designs, with the octagon being perhaps the most popular shape used. Whether the arches are structural or purely decorative remains a matter of debate. The type may have an eastern origin, although the issue is also unsettled. Examples are found in Spain, North Africa, Armenia, Iran, France, and Italy.
Ellipsoidal dome
The ellipsoidal dome is a surface formed by the rotation around a vertical axis of a semi-ellipse. Like other "rotational domes" formed by the rotation of a curve around a vertical axis, ellipsoidal domes have circular bases and horizontal sections and are a type of "circular dome" for that reason.
Geodesic dome
Geodesic domes are the upper portion of geodesic spheres. They are composed of a framework of triangles in a polyhedron pattern. The structures are named for geodesics and are based upon geometric shapes such as icosahedrons, octahedrons or tetrahedrons. Such domes can be created using a limited number of simple elements and joints and efficiently resolve a dome's internal forces. Their efficiency is said to increase with size. Although not first invented by Buckminster Fuller, they are associated with him because he designed many geodesic domes and patented them in the United States.
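As a rough illustration of how such a dome resolves into a small set of repeated elements, the following Python sketch (an assumed construction written for this article, not a published algorithm or any library's API) generates the vertices of a frequency-2 geodesic sphere by subdividing the faces of an icosahedron and projecting the points onto the unit sphere; keeping the upper half gives a dome.

import itertools
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def icosahedron_vertices():
    """12 vertices: cyclic permutations of (0, +/-1, +/-PHI), normalized to the unit sphere."""
    verts = set()
    for s1, s2 in itertools.product((-1, 1), repeat=2):
        verts.update({(0, s1, s2 * PHI), (s1, s2 * PHI, 0), (s2 * PHI, 0, s1)})
    norm = math.sqrt(1 + PHI * PHI)
    return [tuple(c / norm for c in v) for v in sorted(verts)]

def icosahedron_faces(verts):
    """The 20 triangular faces are exactly the triples of mutually adjacent vertices."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    n = len(verts)
    edge = min(d2(verts[i], verts[j]) for i in range(n) for j in range(i + 1, n))
    adj = lambda i, j: abs(d2(verts[i], verts[j]) - edge) < 1e-9
    return [(i, j, k)
            for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n)
            if adj(i, j) and adj(j, k) and adj(i, k)]

def geodesic_points(nu=2):
    """Vertices of a frequency-nu geodesic sphere: subdivide each face, project to the sphere."""
    verts = icosahedron_vertices()
    pts = set()
    for ia, ib, ic in icosahedron_faces(verts):
        a, b, c = verts[ia], verts[ib], verts[ic]
        for i in range(nu + 1):
            for j in range(nu + 1 - i):
                k = nu - i - j
                p = tuple((i * a[d] + j * b[d] + k * c[d]) / nu for d in range(3))
                r = math.sqrt(sum(x * x for x in p))
                pts.add(tuple(round(x / r, 9) for x in p))  # project and deduplicate
    return pts

if __name__ == "__main__":
    sphere = geodesic_points(nu=2)
    dome = [p for p in sphere if p[2] >= -1e-9]  # keep the upper half for a dome
    print(len(sphere), "sphere vertices;", len(dome), "dome vertices")

Run as-is, this prints 42 sphere vertices, reflecting the point made above: a large spherical framework is built from a handful of distinct strut lengths and joints, and the count grows only gradually with the subdivision frequency nu.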
Hemispherical dome
The hemispherical dome is a surface formed by the rotation around a vertical axis of a semicircle. Like other "rotational domes" formed by the rotation of a curve around a vertical axis, hemispherical domes have circular bases and horizontal sections and are a type of "circular dome" for that reason. They experience vertical compression along their meridians, but horizontally experience compression only in the portion above 51.8 degrees from the top. Below this point, hemispherical domes experience tension horizontally, and usually require buttressing to counteract it. According to E. Baldwin Smith, it was a shape likely known to the Assyrians, defined by Greek theoretical mathematicians, and standardized by Roman builders.
Onion dome
Bulbous domes bulge out beyond their base diameters, offering a profile greater than a hemisphere. An onion dome is a greater than hemispherical dome with a pointed top in an ogee profile. They are found in the Near East, Middle East, Persia, and India and may not have had a single point of origin. Their appearance in northern Russian architecture predates the Tatar occupation of Russia and so is not easily explained as the result of that influence. They became popular in the second half of the 15th century in the Low Countries of Northern Europe, possibly inspired by the finials of minarets in Egypt and Syria, and developed in the 16th and 17th centuries in the Netherlands before spreading to Germany, becoming a popular element of the baroque architecture of Central Europe. German bulbous domes were also influenced by Russian and Eastern European domes. The examples found in various European architectural styles are typically wooden. Examples include Kazan Church in Kolomenskoye and the Brighton Pavilion by John Nash. In Islamic architecture, they are typically made of masonry, rather than timber, with the thick and heavy bulging portion serving to buttress against the tendency of masonry domes to spread at their bases. The Taj Mahal is a famous example.
Oval dome
An oval dome is a dome of oval shape in plan, profile, or both. The term comes from the Latin ovum, meaning "egg". The earliest oval domes arose as a matter of convenience in corbelled stone huts, as rounded but geometrically undefined coverings, and the first examples in Asia Minor date to around 4000 B.C. The geometry was eventually defined using combinations of circular arcs, transitioning at points of tangency. If the Romans created oval domes, it was only in exceptional circumstances. The Roman foundations of the oval plan Church of St. Gereon in Cologne point to a possible example. Domes in the Middle Ages also tended to be circular, though the church of Santo Tomás de las Ollas in Spain has an oval dome over its oval plan. Other examples of medieval oval domes can be found covering rectangular bays in churches. Oval plan churches became a type in the Renaissance and popular in the Baroque style. The dome built for the basilica of Vicoforte by Francesco Gallo was one of the largest and most complex ever made. Although the ellipse was known, in practice, domes of this shape were created by combining segments of circles. Popular in the 16th and 17th centuries, oval and elliptical plan domes can vary their dimensions in three axes or two axes. A sub-type with the long axis having a semicircular section is called a Murcia dome, as in the Chapel of the Junterones at Murcia Cathedral. When the short axis has a semicircular section, it is called a Melon dome.
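The tangency rule behind these arc constructions is simple (a sketch; the symbols are introduced here for illustration): two circular arcs of radii \(r_1\) and \(r_2\) join smoothly at a point P only if P and the two centres \(O_1\), \(O_2\) are collinear, which for arcs curving the same way means

\[
\lvert O_1 O_2 \rvert = \lvert r_1 - r_2 \rvert.
\]

A typical four-centre oval is therefore laid out from two pairs of centres, small-radius arcs at the ends and large-radius arcs along the sides, with each junction point lying on the line through the adjacent pair of centres.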
Paraboloid dome
A paraboloid dome is a surface formed by the rotation around a vertical axis of a sector of a parabola. Like other "rotational domes" formed by the rotation of a curve around a vertical axis, paraboloid domes have circular bases and horizontal sections and are a type of "circular dome" for that reason. Because of their shape, paraboloid domes experience only compression, both radially and horizontally.
Sail dome
Also called sail vaults, handkerchief vaults, domical vaults (a term sometimes also applied to cloister vaults), pendentive domes (a term that has also been applied to compound domes), Bohemian vaults, or Byzantine domes, this type can be thought of as pendentives that, rather than merely touching each other to form a circular base for a drum or compound dome, smoothly continue their curvature to form the dome itself. The dome gives the impression of a square sail pinned down at each corner and billowing upward. These can also be thought of as saucer domes upon pendentives. Sail domes are based upon the shape of a hemisphere and are not to be confused with elliptic parabolic vaults, which appear similar but have different characteristics. In addition to semicircular sail vaults there are variations in geometry such as a low rise to span ratio or covering a rectangular plan. Sail vaults of all types have a variety of thrust conditions along their borders, which can cause problems, but have been widely used from at least the sixteenth century. The second floor of the Llotja de la Seda is covered by a series of nine meter wide sail vaults.
Saucer dome
Also called segmental domes (a term sometimes also used for cloister vaults), or calottes, these have profiles of less than half a circle. Because they reduce the portion of the dome in tension, these domes are strong but have increased radial thrust. Many of the largest existing domes are of this shape.
Masonry saucer domes, because they exist entirely in compression, can be built much thinner than other dome shapes without becoming unstable. The trade-off between the proportionately increased horizontal thrust at their abutments and their decreased weight and quantity of materials may make them more economical, but they are more vulnerable to damage from movement in their supports.
Umbrella dome
Also called gadrooned, fluted, organ-piped, pumpkin, melon, ribbed, parachute, scalloped, or lobed domes, these are a type of dome divided at the base into curved segments, which follow the curve of the elevation. "Fluted" may refer specifically to this pattern as an external feature, such as was common in Mamluk Egypt. The "ribs" of a dome are the radial lines of masonry that extend from the crown down to the springing. The central dome of the Hagia Sophia uses the ribbed method, which accommodates a ring of windows between the ribs at the base of the dome. The central dome of St. Peter's Basilica also uses this method.
History
Early history and simple domes
Cultures from pre-history to modern times constructed domed dwellings using local materials. Although it is not known when the first dome was created, sporadic examples of early domed structures have been discovered. The earliest discovered may be four small dwellings made of mammoth tusks and bones. The first was found by a farmer in Mezhirich, Ukraine, in 1965 while he was digging in his cellar; archaeologists unearthed three more. They date from 19,280–11,700 BC.
In modern times, the creation of relatively simple dome-like structures has been documented among various indigenous peoples around the world. The wigwam was made by Native Americans using arched branches or poles covered with grass or hides. The Efé people of central Africa construct similar structures, using leaves as shingles. Another example is the igloo, a shelter built from blocks of compact snow and used by the Inuit, among others. The Himba people of Namibia construct "desert igloos" of wattle and daub for use as temporary shelters at seasonal cattle camps, and as permanent homes by the poor. Extraordinarily thin domes of sun-baked clay 20 feet in diameter, 30 feet high, and nearly parabolic in curve, are known from Cameroon.
The historical development from structures like these to more sophisticated domes is not well documented. That the dome was known to early Mesopotamia may explain the existence of domes in both China and the West in the first millennium BC. Another explanation, however, is that the use of the dome shape in construction did not have a single point of origin and was common in virtually all cultures long before domes were constructed with enduring materials.
Corbelled stone domes have been found from the Neolithic period in the ancient Near East, and in the Middle East to Western Europe from antiquity. The kings of Achaemenid Persia held audiences and festivals in domical tents derived from the nomadic traditions of central Asia. Simple domical mausoleums existed in the Hellenistic period. Indian bas-relief sculptures from Sāñcī (1st century BC), Bhārhut (2nd century BC), and Amarāvatī (2nd century BC), show domed huts, shrines, and pavilions. The remains of a large domed circular hall in the Parthian capital city of Nyssa has been dated to perhaps the first century AD, showing "...the existence of a monumental domical tradition in Central Asia that had hitherto been unknown and which seems to have preceded Roman Imperial monuments or at least to have grown independently from them." It likely had a wooden dome.
Persian domes
Persian architecture likely inherited an architectural tradition of dome-building dating back to the earliest Mesopotamian domes. Due to the scarcity of wood in many areas of the Iranian plateau and Greater Iran, domes were an important part of vernacular architecture throughout Persian history. The Persian invention of the squinch, a series of concentric arches forming a half-cone over the corner of a room, enabled the transition from the walls of a square chamber to an octagonal base for a dome in a way reliable enough for large constructions and domes moved to the forefront of Persian architecture as a result. Pre-Islamic domes in Persia are commonly semi-elliptical, with pointed domes and those with conical outer shells being the majority of the domes in the Islamic periods.
The area of north-eastern Iran was, along with Egypt, one of two areas notable for early developments in Islamic domed mausoleums, which appear in the tenth century. The Samanid Mausoleum in Transoxiana dates to no later than 943 and is the first to have squinches create a regular octagon as a base for the dome, which then became the standard practice. Cylindrical or polygonal plan tower tombs with conical roofs over domes also exist beginning in the 11th century.
The Seljuk Empire's notables built tomb-towers, called "Turkish Triangles", as well as cube mausoleums covered with a variety of dome forms. Seljuk domes included conical, semi-circular, and pointed shapes in one or two shells. Shallow semi-circular domes are mainly found from the Seljuk era. The double-shell domes were either discontinuous or continuous. The domed enclosure of the Jameh Mosque of Isfahan, built in 1086-7 by Nizam al-Mulk, was the largest masonry dome in the Islamic world at that time, had eight ribs, and introduced a new form of corner squinch with two quarter domes supporting a short barrel vault. In 1088 Tāj-al-Molk, a rival of Nizam al-Mulk, built another dome at the opposite end of the same mosque with interlacing ribs forming five-pointed stars and pentagons. This is considered the landmark Seljuk dome, and may have inspired subsequent patterning and the domes of the Il-Khanate period. The use of tile and of plain or painted plaster to decorate dome interiors, rather than brick, increased under the Seljuks.
Beginning in the Ilkhanate, Persian domes achieved their final configuration of structural supports, zone of transition, drum, and shells, and subsequent evolution was restricted to variations in form and shell geometry. Characteristic of these domes are the use of high drums and several types of discontinuous double-shells, and the development of triple-shells and internal stiffeners occurred at this time. The construction of tomb towers decreased. The 7.5 meter wide double dome of Soltan Bakht Agha Mausoleum (1351–1352) is the earliest known example in which the two shells of the dome have significantly different profiles, which spread rapidly throughout the region. The development of taller drums also continued into the Timurid period. The large, bulbous, fluted domes on tall drums that are characteristic of 15th century Timurid architecture were the culmination of the Central Asian and Iranian tradition of tall domes with glazed tile coverings in blue and other colors.
The domes of the Safavid dynasty (1501–1732) are characterized by a distinctive bulbous profile and are considered the last generation of Persian domes. They are generally thinner than earlier domes and are decorated with a variety of colored glazed tiles and complex vegetal patterns, and they were influential on those of other Islamic styles, such as the Mughal architecture of India. An exaggerated style of onion dome on a short drum, as can be seen at the Shah Cheragh (1852–1853), first appeared in the Qajar period. Domes have remained important in modern mausoleums, and domed cisterns and icehouses remain common sights in the countryside.
East Asian domes
Very little has survived of ancient Chinese architecture, due to the extensive use of timber as a building material. Brick and stone vaults used in tomb construction have survived, and the corbeled dome was used, rarely, in tombs and temples. The earliest true domes found in Chinese tombs were shallow cloister vaults, called simian jieding, derived from the Han use of barrel vaulting. Unlike the cloister vaults of western Europe, the corners are rounded off as they rise. The first known example is a brick tomb dating from the end of the Western Han period, near the modern city of Xiangcheng in Henan Province. These four-sided domes used small interlocking bricks and enabled a square space near the entrance of a tomb large enough for several people that may have been used for funeral ceremonies. The interlocking brick technique was rapidly adopted and four-sided domes became widespread outside Henan by the end of the first century AD.
A model of a tomb found with a shallow true dome from the late Han dynasty (206 BC – 220 AD) can be seen at the Guangzhou Museum (Canton). Another, the Lei Cheng Uk Han Tomb, found in Hong Kong in 1955, has a design common among Eastern Han dynasty (25 AD – 220 AD) tombs in South China: a barrel vaulted entrance leading to a domed front hall with barrel vaulted chambers branching from it in a cross shape. It is the only such tomb that has been found in Hong Kong and is exhibited as part of the Hong Kong Museum of History.
During the Three Kingdoms period (220–280), the "cross-joint dome" (siyuxuanjinshi) was developed under the Wu and Western Jin dynasties south of the Yangtze River, with arcs building out from the corners of a square room until they met and joined at the center. These domes were stronger, had a steeper angle, and could cover larger areas than the relatively shallow cloister vaults. Over time, they were made taller and wider. There were also corbel vaults, called diese, although these are the weakest type. Some tombs of the Song dynasty (960–1279) have beehive domes.
The Seokguram Grotto (751), built in the Korean city of Gyeongju during the Unified Silla period, includes a domed chamber 7.2 meters wide covering a statue of the Buddha. The dome is made from blocks of granite, with the flat cap of the dome decorated with a lotus flower motif. The dome is unique in north-east Asia.
The Buddhist monastery of Baoguo near Ningbo has three domes dated to 1013. The Daoist monastery Yongle Gong in Shanxi has domes in its Hall of the Three Purities, from the 13th century.
The Fenghuang Mosque in Hangzhou has three domes along its back wall dating to the Yuan dynasty. The central dome is 8 meters in diameter and covered by an octagonal roof. The north and south flanking domes are 6.8 meters and 7.2 meters wide, respectively, and covered by hexagonal roofs. The zones of transition under the domes use a tiered system similar to muqarnas or the corner bracketing found in Chinese temples.
Roman and Byzantine domes
Roman domes are found in baths, villas, palaces, and tombs, with oculi as common features. They are customarily hemispherical in shape and partially or totally concealed on the exterior. To buttress the horizontal thrusts of a large hemispherical masonry dome, the supporting walls were built up beyond the base to at least the haunches of the dome, and the dome was then also sometimes covered with a conical or polygonal roof.
Domes reached monumental size in the Roman Imperial period. Roman baths played a leading role in the development of domed construction in general, and monumental domes in particular. Modest domes in baths dating from the 2nd and 1st centuries BC are seen in Pompeii, in the cold rooms of the Terme Stabiane and the Terme del Foro. However, the extensive use of domes did not occur before the 1st century AD. Domed construction increased under Emperor Nero and the Flavians in the 1st century AD, and during the 2nd century. Centrally-planned halls became increasingly important parts of palace and palace villa layouts beginning in the 1st century, serving as state banqueting halls, audience rooms, or throne rooms. The Pantheon, a temple in Rome completed by Emperor Hadrian as part of the Baths of Agrippa, is the most famous, best preserved, and largest Roman dome. Segmented domes, made of radially concave wedges or of alternating concave and flat wedges, appear under Hadrian in the 2nd century and most preserved examples of this style date from this period.
In the 3rd century, Imperial mausoleums began to be built as domed rotundas, rather than as tumulus structures or other types, following similar monuments by private citizens. The technique of building lightweight domes with interlocking hollow ceramic tubes further developed in North Africa and Italy in the late third and early fourth centuries. In the 4th century, Roman domes proliferated due to changes in the way domes were constructed, including advances in centering techniques and the use of brick ribbing. The material of choice in construction gradually transitioned during the 4th and 5th centuries from stone or concrete to lighter brick in thin shells. Baptisteries began to be built in the manner of domed mausoleums during the 4th century in Italy. The octagonal Lateran baptistery or the baptistery of the Holy Sepulchre may have been the first, and the style spread during the 5th century. By the 5th century, structures with small-scale domed cross plans existed across the Christian world.
With the end of the Western Roman Empire, domes became a signature feature of the church architecture of the surviving Eastern Roman — or "Byzantine" — Empire. 6th-century church building by the Emperor Justinian used the domed cross unit on a monumental scale, and his architects made the domed brick-vaulted central plan standard throughout the Roman east. This divergence with the Roman west from the second third of the 6th century may be considered the beginning of a "Byzantine" architecture. Justinian's Hagia Sophia was an original and innovative design with no known precedents in the way it covers a basilica plan with dome and semi-domes. Periodic earthquakes in the region have caused three partial collapses of the dome and necessitated repairs.
"Cross-domed units", a more secure structural system created by bracing a dome on all four sides with broad arches, became a standard element on a smaller scale in later Byzantine church architecture. The Cross-in-square plan, with a single dome at the crossing or five domes in a quincunx pattern, became widely popular in the Middle Byzantine period (c. 843–1204). It is the most common church plan from the tenth century until the fall of Constantinople in 1453. Resting domes on circular or polygonal drums pierced with windows eventually became the standard style, with regional characteristics.
In the Byzantine period, domes were normally hemispherical and had, with occasional exceptions, windowed drums. All of the surviving examples in Constantinople are ribbed or pumpkin domes, with the divisions corresponding to the number of windows. Roofing for domes ranged from simple ceramic tile to more expensive, more durable, and more form-fitting lead sheeting. Metal clamps between stone cornice blocks, metal tie rods, and metal chains were also used to stabilize domed construction. The technique of using double shells for domes, although revived in the Renaissance, originated in Byzantine practice.
Arabic and Western European domes
The Syria and Palestine area has a long tradition of domical architecture, including wooden domes in shapes described as "conoid", or similar to pine cones. When the Arab Muslim forces conquered the region, they employed local craftsmen for their buildings and, by the end of the 7th century, the dome had begun to become an architectural symbol of Islam. In addition to religious shrines, such as the Dome of the Rock, domes were used over the audience and throne halls of Umayyad palaces, and as part of porches, pavilions, fountains, towers and the caldaria of baths. Blending the architectural features of both Byzantine and Persian architecture, the domes used both pendentives and squinches and were made in a variety of shapes and materials. Although architecture in the region would decline following the movement of the capital to Iraq under the Abbasids in 750, mosques built after a revival in the late 11th century usually followed the Umayyad model. Early versions of bulbous domes can be seen in mosaic illustrations in Syria dating to the Umayyad period. They were used to cover large buildings in Syria after the eleventh century.
Italian church architecture from the late sixth century to the end of the eighth century was influenced less by the trends of Constantinople than by a variety of Byzantine provincial plans. With the crowning of Charlemagne as a new Roman Emperor, Byzantine influences were largely replaced in a revival of earlier Western building traditions. Occasional exceptions include examples of early quincunx churches at Milan and near Cassino. Another exception is the Palatine Chapel, whose domed octagon design was influenced by Byzantine models; it was the largest dome north of the Alps at that time. Venice, Southern Italy and Sicily served as outposts of Middle Byzantine architectural influence in Italy.
The Great Mosque of Córdoba contains the first known examples of the crossed-arch dome type. The use of corner squinches to support domes was widespread in Islamic architecture by the 10th and 11th centuries. After the ninth century, mosques in North Africa often have a small decorative dome over the mihrab. Additional domes are sometimes used at the corners of the mihrab wall, at the entrance bay, or on the square tower minarets. Egypt, along with north-eastern Iran, was one of two areas notable for early developments in Islamic mausoleums, beginning in the 10th century. Fatimid mausoleums were mostly simple square buildings covered by a dome. Domes were smooth or ribbed and had a characteristic Fatimid "keel" shape profile.
Domes in Romanesque architecture are generally found within crossing towers at the intersection of a church's nave and transept, which conceal the domes externally. They are typically octagonal in plan and use corner squinches to translate a square bay into a suitable octagonal base. They appear "in connection with basilicas almost throughout Europe" between 1050 and 1100. The Crusades, beginning in 1095, also appear to have influenced domed architecture in Western Europe, particularly in the areas around the Mediterranean Sea. The Knights Templar, headquartered at the site, built a series of centrally planned churches throughout Europe modeled on the Church of the Holy Sepulchre, with the Dome of the Rock also an influence. In southwest France, there are over 250 domed Romanesque churches in the Périgord region alone. The use of pendentives to support domes in the Aquitaine region, rather than the squinches more typical of western medieval architecture, strongly implies a Byzantine influence. Gothic domes are uncommon due to the use of rib vaults over naves, and with church crossings usually focused instead by a tall steeple, but there are examples of small octagonal crossing domes in cathedrals as the style developed from the Romanesque.
Star-shaped domes found at the Moorish palace of the Alhambra in Granada, Spain, the Hall of the Abencerrajes (c. 1333–91) and the Hall of the two Sisters (c. 1333–54), are extraordinarily developed examples of muqarnas domes. In the first half of the fourteenth century, stone blocks replaced bricks as the primary building material in the dome construction of Mamluk Egypt and, over the course of 250 years, around 400 domes were built in Cairo to cover the tombs of Mamluk sultans and emirs. Dome profiles were varied, with "keel-shaped", bulbous, ogee, stilted domes, and others being used. On the drum, angles were chamfered, or sometimes stepped, externally and triple windows were used in a tri-lobed arrangement on the faces. Bulbous cupolas on minarets were used in Egypt beginning around 1330, spreading to Syria in the following century. In the fifteenth century, pilgrimages to and flourishing trade relations with the Near East exposed the Low Countries of northwest Europe to the use of bulbous domes in the architecture of the Orient and such domes apparently became associated with the city of Jerusalem. Multi-story spires with truncated bulbous cupolas supporting smaller cupolas or crowns became popular in the sixteenth century.
Russian domes
The multidomed church is a typical form of Russian church architecture that distinguishes Russia from other Orthodox nations and Christian denominations. Indeed, the earliest Russian churches, built just after the Christianization of Kievan Rus', were multi-domed, which has led some historians to speculate about how Russian pre-Christian pagan temples might have looked. Examples of these early churches are the 13-domed wooden Saint Sophia Cathedral in Novgorod (989) and the 25-domed stone Desyatinnaya Church in Kiev (989–996). The number of domes typically has a symbolic meaning in Russian architecture; for example, 13 domes symbolize Christ with the 12 Apostles, while 25 domes add the 12 Prophets of the Old Testament. The multiple domes of Russian churches were often comparatively smaller than Byzantine domes.
Plentiful timber in Russia made wooden domes common and at least partially contributed to the popularity of onion domes, which were easier to shape in wood than in masonry. The earliest stone churches in Russia featured Byzantine style domes, however by the Early Modern era the onion dome had become the predominant form in traditional Russian architecture. The onion dome is a dome whose shape resembles an onion, after which they are named. Such domes are often larger in diameter than the drums they sit on, and their height usually exceeds their width. The whole bulbous structure tapers smoothly to a point. Though the earliest preserved Russian domes of such type date from the 16th century, illustrations from older chronicles indicate they have existed since the late 13th century. Like tented roofs—which were combined with, and sometimes replaced, domes in Russian architecture since the 16th century—onion domes initially were used only in wooden churches. Builders introduced them into stone architecture much later, and continued to make their carcasses of either wood or metal on top of masonry drums.
Russian domes are often gilded or brightly painted. A dangerous technique of chemical gilding using mercury had been applied on some occasions until the mid-19th century, most notably in the giant dome of Saint Isaac's Cathedral. The more modern and safe method of gold electroplating was applied for the first time in gilding the domes of the Cathedral of Christ the Saviour in Moscow, the tallest Eastern Orthodox church in the world.
Ukrainian domes
The domes of the Saint Sophia Cathedral and Dormition Cathedral were remodeled to the helmet-shaped baroque style by Ivan Mazepa in the early 18th century, who also paid for gilding of the domes. Mazepa's reign also included the construction of an octagonal western bay with a baroque dome (1672) and five helmet-shaped domes over Boris and Gleb Cathedral in Chernihiv, which were removed in the 20th century by the Soviet government.
Ottoman domes
The rise of the Ottoman Empire and its spread in Asia Minor and the Balkans coincided with the decline of the Seljuk Turks and the Byzantine Empire. Early Ottoman buildings, for almost two centuries after 1300, were characterized by a blending of Ottoman culture and indigenous architecture, and the pendentive dome was used throughout the empire. The Byzantine dome form was adopted and further developed. Ottoman architecture made exclusive use of the semi-spherical dome for vaulting over even very small spaces, influenced by the earlier traditions of both Byzantine Anatolia and Central Asia. The smaller the structure, the simpler the plan, but mosques of medium size were also covered by single domes.
Early experiments with large domes include the domed square mosques of Çine and Mudurnu under Bayezid I, and the later domed "zawiya-mosques" at Bursa. The Üç Şerefeli Mosque at Edirne developed the idea of the central dome being a larger version of the domed modules used throughout the rest of the structure to generate open space. This idea became important to the Ottoman style as it developed.
The Bayezid II Mosque (1501–1506) in Istanbul begins the classical period in Ottoman architecture, in which the great imperial mosques, with variations, resemble the former Byzantine basilica of Hagia Sophia in having a large central dome with semi-domes of the same span to the east and west. Hagia Sophia's central dome arrangement is largely reproduced in three Ottoman mosques in Istanbul: the Bayezid II Mosque, the Kılıç Ali Pasha Mosque, and the Süleymaniye Mosque. Other Imperial mosques in Istanbul added semi-domes to the north and south, doing away with the basilica plan, starting with the Şehzade Mosque and seen again in later examples such as the Sultan Ahmed I Mosque and the Yeni Cami. The classical period lasted into the 17th century but its peak is associated with the architect Mimar Sinan in the 16th century. In addition to large imperial mosques, he designed hundreds of other monuments, including medium-sized mosques such as the Mihrimah Sultan Mosque, Sokollu Mehmed Pasha Mosque, and Rüstem Pasha Mosque and the tomb of Suleiman the Magnificent, with its double-shell dome. The Süleymaniye Mosque, built from 1550 to 1557, has a main dome 53 meters high with a diameter of 26.5 meters. At the time it was built, the dome was the highest in the Ottoman Empire when measured from sea level, but lower from the floor of the building and smaller in diameter than that of the nearby Hagia Sophia.
Another classical domed mosque type is, like the Byzantine church of Sergius and Bacchus, the domed polygon within a square. Octagons and hexagons were common, such as those of the Üç Şerefeli Mosque (1437–1447) and the Selimiye Mosque in Edirne. The Selimiye Mosque was the first structure built by the Ottomans with a larger dome than that of Hagia Sophia. The dome rises above a square bay: corner semi-domes convert this into an octagon, which muqarnas transition to a circular base. The dome has an average internal diameter of about 31.5 meters, while that of Hagia Sophia averages 31.3 meters. Mimar Sinan designed and built the mosque between 1568 and 1574, finishing it at the age of 86, and he considered it his masterpiece.
Italian Renaissance domes
Filippo Brunelleschi's octagonal brick domical vault over Florence Cathedral was built between 1420 and 1436 and the lantern surmounting the dome was completed in 1467. The dome is 42 meters wide and made of two shells. The dome is not itself Renaissance in style, although the lantern is closer. A combination of dome, drum, pendentives, and barrel vaults developed as the characteristic structural forms of large Renaissance churches following a period of innovation in the later fifteenth century. Florence was the first Italian city to develop the new style, followed by Rome and then Venice. Brunelleschi's domes at San Lorenzo and the Pazzi Chapel established them as a key element of Renaissance architecture. His plan for the dome of the Pazzi Chapel in Florence's Basilica of Santa Croce (1430–52) illustrates the Renaissance enthusiasm for geometry and for the circle as geometry's supreme form. This emphasis on geometric essentials would be very influential.
De re aedificatoria, written by Leon Battista Alberti around 1452, recommends vaults with coffering for churches, as in the Pantheon, and the first design for a dome at St. Peter's Basilica in Rome is usually attributed to him, although the recorded architect is Bernardo Rossellino. This would culminate in Bramante's 1505–06 projects for a wholly new St. Peter's Basilica, marking the beginning of the displacement of the Gothic ribbed vault with the combination of dome and barrel vault, which proceeded throughout the sixteenth century. Bramante's initial design was for a Greek cross plan with a large central hemispherical dome and four smaller domes around it in a quincunx pattern. Work began in 1506 and continued under a succession of builders over the next 120 years. The dome was completed by Giacomo della Porta and Domenico Fontana. The publication of Sebastiano Serlio's treatise, one of the most popular architectural treatises ever published, was responsible for the spread of the oval in late Renaissance and Baroque architecture throughout Italy, Spain, France, and central Europe.
The Villa Capra, also known as "La Rotonda", was built by Andrea Palladio from 1565 to 1569 near Vicenza. Its highly symmetrical square plan centers on a circular room covered by a dome, and it proved highly influential on the Georgian architects of 18th-century England, on architects in Russia, and on architects in America, Thomas Jefferson among them. Palladio's two domed churches in Venice are San Giorgio Maggiore (1565–1610) and Il Redentore (1577–92), the latter built in thanksgiving for the end of a bad outbreak of plague in the city. The spread of the Renaissance-style dome outside of Italy began with central Europe, although there was often a stylistic delay of a century or two.
South Asian domes
Hemispherical rock-cut tombs appear to imitate in stone the early bamboo- or timber-roofed domed huts with central poles known from the pre-Buddhist period. Examples include the Sudama cave (3rd century BC) in Bihar, a similar domed chamber at Cannanora in Malabar, and a cave at Guntpalle (1st century BC). A rock-cut hemispherical chamber at Manappuram in Kerala retained a thin central pillar with no structural function. The hemispherical shape of Buddhist stupas, likely refined forms of burial mounds, may also reflect earlier wooden dome-roof construction, such as at Ghantasala.
Islamic rule over northern and central India brought with it the use of domes constructed with stone, brick and mortar, and iron dowels and cramps. Centering was made from timber and bamboo. The use of iron cramps to join together adjacent stones was known in pre-Islamic India, and was used at the base of domes for hoop reinforcement. The synthesis of styles created by this introduction of new forms to the Hindu tradition of trabeate construction created a distinctive architecture. Domes in pre-Mughal India have a standard squat circular shape with a lotus design and bulbous finial at the top, derived from Hindu architecture. Because the Hindu architectural tradition did not include arches, flat corbels were used to transition from the corners of the room to the dome, rather than squinches. In contrast to Persian and Ottoman domes, the domes of Indian tombs tend to be more bulbous.
The earliest examples include the half-domes of the late 13th century tomb of Balban and the small dome of the tomb of Khan Shahid, which were made of roughly cut material and would have needed covering surface finishes. Under the Lodi dynasty there was a large proliferation of tomb building, with octagonal plans reserved for royalty and square plans used for others of high rank, and the first double dome was introduced to India in this period. The first major Mughal building is the domed tomb of Humayun, built between 1562 and 1571 by a Persian architect. The central double dome covers an octagonal central chamber about 15 meters wide and is accompanied by small domed chhatris made of brick and faced with stone. Chhatris, the domed kiosks on pillars characteristic of Mughal roofs, were adopted from their Hindu use as cenotaphs. The fusion of Persian and Indian architecture can be seen in the dome shape of the Taj Mahal: the bulbous shape derives from Persian Timurid domes, and the finial with lotus leaf base is derived from Hindu temples. The Gol Gumbaz, or Round Dome, is one of the largest masonry domes in the world. It has an internal diameter of 41.15 meters and a height of 54.25 meters. The dome was the most technically advanced built in the Deccan. The last major Islamic tomb built in India was the tomb of Safdar Jang (1753–54). The central dome is reportedly triple-shelled, with two relatively flat inner brick domes and an outer bulbous marble dome, although it may actually be that the marble and second brick domes are joined everywhere but under the lotus leaf finial at the top.
Early modern period domes
In the early sixteenth century, the lantern of the Italian dome spread to Germany, gradually adopting the bulbous cupola from the Netherlands. Russian architecture strongly influenced the many bulbous domes of the wooden churches of Bohemia and Silesia and, in Bavaria, bulbous domes less resemble Dutch models than Russian ones. Domes like these gained in popularity in central and southern Germany and in Austria in the seventeenth and eighteenth centuries, particularly in the Baroque style, and influenced many bulbous cupolas in Poland and Eastern Europe in the Baroque period. However, many bulbous domes in eastern Europe were replaced over time in the larger cities during the second half of the eighteenth century in favor of hemispherical or stilted cupolas in the French or Italian styles.
The construction of domes in the sixteenth and seventeenth centuries relied primarily on empirical techniques and oral traditions rather than the architectural treatises of the times, which avoided practical details. This was adequate for domes up to medium size, with diameters in the range of 12 to 20 meters. Materials were considered homogeneous and rigid, with compression taken into account and elasticity ignored. The weight of materials and the size of the dome were the key references. Lateral tensions in a dome were counteracted with horizontal rings of iron, stone, or wood incorporated into the structure.
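A modern aside makes the role of these tension rings explicit. In membrane shell theory (a later analytical result, not a tool available to early modern builders), a thin hemispherical dome of radius $a$ under self-weight $q$ per unit of surface area carries the stress resultants

    N_\varphi = -\frac{qa}{1+\cos\varphi}, \qquad N_\theta = qa\left(\frac{1}{1+\cos\varphi} - \cos\varphi\right),

where $\varphi$ is measured from the crown. The meridional resultant $N_\varphi$ is compressive everywhere, but the hoop resultant $N_\theta$ changes sign where $\cos^2\varphi + \cos\varphi - 1 = 0$, i.e. at $\varphi \approx 51.8^\circ$; below that level the horizontal rings of the dome are in tension, which is exactly where the hoops of iron, stone, or wood were placed.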
Over the course of the seventeenth and eighteenth centuries, developments in mathematics and the study of statics led to a more precise formalization of the ideas of the traditional constructive practices of arches and vaults, and there was a diffusion of studies on the most stable form for these structures: the catenary curve. Robert Hooke, who first articulated that an arch stands as an inverted hanging chain, may have advised Wren on how to achieve the crossing dome of St. Paul's Cathedral. Wren's structural system became the standard for large domes well into the 19th century. The ribs in Guarino Guarini's San Lorenzo and Cappella della Sindone were shaped as catenary arches. The idea of a large oculus in a solid dome revealing a second dome originated with him. He also established the oval dome as a reconciliation of the longitudinal plan church favored by the liturgy of the Counter-Reformation and the centralized plan favored by idealists. Because of the imprecision of oval domes in the Rococo period, drums were problematic, and the domes instead often rested directly on arches or pendentives.
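Hooke's principle admits a compact modern statement: a uniform flexible chain hangs in the catenary

    y = a \cosh\frac{x}{a}, \qquad a = \frac{H}{w},

where $H$ is the horizontal component of the tension and $w$ the weight per unit length; inverted, the same curve is a line of pure compression, so an arch or dome rib shaped to it carries its own weight without bending. (Hooke himself published the principle only as an anagram; the explicit equation is the later, standard formulation.)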
In the eighteenth century, the study of dome structures changed radically, with domes being considered as a composition of smaller elements, each subject to mathematical and mechanical laws and easier to analyse individually, rather than being considered as whole units unto themselves. Although never very popular in domestic settings, domes were used in a number of 18th century homes built in the Neo-Classical style. In the United States, most public buildings in the late 18th century were only distinguishable from private residences because they featured cupolas.
Modern period domes
The historicism of the 19th century led to many domes being re-translations of the great domes of the past, rather than further stylistic developments, especially in sacred architecture. New production techniques allowed cast iron and wrought iron to be produced both in larger quantities and at relatively low prices during the Industrial Revolution. Russia, which had large supplies of iron, has some of the earliest examples of iron's architectural use. Excluding those that simply imitated multi-shell masonry, metal-framed domes such as the elliptical dome of the Royal Albert Hall in London (57 by 67 meters across) and the circular dome of the Halle au Blé in Paris may represent the century's chief development of the simple domed form. Cast-iron domes were particularly popular in France.
The practice of building rotating domes for housing large telescopes was begun in the 19th century, with early examples using papier-mâché to minimize weight. Unique glass domes springing straight from ground level were used for hothouses and winter gardens. Elaborate covered shopping arcades included large glazed domes at their cross intersections. The large domes of the 19th century included exhibition buildings and functional structures such as gasometers and locomotive sheds. The "first fully triangulated framed dome" was built in Berlin in 1863 by Johann Wilhelm Schwedler and, by the start of the 20th century, similarly triangulated frame domes had become fairly common. Vladimir Shukhov was also an early pioneer of what would later be called gridshell structures and in 1897 he employed them in domed exhibit pavilions at the All-Russia Industrial and Art Exhibition.
Domes built with steel and concrete were able to achieve very large spans. In the late 19th and early 20th centuries, the Guastavino family, a father-and-son team who worked on the eastern seaboard of the United States, further developed the masonry dome, using tiles set flat against the surface of the curve and fast-setting Portland cement, which allowed mild steel bar to be used to counteract tension forces. The thin domical shell was further developed with the construction by Walther Bauersfeld of two planetarium domes in Jena, Germany, in the early 1920s. They consisted of a triangulated frame of light steel bars and mesh covered by a thin layer of concrete. These are generally taken to be the first modern architectural thin shells, and they are also considered the first geodesic domes. Geodesic domes have since been used for radar enclosures, greenhouses, housing, and weather stations. Architectural shells had their heyday in the 1950s and 1960s, peaking in popularity shortly before the widespread adoption of computers and the finite element method of structural analysis.
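The geometry underlying such triangulated domes can be sketched in a few lines of code. The following minimal Python script, offered purely as an illustration (it reflects no particular builder's method), generates the node points of a frequency-2 geodesic sphere by splitting each edge of an icosahedron and projecting the midpoints onto the circumscribed sphere:

    import itertools
    import math

    PHI = (1 + math.sqrt(5)) / 2  # golden ratio

    # The 12 vertices of an icosahedron (edge length 2, up to scaling).
    verts = [(0, 1, PHI), (0, -1, PHI), (0, 1, -PHI), (0, -1, -PHI),
             (1, PHI, 0), (-1, PHI, 0), (1, -PHI, 0), (-1, -PHI, 0),
             (PHI, 0, 1), (PHI, 0, -1), (-PHI, 0, 1), (-PHI, 0, -1)]

    def normalize(p):
        """Project a point onto the unit sphere."""
        r = math.sqrt(sum(c * c for c in p))
        return tuple(c / r for c in p)

    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    # Icosahedron edges are the vertex pairs at the minimal distance.
    edge_len = min(dist(p, q) for p, q in itertools.combinations(verts, 2))
    edges = [(p, q) for p, q in itertools.combinations(verts, 2)
             if abs(dist(p, q) - edge_len) < 1e-9]

    # Frequency-2 subdivision: add each edge midpoint, pushed out to the sphere.
    points = {normalize(v) for v in verts}
    for p, q in edges:
        mid = tuple((a + b) / 2 for a, b in zip(p, q))
        points.add(normalize(mid))

    print(f"{len(edges)} icosahedron edges, {len(points)} node points on the sphere")
    # Expected: 30 edges, 12 + 30 = 42 nodes for a frequency-2 sphere.

Dividing each edge into more segments (a higher frequency) gives the finer triangulation used for large radomes and pavilion domes.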
The first permanent air-supported membrane domes were the radar domes designed and built by Walter Bird after World War II. Their low cost eventually led to the development of permanent versions using Teflon-coated fiberglass, and by 1985 the majority of the domed stadiums around the world used this system. Tensegrity domes, patented by Buckminster Fuller in 1962, are membrane structures consisting of radial trusses made from steel cables under tension with vertical steel pipes spreading the cables into the truss form. They have been made circular, elliptical, and other shapes to cover stadiums from Korea to Florida. Tension membrane design has depended upon computers, and the increasing availability of powerful computers resulted in many developments during the last three decades of the 20th century. The higher expense of rigid large-span domes made them relatively rare, although rigid moving panels are the most popular system for sports stadiums with retractable roofing.
| Technology | Architectural elements | null |
167364 | https://en.wikipedia.org/wiki/Millstone | Millstone | Millstones or mill stones are stones used in gristmills for triturating, crushing or, more specifically, grinding wheat or other grains. They are sometimes referred to as grindstones or grinding stones.
Millstones come in pairs: a stationary base with a convex rim known as the bedstone (or nether millstone) and a concave-rimmed runner stone that rotates. The movement of the runner on top of the bedstone creates a "scissoring" action that grinds grain trapped between the stones. Millstones are constructed so that their shape and configuration help to channel ground flour to the outer edges of the mechanism for collection.
The runner stone is supported by a cross-shaped metal piece (millrind or rynd) fixed to a "mace head" topping the main shaft or spindle leading to the driving mechanism of the mill (wind, water (including tide), or other means).
History
The origins of an industry
Often referred to as the "oldest industry", the use of the millstone is inextricably linked to human history. Integrated into food processes since the Upper Palaeolithic, its use remained constant until the end of the 19th century, when it was gradually replaced by a new type of metal tool. However, it can still be seen in rural domestic installations, such as in India, where 300 million women used hand mills daily to produce flour in 2002.
The earliest evidence for stones used to grind food is found in northern Australia, at the Madjedbebe rock shelter in Arnhem Land, dating back around 60,000 years. Grinding stones or grindstones, as they were called, were used by the Aboriginal peoples across the continent and islands, and they were traded in areas where suitable sandstone was not available in abundance. Different stones were adapted for grinding different things and varied according to location. One important use was for foods, in particular to grind seeds to make bread, but stones were also adapted for grinding specific types of starchy nuts, ochres for artwork, plant fibres for string, or plants for use in bush medicine, and are still used today. The Australian grindstones usually comprise a large flat sandstone rock (for its abrasive qualities), used with a top stone, known as a "muller", "pounder", or pestle. The Aboriginal peoples of the present state of Victoria used grinding stones to crush roots, bulbs, tubers, and berries, as well as insects, small mammals, and reptiles before cooking them.
In ancient history
Careful examination of Paleolithic grinders (pebbles, wheels, mortars and pestles, etc.) enables researchers to determine the nature of the action exerted on the material and the gesture performed; the function of the tool can then be specified, as well as the activity in which it was used.
Neanderthal people were already using rudimentary tools to crush various substances, as attested by the presence of crude grinders at the end of the Mousterian and millstones in the Châtelperronian. From the Aurignacian period onwards (around 38,000 years ago), Cro-Magnon man regularly used millstones, elongated grinders, and circular wheels. From the Gravettian period onwards (circa 29,000 years ago), this equipment became more diversified, with the appearance of new types of tools such as millstones and pestle-grinders.
At the end of the Palaeolithic, millstones from Wadi Kubbaniya (Middle East, 19,000 BC) were involved in dietary processes and associated with residues of tuberous plants, which were known to require grinding before consumption, either to extract their toxins (Cyperus rotundus, nutsedge) or to remove the fibrous texture that would make them indigestible (Scirpus maritimus). The rhizomes of ferns and the peel of the fruit of the doum palm, also found on this site, benefited from being ground to improve their nutritional qualities; they thus complemented the meat diet of hunter-gatherers. Grinding barley or oat seeds was practiced at the end of the Upper Palaeolithic (Franchthi) or the Kebarian (Ohalo II, 19,000 BC).
As tools improved, the material was increasingly finely ground, but only when it was reduced to a true powder can one speak of milling. The people of the European Upper Paleolithic were thus already dissociating grinding and milling, as attested by the appearance at this time of the first grinding slabs used with grinders or millstones. While there is no evidence of the milling of wild cereals in the early Upper Paleolithic, at least in Europe, there is no reason to doubt that other plant matter (acorns, nuts, hazelnuts, etc.) and animal matter (fat) were already being ground into paste before cooking. Similarly, it is likely that millstones were being used at this time for technical purposes, to crush mineral substances (dyes) and certain plant or animal fibers.
In the Mesolithic and Neolithic eras, with the domestication of plants, much larger, fully formed grinding, pounding, and milling equipment appeared. From the Natufian onwards, several types of millstones can be found side by side, such as the deep "trough-shaped" millstone or the flat millstone, indicating a specialization of their function. In the Near East, the pestle-grinder began to be developed in the Kebarian and Natufian periods. It gradually evolved into the heavy, generally wooden, thrown pestle. This type of equipment is still used today in many regions, such as in Ethiopia for milling millet.
The appearance of flat, elongated millstones in the Natufian period (Abu Hureyra on the Euphrates) dates back to the 9th millennium BC. They feature larger active surfaces and mark the emergence of a new gesture, that of grinding from front to back with both hands, which implies a new posture for the body, kneeling in front of the millstone. The appearance of large, asymmetrical, shaped millstones (Mureybet, Sheikh Hassan, circa 10,000 BP) led to the "saddle-shaped" millstones still known today as the metate.
In the rest of the world
At the Tell Abu Hureyra archaeological site, as early as the 8th millennium BC, women's skeletons show traces of osteoarthritis in the knees, spinal deformity and deformation of the first metatarsal, pathologies associated with long periods of bending while grinding, supporting the theory that early humans practiced a sexual division of labor.
In India, millstones (chakki) were used to grind grains and spices. These consist of a stationary stone cylinder upon which a smaller stone cylinder rotates. Smaller ones, for household use, were operated by two people; larger ones, for community or commercial use, used livestock to rotate the upper cylinder. Today a majority of the stone flour mills (Atta Chakki) are equipped with millstones in which the lower stone rotates and the upper stone is stationary, also called Shikhar Emery Stones, which are made from abrasive emery grains and grits with a binding agent similar to Sorel cement. These stones are made from two types of emery abrasives: Natural Jaspar Red Emery or Synthetic Calcined Bauxite Black Emery.
In Korea, there were three different types of millstones, each made from different materials and serving different purposes, such as threshing, grinding, and producing starch. Generally, the handle of a Korean millstone was made from the ash tree, known in Korean as "Mulpure-namu". To ensure that everything was "all right" with the creation of a millstone, a mason in ancient Korea offered food and alcohol in a ritual.
Millstones were introduced to Britain by the Romans during the 1st century AD and were widely used there from the 3rd century AD onwards.
In 1932-1933 in Ukraine, during the man-made famine known as Holodomor, the Soviet authorities prohibited the use of millstones, claiming that a millstone is a "mechanism for enrichment" (which was a negative term in Soviet communist ideology). This forced Ukrainian villagers to hide their manually-operated millstones and use them secretly during the famine. In response, Soviet authorities regularly searched villages for "illegal" millstones and destroyed them. In 2007, the people of Victorivka village in Cherkasy Oblast built a monument using the millstones they had managed to hide and save from the Soviet plunder during the Holodomor.
Different techniques: grinding, crushing, milling
The preparation of vegetable products (roots, tubers, almonds, leaves, etc.), animal products (marrow, tendons, etc.), or mineral products (ochre) by grinding or milling, for consumption or technical use, has existed for several dozen millennia. Unlike crushing, in which a hard envelope such as a shell or bone is broken open to recover its contents, in this case, the aim is to reduce a much softer material to a powder or paste.
Depending on the place and time, millstones were used for "dry" grinding: in the manufacture of flour, sugar, or spices, but also for the preparation of kaolinite, cement, phosphate, lime, enamel, fertilizer, and other minerals. The milling operation can also be carried out "wet", as in the case of durum wheat semolina, nixtamal, or the grinding of mustard seeds. During preparation, some raw materials produce a naturally fluid paste, as in olive crushing or cocoa grinding.
In his typology of percussion, André Leroi-Gourhan defines several families of gestures, three of which are essential for the preparation of raw materials:
Crushing gestures involve vertical percussion using a heavy, elongated object in the manner of the African pestle. This gesture is also used by the trip hammer to make paper pulp, or in forging;
Milling gestures, using percussion, performed in a circular, disordered, or back-and-forth motion on a millstone;
Grinding gestures, in which the movements are roughly circular and occasionally vertical, thus combining thrown and posed percussion, are described here as diffuse. This is the case with the contemporary mortar-and-pestle system.
Milling systems
Until the invention of the watermill, mills were "strength-powered", i.e. operated by the force of animals or people.
The metate
The metate is a nether millstone for domestic use, for grinding corn. It has been used for several thousand years (around 3000 BC) in the cultural area of Mesoamerica, and its name comes from the Nahuatl "metatl".
Today's metates are monolithic, usually made of basalt, apodous or tripod, rectangular, and slightly concave on the grinding surface. These millstones are used with a two-handed grinder called a "mano", whose length generally exceeds the width of the metate and which is driven in an alternating rectilinear motion. On tripod metates, one of the legs is slightly higher than the other two, giving the whole unit an inclination; the user stands in front of the highest part.
The manufacture of metates was essentially a male occupation. In pre-Hispanic times, makers used only stone tools, a practice that persisted in some villages until the mid-20th century. The use of metal tools, probably inherited from building stonemasons, made it possible to work the hardest basalts, resulting in millstones with a lifespan of over thirty years. While the manufacture of apodous millstones from blocks of stone naturally polished in a riverbed was once within the reach of many farmers, the production of tripod metates requires specialized craftsmanship.
Grinding plays a key role in Mexican cuisine. Dry grinding is possible, but very few preparations are produced this way: roasted coffee, roasted corn or beans, salt, sugar loaves, and cocoa are ground into powder. Most preparations require grinding with water. Fruits are ground into juices; beans, boiled vegetables, and other ingredients are ground into various spicy sauces; and, above all, corn is ground to make the tortillas that form the basis of every meal. The latter are made from nixtamal, i.e. dry corn kernels cooked with lime and then rinsed with water, which softens the kernels and produces a paste. Maize or nixtamal can also be ground for preparations other than tortillas: tamales, pozole, atole, pinole, and masa, with variations in the fineness of the grind depending on the use.
The metate was used exclusively by women, and in Mixtec lands, the place where the millstone is located was a space reserved for women. A couple often acquires, or is given, a millstone when they set up home. This acquisition represents a major expense in the life of a Mixtec peasant, as evidenced by the wills of nobles and wealthy peasants from the 16th to 18th centuries, which included metates.
Daily tortillas are made from sufficiently moistened corn dough, which, unlike flour, cannot be preserved. This technical characteristic no doubt explains why domestic metates were not replaced centuries ago by mills, as they were in Europe; the Spanish conquest did not replace tortillas with bread, quite the contrary. During the wars of the 19th century and the Mexican Revolution of 1910, Mexican armies were accompanied by women and metates to keep the troops provisioned. At the end of the 19th century, the owners of the large plantations introduced motorized corn mills, which freed up female labor for the fields. From 1920 onwards, electric mills appeared in the countryside, owned by municipalities, cooperatives, or private individuals. Nevertheless, metates remain in use today and are still part of Mexico's rural heritage.
The Olynthus mill
The town of Olynthus was destroyed in 348 BC by Philip II of Macedon, and its name has come to be attached to this type of mill ("Olynthus millstone", "Olynthus grinder", "Olynthus mill"), which represented a genuine technical revolution. In 1917, the Greek archaeologist Konstantinos Kourouniotis elucidated the workings of the hopper millstone, which had played an important role in ancient Greece.
In the Olynthus mill, the nether millstone (4) is rectangular and rests on a table (5); it measures between 0.42 m and 0.65 m in length, 0.36 m to 0.54 m in width, and 0.08 m to 0.25 m in thickness. The grinder, which forms the upper or runner millstone (3), is usually rectangular, sometimes oval, with a central hopper parallel to the long sides designed to receive the grain to be ground. The mill is capped by a horizontal lever attached to a pivot (1) on one side of the table, the other end being operated by a worker who moves the lever (2) back and forth horizontally. The Olynthus mill thus shows the beginnings of mechanization, with millers now working upright, making the work easier.
This type of mill certainly appeared as early as the beginning of the 5th century BC. Its use was widespread throughout the Greek world in the 4th century BC, from Macedonia to the Peloponnese, and it was adopted as far afield as the islands of Anatolia, Egypt, and modern-day Syria. It continued in use into the 1st century BC, and sometimes even later, as the excavations at the Agora in Athens suggest. The importance of this mill type for the Greek world was confirmed by the discovery, in 1967, of 22 hopper mills in the cargo of a ship wrecked off Kyrenia, dated to the end of the 4th century BC. Increasing demand undoubtedly led to standardization in manufacturing and specialization of production centers. For example, flat Argolidian millstones, made of andesite and rhyolite, were produced from local quarries (Isthmus of Corinth, Saronic Gulf), while grinders came from more distant quarries (the islands of Nisyros and Milos).
The use of this type of mill was not limited to grinding cereals, as the finds from Thasos or Lavrio suggest: it was also used to grind ore, so as to calibrate it for subsequent selection by washing. It may even have appeared in the mines of the Pangaion Hills. The text by Agatharchides on the gold mines of Egypt in the 1st century BC, transmitted by Photius and Diodorus, mentions a mill with a lever:
"Women and older men then receive this ore crushed to the size of peas, throw it into the millstones, in numerous lines, two or three people standing at each lever and grind it." Photius' version specifies "on either side" of the lever.
The rotating millstone
It is also known as a "hand millstone", "arm millstone", or "moulinet", and in Latin as "molendinum bracchis" or "molendinum manuale".
According to de Barry, the oldest circular stone millstone was unearthed in the ruins of the town of Olynthus: it was the millstone of an oil mill, not a flour mill. Historians Marie-Claire Amouretti and Georges Comet point out that these millstones pre-date the earliest known examples of circular grain mills. So it was probably through oil production that the first rotary crushing machine was introduced. Cereals and other fruits and seeds followed.
The oldest rotating millstones are thought to have originated in Spain around 2,500 years ago (600–400 BC). It seems that the rotating millstone spread from Spain at the end of the 5th century BC, and that it was directly derived from attempts to perfect the Olynthus mill. André Leroi-Gourhan states that "the transformation of rectilinear reciprocating motion into circular-continuous motion leads to another form of milling". Authors disagree on its geographical origin, which some place "towards Carthage and the Syrian-Egyptian region" and others "simultaneously in Spain and England" (Rafael Frankel, "The Olynthus Mill, Its Origin, and Diffusion: Typology and Distribution", American Journal of Archaeology, vol. 107, no. 1, 2003); the mill is also attested in China in the 1st century BC. According to L.A. Moritz, the rotating grain mill only appeared in the first century BC. He bases his demonstration on Latin texts, in particular those of Plautus and Cato, and places the introduction of this type between the death of Plautus in 184 BC and the composition of De agri cultura, around 160 BC.
Several types of mills can be identified in Europe, depending on the morphology of the millstones used in these manually operated rotary mills.
The Celtic mill is made up of massive millstones, with a conical external profile and virtually flat active stone surfaces.
In Dacia, between the 1st century BC and the 1st century AD, the Celtic mill evolved into an intermediate type with two superimposed, integrated millstones featuring a three-lobed feed opening. The more sharply tapered inner surfaces of the millstones ensured that the grain flowed more quickly under the effect of gravity, but the quality of the flour obtained remained mediocre, and the effort required to operate the runner millstone was considerable. The profile of these millstones made them more difficult to cut, imposed a standardization of the millstones, and explains their diffusion and persistence in a given region. Some examples feature flatter stones, with a much reduced taper, which reduces the stone mass. The speed of rotation became higher, providing a greater gyroscopic effect, but also requiring a system of claws, fixed with molten lead on the upper side of the movable stone, to hold it in place around the pivot.
Romanization led to the widespread use of hand mills, which were perfected by increasing their diameter and reducing their height and weight. The profile of the millstones became flatter, and a number of improvements were introduced, such as an upper wedge to center the movable stone on the pivot. A device for adjusting the distance between the millstones (the anille) also appeared, enabling grinding quality to be controlled (1st century BC), and radial grooves cut into the millstone could accentuate the natural abrasiveness of the stone. Later developments, such as the double lever or a crank fixed to the center of the millstone (14th–15th centuries), meant that this type of hand mill remained in use in the countryside until the 20th century.
Because they wear more quickly, this type of millstone requires a stricter selection of stones, among which basalt has a privileged place. Most of the stone used in Roman times seems to have come from just a few quarries. In France, millstones from Cap d'Agde supplied Languedoc and Provence; further north, quarries from the Massif Central (Volvic) supplied a vast territory stretching from Aquitaine to the Helvetic valleys; finally, from the Saône valley to the German border, millstones came mainly from Eifel quarries (Mayen).
In Europe as a whole, the hand mill remained the main milling method until the end of Antiquity and then throughout the Middle Ages; it gave way only slowly to the advances of the watermill and, later, the windmill.
The Pompeian mill or "blood" mill
With a diameter limited to the reach of an arm's movement, i.e. 40 to 70 cm, the hand mill could only produce a limited quantity of flour and was therefore essentially reserved for domestic use. By increasing the diameter and, above all, the height of the meta (nether millstone) and the catillus (runner millstone), the Romans were able to overcome this constraint with the animal-drawn Pompeian mill, also known as the "blood" mill.
In this mill, the nether millstone is conical at the top and the runner millstone is shaped like an hourglass, its lower half covering the conical top of the nether millstone. The upper part of the runner millstone acts as a funnel, and a slight gap is maintained between the two millstones. The runner pivots around a wooden axle embedded in the nether millstone, and it is this suspension on the axle that ensures the gap between the two stones. This type of mill could be powered either by two or four men or by animals walking in a circle, hence its name mola asinaria, literally "donkey mill".
An example of this type of millstone can be found as early as the Classical era, used to grind ore in the Laurion mines, although it did not supplant the less efficient reciprocating millstone. Despite its qualities, it did not really spread throughout the Roman world until later. Such mills were found throughout the Mediterranean basin, but never in very large numbers, except in Italy. Their very high cost (1,250 denarii in the Late Roman period, compared with 250 denarii for hand millstones) meant that they were used only by millers and bakers. In Gaul, such millstones are known from Lyon, Saint-Raphaël, Paris, Amiens, and Clermont-Ferrand, all fashioned from basalt from the Eifel, Volvic, or Cap d'Agde.
During Late Antiquity, the donkey mill retreated, probably disappearing after the 5th century as a result of the expansion of the watermill and then the windmill, except in Sardinia, where it remained in use until the 20th century.
The Roman trapetum
The Hellenistic period also saw the appearance of the olive crusher, which the Romans called the trapetum. Legend has it that it was invented by Aristaeus, and excavations at Olynthus have revealed examples dating back to the 5th century BC.
The trapetum was precisely described by Cato the Elder, who recorded the technical names of all its parts. Excavations at Stabiae, Pompeii, the villa at Boscoreale, and in Roman Africa show that the system was widely used in ancient Rome and disappeared with it.
The trapetum consists of two plano-convex millstones (3, orbes) standing vertically, carried on a horizontal axis that rotates around a vertical pivot (1, columella). This pivot rests on a short stone column (milliarium) at the center of the nether element, a large hemispherical stone vat (4, mortarium), whose walls follow the external profile of the two orbes. The orbes move in a circular motion inside the mortarium, set in motion by two wooden handles (2, modioli). Wooden wedges (orbiculi) inserted between the milliarium and the columella are used to adjust the height of the orbes above the bottom of the vat. In this system, the olives are not crushed under the millstones but between the millstones and the sides of the vat. As in the previous model, a gap is maintained between the two millstones. The resistance offered by the fruit forces the stone half-spheres to turn slightly on their axis; the two movements combine, and pressure is exerted only moderately, without breaking the olive pits, which would give the oil a bad taste. The resulting pulp could then be pressed to collect the oil.
Millstones of southern Morocco
A melting pot of African, Eastern and Mediterranean civilizations, Morocco has preserved tools and techniques from different eras.
The Volubilis site, located in the Roman province of Mauretania Tingitana (northern Morocco), features grain and olive mills from the Roman period (1st–2nd century). These mills consist of a truncated-cone-shaped nether millstone and a convex grinding ring to which the wooden machinery was connected, apparently operated without the aid of animal power. In this arrangement, the grinding ring is fitted onto the nether millstone. The Volubilitan olive millstone differs from the grain millstone in having oblique striations on the truncated surface of the nether millstone and on the inside of the grinding ring. Columella asserts that, to extract the oil, the millstones (molae) are more useful than the crusher (trapetum), as they can be lowered or raised according to the size of the fruit, so as to avoid crushing the pit.
A second type of olive mill can be found on the same site, consisting of a monolithic vat on which a fluted drum, like the section of a column, turns around a vertical mast. This type of mill is more common and can be found on many sites, even in recent times.

The argan tree is a woodland species endemic to southwest Morocco, and the argan hand mill is found wherever the tree grows. It is a stone hand mill used to grind roasted argan kernels and almonds.
It stands out from the grain mill by the truncated-cone shape and greater height of its runner millstone (agurf wuflla), as well as by the presence of a spout (abajjr or tilst) and a pouring lip (ils) on the nether millstone (agurf u wadday). At the center of the nether millstone is a short pivot (tamnrut) made of argan wood, around which rotates the upper millstone, pierced by an eye (tit n tzrgt) into which one or two handfuls of kernels are inserted. The circular movement is interrupted to remove the ground kernels after the millstone has been lifted. The whole unit can be raised on stones joined together in a "bakehouse"-style structure, allowing embers or argan shells to warm the unit and thus facilitate grinding in winter.
Chronology of milling systems
Mortars and pestles have survived the centuries, predominating for barley in Greece, emmer in Italy, and millet in Africa. They slowly became marginal in some regions, but did not disappear. In classical times, they were still widely represented in Greece and were still used for hulling cereals, even though the advent of adjustable millstones now made it possible to grind them instead. The advance of naked wheats, particularly common wheat, in Italy and Egypt made them less useful, but they were still mentioned in the Late Roman Empire, in Roman Egypt, and in the monastic rule of Saint Isidore. With the arrival of maize, they came back into use in certain regions.
A first typology of milling systems can be drawn up according to the driving force used; a complementary approach will look at the social context in which the mechanism is implemented.
According to Diocletian's edict, the "blood" mill cost six times more than the hand mill, and the watermill eight times more. The watermill therefore competed mainly with the "blood" mill, and took almost three centuries to supplant it. This was also the time it took the "blood" mill to supplant the hopper mill, and the hopper mill to supplant the flat millstone.
It seems that the watermill originated in the Eastern Mediterranean. An inscription from the Phrygian town of Orcistus, which praised the advantages of its site in order to retain its privileges, states that it possesses "thanks to the slope of the waters flowing through it, a large number of watermills". At the beginning of the Christian era, the watermill was still a novelty in the western Mediterranean, and Vitruvius classed it with irrigation machines. This type of mill proved ill-suited to the design of Pompeian millstones. In Caligula's time, "blood" mills were still dominant, as Apuleius describes. Over the course of the 1st and 2nd centuries, the watermill slowly spread to a wide variety of provinces: Britain, Gaul, and Africa, where the rotary millstone was often more widespread than the Pompeian mill. Over the course of the 4th century, the watermill slowly replaced the "blood" mill in Rome itself and became the predominant mill there. While there were some spectacular achievements in cities, such as the Barbegal mill in Arles, the watermill seems to have spread more slowly to rural villas, as Palladius indicates.
It is not known with certainty how the Greeks processed their cereals between the 1st and 4th centuries. The "blood" mill was undoubtedly widespread, as attested by the legend of Lucius' donkey, taken up by Lucian of Samosata and Apuleius. The coexistence of several types of milling seems to have been the rule in the Aegean world, and the classification set down in Diocletian's edict of 301, which established three types of mills (hand, blood, and water), can still be found in the Byzantine rural code in the 6th century, and even in travellers' accounts in the 17th century.
In the Mediterranean, watermills, which depended on the water supply, made headway especially where they had a complement to offset the vagaries of the dry season. In this context, windmills, as a complement to watermills, undoubtedly contributed to their spread as early as the 11th century in regions such as Provence and the Greek islands.
Other milling systems
Assembled millstones
In a study of millstones in Flanders from the Middle Ages to the French Revolution, Jean Bruggeman points out that medieval millstones were always monolithic, that black basalt stones were still monolithic in later centuries, and that white stones remained so until the 18th century. However, bedstones ("gisantes") were sometimes made up of several irregularly shaped pieces. These were bound with plaster, encased in an iron or wooden casing, and sometimes placed on a bed of cemented bricks.
In fact, the invention of millstones made of pieces, i.e. an assembly of several stones or tiles, remains difficult to date precisely.
In the 15th century, the river trade passing through Paris was strictly controlled by the hanse of the water merchants; "French companies" had to declare to its clerks the names of their partners, the city of destination, and the nature and value of the cargo. Thus, on May 3, 1452, a Rouen merchant named Robert Le Cornu declared that he was bringing to Normandy one or more boats loaded with 35 millstones, 5 blinkers, 100 carreaux (millstone tiles), and a tombstone.
Various texts provide clues to the manufacture of millstones in the 17th century. On March 10, 1647, Jacques Vinault "sold 3 rounds of grinding stone" to Pierre Bailly. On March 26, 1652, another text evokes the difficulties of a millstone assembly site, noting a "lack of wood to fire the plaster, which is not in sufficient quantity to plaster and set in the places where it is needed, joined also to the fact that there is stone enough to make the millstones". On July 7, 1680, Sr Delugré "made a deal with Claude Duvau and Jullien Boullmer, stone molders [...] to supply them with 2 molds of molding stone and plaster to make the millstones [...] made and perfect for making flour".
According to Dorothée Kleinmann, "economic milling" and its improvements really took off at the end of the 18th century. This led to the development of stone quarrying and millstone production in new regions such as Cinq-Mars-la-Pile and Domme, where "millstones are always formed by joining several pieces together; there are no blocks large enough to make masses from a single piece". In these locations, it seems that at the beginning of the eighteenth century millstone was not yet quarried; makers preferred to salvage scattered blocks from woods, fields, and vineyards, which sometimes considerably increased their value.
Once the millstone blocks had been transported to the site and "peeled", the manufacturer selected the stones required for the millstone. The different pieces were classified according to their quality, taking into account hardness, grain, porosity, and color. At this stage, the milling system used in the country of destination and the type of wheat produced in the region also had to be taken into account. Once the choice had been made, production began with the center or "boitard", usually made in one piece. This had to be very solid, especially for the runner millstone, as it is at this level that the casing from which the millstone is suspended is fixed. Around the boitard, the tiles were arranged, fixed with plaster or cement, and chiseled to fit together closely. A millstone of this type is generally made up of two to six quarters. "When the job is done and the blocks match, the worker adjusts them by cementing them with Portland cement, sometimes with a paste of Spanish white and oil that hardens with age, and clamps the whole with iron hoops." On the reverse of the working surface, the back of the millstone, or "counter-molding", was surrounded by a strip of sheet metal serving as temporary formwork. To give the millstone the necessary weight and thickness, it was reloaded with small stones embedded in fine concrete, into which were inserted cast-iron balancing boxes that could be filled with lead if necessary.
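The final balancing step lends itself to a simple statics estimate. The Python sketch below is illustrative only, with invented numbers: for static balance, the lead added to a balance box at radius r must produce a moment about the spindle equal and opposite to that of the stone's off-center mass.

    # Illustrative static-balance estimate for a runner millstone.
    # All numbers below are hypothetical examples, not historical data.
    M = 1000.0   # mass of the runner stone, kg (assumed)
    e = 0.004    # eccentricity of its centre of mass, m (assumed)
    r = 0.50     # radius at which the balance box sits, m (assumed)

    # For static balance, the counterweight's moment about the spindle
    # must cancel the stone's: m * r = M * e.
    m = M * e / r
    print(f"Add about {m:.1f} kg of lead to the balance box opposite the heavy side")
    # -> Add about 8.0 kg of lead to the balance box opposite the heavy side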
Edge mill
Horizontal use of the millstone is generally associated with milling. When the millstone is "upright", i.e. on its edge, it is used for grinding, crushing, or milling operations. In this configuration, the runner millstone is fixed by its eye to a vertical mast located centrally on the nether millstone, which acts as a pivot. Depending on the size of the installation, and to maintain the verticality of the mast, its upper part may be attached to a beam overhanging the mill. The runner millstone is rotated either by human force or, more often, by an animal walking in a circle. In this way, the millstone is driven by a double movement, turning on itself while pivoting around the mast, as in the Roman trapetum. In this type of device, the nether millstone is monolithic or made up of a paved or even masonry surface. Depending on the product to be processed, it may be slightly concave, with a rim around the periphery to avoid dispersing the crushed material.
Materials
In common language, "millstone" refers to any type of rock that may have been used in a mill, whereas in the geological sense, true "millstone" is defined as a siliceous accident in a sedimentary basin.
The type of stone most suitable for making millstones is a siliceous rock called burrstone (or buhrstone), an open-textured, porous but tough, fine-grained sandstone, or a silicified, fossiliferous limestone. In some sandstones, the cement is calcareous.
On a historical scale, it seems that most types of rock have been used in milling. Among the sedimentary rocks of potential use are limestone and sandstone. The latter soon emerged as the stone of choice, with porosities that make it easy to shape and an extraction that can be facilitated by bedding between clay interbeds. It was not until the 15th century that millstones stricto sensu began to be quarried, a practice that became widespread in the 18th century.
Deep-lying magmatic rocks, such as granite, are widespread, but were ultimately little used for millstone manufacture, probably due to their low porosity and the presence of black mica, which rapidly alters to form iron oxides. Basalt was widely used in Germany (Eifel), but is not widespread in France, with the exception of the Évenos volcano in Provence; other examples include the basalt millstones of the Agde volcano, and those of the Sainte Magdeleine volcano at La Môle, not far from Cogolin.
Limestones are generally porous, with medium to low compressive strengths, so "classic" limestones seem to have been quickly abandoned in favor of better stones. Although very fine-grained, limestone polishes very quickly and needs to be re-cut frequently to keep the stones rough. Some sandstone limestones (Saint-Julien-des-Molières limestone) can have very good compressive strength (over 100 MPa).
Sandstone rocks (sandstones and microconglomerates up to 1 cm) are the preferred material for millstones. Analysis of production sites shows that they can be limestone-cemented sandstones, silica-cemented sandstones, or even slightly metamorphosed sandstones.
Limestone-cemented sandstones, such as Alpine molasses, are widespread. They have medium porosities (6 to 12%), medium compressive strength (35 MPa), often coarse grain size, and variable silica content.
A very good millstone is generally rich in silica: the higher the percentage, the stronger the rock, silica being the hardest common mineral at the Earth's surface. The same is true of sandstones with siliceous cement, where the percentage of silica is high because both the grains and the cement are siliceous in nature. However, such rocks do not necessarily make good millstones; Vosges sandstone, for example, has a rather fine grain and traces of iron.
Slightly metamorphosed sandstones often have very low porosity (around 2%) due to tectonic compression, resulting in somewhat compact sandstones. Compressive strength can be very high (over 100 MPa), as in the case of Arros sandstone, despite an average silica percentage.
Finally, millstones in the geological sense are porous stones, whose porosity plays a role not only in cutting but also, undoubtedly, in grinding. These include stones such as those from La Ferté-sous-Jouarre, with high porosity (20%), compressive strength of 80 MPa, and medium grain. Corfélix stones have exceptional compressive strength on the order of solid basalt (190 MPa), 98% silica, fairly coarse grain, and medium to high porosity.
In short, from the standpoint of rock mechanics, a good millstone has three fundamental characteristics:
insensitivity to alteration, whether through dissolution (gypsum), the action of moisture (as in the case of limestone), or the chemical action of water, as with granite mica or Vosges sandstone (presence of iron);
heterogeneity on a millimeter and centimeter scale, a quality that provides crushing asperities and evacuating channels, like hard punches held together by a slightly less hard but tenacious cement, and which is not generally a characteristic of limestone;
high porosity, which facilitates quarrying, since it is easier to introduce cutting tools into porous rock than into solid rock, and which also undoubtedly facilitates grinding.
The data cited above for sites used in the production of millstones can be summarized as follows:

    Rock type (example site)                          Porosity         Compressive strength   Notes
    Limestone-cemented sandstone (Alpine molasse)     6-12%            ~35 MPa                often coarse grain, variable silica content
    Sandstone limestone (Saint-Julien-des-Molières)   n/a              >100 MPa               very fine grain
    Slightly metamorphosed sandstone (Arros)          ~2%              >100 MPa               average silica percentage
    Millstone s.s. (La Ferté-sous-Jouarre)            ~20%             ~80 MPa                medium grain
    Millstone s.s. (Corfélix)                         medium to high   ~190 MPa               98% silica, fairly coarse grain
Millstones used in Britain were of several types:
Derbyshire Peak stones of grey Millstone Grit, cut from one piece, used for grinding barley; imitation Derbyshire Peak stones are used as decorative signposts at the boundaries of the Peak District National Park. Derbyshire Peak stones wear quickly and are typically used to grind animal feed since they leave stone powder in the flour, making it undesirable for human consumption.
French buhrstones, used for finer grinding. French Burr comes from the Marne Valley in northern France. The millstones are not cut from one piece, but built up from sections of quartz cemented together, backed with plaster and bound with shrink-fit iron bands. Slots in the bands provide attachments for lifting. In southern England the material was imported as pieces of rock, only assembled into complete millstones in local workshops. It was necessary to balance the completed runner stone with lead weights applied to the lighter side.
Composite stones, built up from pieces of emery, were introduced during the nineteenth century; they were found to be more suitable for grinding at the higher speeds available when auxiliary engines were adopted.
In Europe, a further type of millstone was used. These were uncommon in Britain, but not unknown:
Cullen stones (stones from Cologne), a form of black lava quarried in the Rhine Valley at Mayen near Cologne, Germany.
Lava stones from Orvieto (Italy), Mount Etna and Hyblaean Mounts (Sicily), and Pantelleria island, were used by the Romans.
Patterning
The surface of a millstone is divided by deep grooves called furrows into separate flat areas called lands. Spreading away from the furrows are smaller grooves called feathering or cracking. The grooves provide a cutting edge and help to channel the ground flour out from the stones.
The furrows and lands are arranged in repeating patterns called harps. A typical millstone will have six, eight, or ten harps. The pattern of harps is repeated on the face of each stone; when the stones are laid face to face, the patterns mesh in a kind of "scissoring" motion, creating the cutting or grinding action of the stones. When in regular use, stones need to be dressed periodically, that is, re-cut to keep the cutting surfaces sharp.
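The layout of the master furrows can be described geometrically: in a common dressing scheme, each master furrow is drawn tangent to a small imaginary circle around the eye (the draft circle), which gives the whole face its slanted, scissoring pattern. The short Python sketch below is a generic illustration of this geometry, not a record of any specific historical dress:

    import math

    # Illustrative layout of master furrows tangent to a draft circle.
    # R, d, and n_harps are assumed example values, not from any source.
    R = 0.60         # stone radius, m
    d = 0.08         # draft circle radius, m
    n_harps = 10     # number of harps (master furrows)

    L = math.sqrt(R * R - d * d)  # length from tangent point out to the rim
    for k in range(n_harps):
        theta = 2 * math.pi * k / n_harps
        # Tangent point on the draft circle and the unit tangent direction.
        x0, y0 = d * math.cos(theta), d * math.sin(theta)
        tx, ty = -math.sin(theta), math.cos(theta)
        x1, y1 = x0 + L * tx, y0 + L * ty   # furrow end at the rim
        print(f"furrow {k}: from ({x0:.3f}, {y0:.3f}) to ({x1:.3f}, {y1:.3f})")

Laying out every furrow tangent to the same circle, with the same handedness, is what makes two identically dressed faces shear past one another like scissor blades.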
The major challenge is to limit the heat generated by the pressure of the millstones on the ground flour. In addition to denaturing (browning) the flour, this overheating, together with any sparks generated by the rubbing of the stones, could cause an explosion in the mill, whose atmosphere is charged with fine flour particles. A complex system of furrows therefore had to be devised to ventilate the gap between the millstones and, at the same time, to push the material progressively from the eye to the periphery. Wheat millstones have long been used to grind cereals in a single pass, and the challenge was to find the best way of extracting the flour while leaving the bran unbroken and free of flour.
Millstones need to be evenly balanced, and achieving the correct separation of the stones is crucial to producing good quality flour. The experienced miller will be able to adjust their separation very accurately.
For the manufacture of the millstone, the customer had to specify the diameter, the size of the eye, and the direction of the furrows. Occasionally, a miller was mistaken about the direction of the furrows, as extracts from correspondence testify: "You tell us that your top wheels must be rifled to turn counter-clockwise. We therefore understand that these millstones must be radiused to turn counter-clockwise, i.e. in the opposite direction to that in which the sun seems to revolve around the earth". Despite all the precautions taken at the time of ordering, it sometimes happened that, in the event of a dispute, the maker was obliged to send someone to change the direction: "we sent a workman a hundred leagues from here to unravel, straighten and re-radiate two pairs of millstones; the profit is eaten twice".
Between the furrows, the millstone is covered with fine grooves, called feathering or cracking, also cut into the stone to make it more aggressive and thus better able to grind the grain. They run along the edge of the millstone, over a width of around 15 cm, to form the skirt. The furrows need to be re-cut regularly with a special hammer: the millstone is then said to be dressed (in French, "rhabillée"). This operation must be carried out after grinding around 50 tons of wheat. Special steel-hardening techniques enabled certain companies, such as Kupka in Germany, to produce picks and hammers that were particularly appreciated by millstone dressers. During the operation, the light blows raised a cloud of siliceous dust that could cause lung ailments in these specialized workers. In addition, the cutting of millstones led to occupational tattoos, with steel particles from the tools embedded under the skin. Eye injuries were also common.
Grinding with millstones
Grain is fed by gravity from the hopper into the feed-shoe. The shoe is agitated by a shoe handle running against an agitator (damsel) on the stone spindle, the shaft powering the runner stone. This mechanism regulates the feed of grain to the millstones by making the feed dependent on the speed of the runner stone. From the feed shoe the grain falls through the eye (the central hole) of the runner stone and is taken between the runner and the bed stone to be ground. The flour exits from between the stones at their periphery. The stone casing prevents the flour from falling on the floor; instead, it is taken to the meal spout, from where it can be bagged or processed further.
The runner stone is supported by the rind, a cross-shaped metal piece, on the spindle. The spindle is carried by the tentering gear, a set of beams forming a lever system, or a screw jack, with which the runner stone can be lifted or lowered slightly and the gap between the stones adjusted. The weight of the runner stone is significant (up to ) and it is this weight combined with the cutting action of the porous stone and the patterning that causes the milling process.
Millstones for some water-powered mills (such as Peirce Mill) spin at about 125 rpm.
Especially in the case of wind-powered mills, the turning speed can be irregular. Higher speed means more grain is fed to the stones by the feed-shoe, and grain exits the stones more quickly because of their faster turning speed. The miller then has to reduce the gap between the stones so that more of the runner's weight presses down on the grain and the grinding action is increased, preventing the grain from being ground too coarsely. This has the added benefit of increasing the load on the mill and so slowing it down. In the reverse case, the miller may have to raise the runner stone if the grain is milled too thoroughly, making it unsuitable for baking. In any case the stones should never touch during milling, as this would cause them to wear down rapidly. The process of lowering and raising the runner stone is called tentering and lightering. In many windmills it is automated by adding a centrifugal governor to the tentering gear.
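The governor's action can be pictured as a simple feedback rule: the faster the runner turns, the smaller the gap it sets. The following sketch is purely illustrative; the function name, target speed and constants are hypothetical and not taken from any historical mechanism.

```python
# Illustrative sketch (not a historical mechanism model): a proportional
# rule mimicking how a centrifugal governor narrows the stone gap as the
# runner speeds up and widens it as the runner slows down.
# All names and constants here are hypothetical.

def tentering_adjustment(rpm: float, target_rpm: float = 120.0,
                         base_gap_mm: float = 0.5,
                         sensitivity: float = 0.002) -> float:
    """Return an adjusted stone gap (mm) for the current runner speed.

    Faster than target -> smaller gap (more grinding pressure and more
    load, which also slows the mill); slower than target -> larger gap.
    """
    gap = base_gap_mm - sensitivity * (rpm - target_rpm)
    # The stones must never touch, so enforce a minimum clearance.
    return max(gap, 0.1)

for rpm in (100, 120, 140):
    print(rpm, "rpm ->", round(tentering_adjustment(rpm), 3), "mm gap")
```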
Depending on the type of grain to be milled and the power available the miller may adjust the feed of grain to stones beforehand by changing the amount of agitation of the feed-shoe or adjusting the size of the hopper outlet.
Milling by millstones is a one-step process in contrast with roller mills in modern mass production where milling takes place in many steps. It produces wholemeal flour which can be turned into white flour by sifting to remove the bran.
Symbolism
Symbolism in the Bible
Millstones were often essential objects within a community. For that reason, they gained multiple symbolic meanings within mythology, folklore, and the Bible. The Hebrew Bible admonishes (Deuteronomy 24:6): "No one shall take a lower millstone, nor an upper millstone, in pledge [for the payment of a debt], for that would be tantamount to taking away a life in pledge." The rabbis have explained that not only may a millstone not be taken as security for a pledge, but anything on which a person's life depends may not be taken as security.
The Bible makes heavy use of millstone symbolism in its various proverbs. A common one is the millstone's proverbial designation of something as a great weight, as seen in Matthew 18:6.
Likewise, owing to the exhausting physical labor associated with the earliest millstones, they were symbolic of hard work, regarded as a menial task given to the lowest of laborers. This is not the only symbolic meaning of millstones within the Bible; millstones were also used as a symbol of civilization, prosperity, and comfortable living.
Other symbolism
Outside the Bible, the millstone can be seen as a symbol of transformation, death, and rebirth, owing to the strenuous work that goes into using a millstone to grind grain into flour. Other symbolic meanings associated with millstones include fertility and abundance. In Korea, a practice existed in which the husband would use a millstone while his wife was in childbirth, hoping that he could share her pain. In both the Bible and folklore, the millstone can be associated with punishment: in some stories, a millstone is used to harm an individual for their behavior. Examples can be seen in "The Juniper Tree" and in Judges 9:53, where a millstone dropped on his head kills Abimelech.
Heraldry
In heraldry, as a demonstration of military bravado, a millstone features as the heraldic crest of John de Lisle, 2nd Baron Lisle (c.1318-1355), one of the founder knights of the Order of the Garter, as shown on his garter-plate in St George's Chapel, Windsor: A mill-stone argent pecked sable the inner circle and the rim of the second the fer-de-moline or, thus symbolising the superhuman strength necessary to support such a weight atop his helmet.
In its more basic heraldic form it is a charge symbolising industry. The fer-de-moline ("mill-iron") or millrind, which attaches to the millstone and transfers to it the torque of the drive-shaft, is also a common heraldic charge, used as canting arms by families named Mills, Milles, Turner, etc.
Photo gallery
| Technology | Industrial machinery | null |
167427 | https://en.wikipedia.org/wiki/Groundhog | Groundhog | The groundhog (Marmota monax), also known as the woodchuck, is a rodent of the family Sciuridae, belonging to the group of large ground squirrels known as marmots.
A lowland creature of North America, it is found through much of the Eastern United States, across Canada and into Alaska.
It was given its scientific name as Mus monax by Carl Linnaeus in 1758, based on a description of the animal by George Edwards, published in 1743.
The groundhog plays an important role maintaining healthy soil in woodlands and plains; as such, the species is considered a crucial habitat engineer. The groundhog is an extremely intelligent animal, forming complex social networks and kinship with its young; it is capable of understanding social behavior, communicating threats through whistling, and working cooperatively to accomplish tasks such as burrowing.
Etymology
Common names for the groundhog include chuck, wood-shock, groundpig, whistle-pig, whistler, thickwood badger, Canada marmot, monax, moonack, weenusk, red monk, land beaver and, among French Canadians in eastern Canada, siffleux. The name "thickwood badger" was given in the Northwest to distinguish the animal from the prairie badger. Monax is an Algonquian name of the woodchuck, which means "digger" (cf. Lenape). Young groundhogs may be called chucklings.
The etymology of the name woodchuck is unrelated to wood or any sense of chucking. It stems from an Algonquian (possibly Narragansett) name for the animal, wuchak. The similarity between the words has led to the popular tongue-twister:
How much wood would a woodchuck chuck
if a woodchuck could chuck wood?
A woodchuck would chuck all the wood he could
if a woodchuck could chuck wood!
Description
The groundhog is the largest sciurid in its geographical range, except in British Columbia, where its range may overlap that of its somewhat larger cousin, the hoary marmot. Adults may measure from in total length, including a tail of . Weights of adult groundhogs typically fall between .
Male groundhogs are slightly larger than females on average and, like all marmots, they are considerably heavier during autumn (when engaged in autumn hyperphagia) than when they emerge from hibernation in spring. Adult males average year-round weight , with spring to fall average weights of , while females average , with spring to fall averages of . Seasonal weight changes reflect circannual deposition and use of fat. Groundhogs attain progressively higher weights each year for the first two or three years, after which weight plateaus.
Groundhogs have four incisors, which grow per week. Constant usage wears them down by about that much each week. Unlike the incisors of many other rodents, the incisors of groundhogs are white to ivory-white. Groundhogs are well-adapted for digging, with powerful, short legs and broad, long claws. The groundhog's tail is shorter than that of other Sciuridae—only about one-fourth of body length.
Distribution and habitat
The groundhog dwells in lowland habitats, unlike other marmots that live in rocky and mountainous areas. Marmota monax has a wide geographic range. The groundhog prefers open country and the edges of woodland, being rarely found far from a burrow entrance. It can typically be found in small woodlots, low-elevation forests, fields and pastures, and hedgerows. It constructs dens in well-drained soil, and most groundhogs have summer and winter dens. Human activity has increased food access and abundance, allowing M. monax to thrive.
Behavior
W.J. Schoonmaker reports that groundhogs may hide when they see, smell, or hear an observer. Marmot researcher Ken Armitage states that the social biology of the groundhog is poorly studied.
Despite their heavy-bodied appearance, groundhogs are accomplished swimmers and occasionally climb trees when escaping predators or when they want to survey their surroundings. They prefer to retreat to their burrows when threatened; if the burrow is invaded, the groundhog tenaciously defends itself with its two large incisors and front claws. Groundhogs are generally agonistic and territorial toward conspecifics and may skirmish to establish dominance.
Outside their burrow, individuals are alert when not actively feeding. It is common to see one or more nearly motionless individuals standing erect on their hind feet watching for danger. When alarmed, they use a high-pitched whistle to warn the rest of the colony, hence the name "whistle-pig". Groundhogs may squeal when fighting, seriously injured, or caught by a predator. Other vocalizations include low barks and a sound produced by grinding their teeth. David P. Barash wrote that he witnessed only two occasions of upright play-fighting among woodchucks and that the upright posture of play-fighting involves sustained physical contact between individuals that may require a degree of social tolerance virtually unknown in M. monax. Alternatively, upright play-fighting may be a part of the woodchuck's behavioral repertoire that rarely is shown because of physical spacing and/or low social tolerance.
Diet
Mostly herbivorous, groundhogs eat primarily wild grasses and other vegetation, including berries, bark, leaves, and agricultural crops, when available. In early spring, dandelion and coltsfoot are important groundhog food items. Some additional foods include sheep sorrel, timothy-grass, buttercup, persicaria, agrimony, red and black raspberries, mulberries, buckwheat, plantain, wild lettuce, alfalfa, and all varieties of clover. Groundhogs also occasionally eat small animals, such as grubs, grasshoppers, snails, and even bird eggs and baby birds, but are not as omnivorous as many other Sciuridae.
An adult groundhog can eat more than of vegetation daily. In early June, woodchucks' metabolism slows, and while their food intake decreases, their weight increases by as much as 100% as they produce fat deposits to sustain them during hibernation and late winter. Instead of storing food, groundhogs stuff themselves to survive the winter without eating. Groundhogs are thought not to drink water, instead obtaining the liquids they need from the juices of the plants they eat, supplemented by rain or dew on the foliage.
Burrows
Groundhogs are excellent burrowers, using burrows for sleeping, rearing young, and hibernating. Groundhog burrows usually have two to five entrances, providing groundhogs their primary means of escape from predators. The volume of earth removed from groundhog burrows in one study averaged per den. The longest burrow measured in addition to two short side galleries.
Though groundhogs are the most solitary of the marmots, several individuals may occupy the same burrow.
Burrows can pose a serious threat to agricultural and residential development by damaging farm machinery and even undermining building foundations. However, in a June 7, 2009, Humane Society of the United States article, "How to Humanely Chuck a Woodchuck Out of Your Yard", John Griffin, director of Humane Wildlife Services, stated that you would have to have a lot of woodchucks working over a lot of years to create tunnel systems that would pose any risk to a structure.
The burrow is used for safety, retreat in bad weather, hibernating, sleeping, mating, and as a nursery. In addition to the nest, there is an excrement chamber. The hibernation or nest chamber is lined with dead leaves and dried grasses. The nest chamber may be about twenty inches to three feet below the ground surface. It is about wide and high. There are typically two burrow openings or holes: one is the main entrance, the other a spy hole. Descriptions of burrow length often include the side galleries. Excluding side galleries, Schoonmaker reports the longest was , and the average length of eleven dens was . W. H. Fisher investigated nine burrows, finding the deepest point down. The longest, including side galleries, was . Numbers of burrows per individual groundhog decrease with urbanization.
Bachman mentioned that when young groundhogs are a few months old, they prepare for separation, digging a number of holes in the area of their early home. Some of these holes were only a few feet deep and never occupied, but the numerous burrows gave the impression that groundhogs live in communities.
Abandoned groundhog burrows benefit many other species by providing shelter. They are used by cottontail rabbits, raccoons, foxes, river otters, eastern chipmunks, and a wide variety of small mammals, snakes, and birds.
Hibernation
Groundhogs are one of the few species that enter into true hibernation, and often build a separate "winter burrow" for this purpose. This burrow is usually in a wooded or brushy area; it is dug below the frost line and remains at a stable temperature well above freezing during the winter months. In most areas, groundhogs hibernate from October to March or April, but in more temperate areas, they may hibernate as little as three months. Groundhogs hibernate longer in northern latitudes than in southern latitudes. To survive the winter, they are at their maximum weight shortly before entering hibernation. When the groundhog enters hibernation, its body temperature drops to as low as , its heart rate falls to 4–10 beats per minute and its breathing rate falls to one breath every six minutes. During hibernation, they experience periods of torpor and arousal. Hibernating woodchucks lose as much as half their body weight by February. They emerge from hibernation with some remaining body fat to live on until the warmer spring weather produces abundant plant material for food. Males emerge from hibernation before females. Groundhogs are mostly diurnal and are often active early in the morning or late afternoon.
Reproduction
Groundhogs are considered the most solitary of the marmot species, though they sometimes live in aggregations, and their social organization varies across populations. Groundhogs do not form stable, long-term pair-bonds, and during the mating season male-female interactions are limited to copulation; in Ohio, however, adult males and females associate with each other throughout the year and often from year to year. Usually groundhogs breed in their second year, but a small proportion may breed in their first. The breeding season extends from early March to mid- or late April, after hibernation. Woodchucks are polygynous, but only alpine marmot and woodchuck females have been shown to mate with multiple males. A mated pair remains in the same den throughout the 31- to 32-day gestation period. As birth of the young approaches in April or May, the male leaves the den. One litter is produced annually. Female woodchucks give birth to one to nine offspring, with most litters ranging between three and five pups. Groundhog mothers introduce their young to the wild once their fur is grown in and they can see; at this time, if at all, the father returns to the family. By the end of August the family breaks up, or at least most of the young scatter to burrow on their own.
Health and mortality
In the wild, groundhogs can live up to six years, with two or three being the average life expectancy. In captivity, groundhogs reportedly live up to 14 years. Human development often leaves vacant land near the secondary forests that groundhogs favor, so groundhogs in well-developed areas face almost no predators other than humans (through various forms of pest control or vehicle collisions) and mid-to-large sized dogs.
Occasionally, woodchucks may suffer from parasitism, and a woodchuck may die from infestation or from bacteria transmitted by vectors. In areas of intensive agriculture and the dairying regions of the state of Wisconsin, particularly in southern areas, the woodchuck had been almost extirpated by 1950. Jackson (1961) suggested that exaggerated reports of damage done by the woodchuck led to excessive culling, substantially reducing its numbers in the state.
In some areas woodchucks are important game animals and are killed regularly for sport, food, or fur. In Kentucky, an estimated 267,500 M. monax were taken annually from 1964 to 1971. Woodchucks had protected status in the state of Wisconsin until 2017. Woodchuck numbers appear to have decreased in Illinois.
Natural predators
Wild predators of adult groundhogs in most of eastern North America include coyotes, badgers, bobcats, and foxes (largely the red fox). Many of these predators are successful stealth stalkers that catch groundhogs by surprise before they can escape to their burrows; badgers likely hunt them by digging them out of their burrows. Coyotes in particular are sizable enough to overpower any groundhog; in a statewide study in Pennsylvania, groundhogs were the third most significant prey species for coyotes.
Large predators such as the gray wolf and the eastern cougar are likely extirpated in the east but may still hunt groundhogs on occasion in Canada. Golden eagles can also prey on adult groundhogs but seldom occur in the same range or habitats as this marmot. Likewise, great horned owls can reportedly, per Bent (1938), prey upon groundhogs but rarely do so, given the temporal differences in their behaviors.
Young groundhogs (usually those less than a couple of months old) may also be taken by the American mink, and perhaps other small mustelids, cats, timber rattlesnakes, and hawks. Red-tailed hawks can take groundhogs at least up to the size of yearling juveniles, and northern goshawks may take even weakened adults emerging from hibernation in spring.
Beyond their large size, groundhogs have several successful anti-predator behaviors, usually retreating to the safety of their burrow which most predators will not attempt to enter, but also being ready to defend themselves with their sharp claws and large incisors. They can also scale trees to escape a threat.
Relationship with humans
Both their diet and their habit of burrowing make groundhogs serious nuisance animals around farms and gardens. They will eat many commonly grown vegetables. Extensive burrowing can undermine foundations.
Very often, the dens of groundhogs provide homes for other animals, including skunks, red foxes, and cottontail rabbits. Foxes and skunks feed upon field mice, grasshoppers, beetles, and other creatures that destroy farm crops. In aiding these animals, the groundhog indirectly helps the farmer. In addition to providing homes for itself and other animals, the groundhog aids in soil improvement by bringing subsoil to the surface. The groundhog is also a valuable game animal and is considered a difficult sport when hunted in a fair manner. In some parts of the U.S., they have been eaten.
A report in 1883 by the New Hampshire Legislative Woodchuck Committee describes the groundhog's objectionable character:
The committee concludes that, "a small bounty will prove of incalculable good; at all events, even as an experiment, it is certainly worth trying; therefore your committee would respectfully recommend that the accompanying bill be passed."
Groundhogs may be raised in captivity, but their aggressive nature can pose problems. Doug Schwartz, a zookeeper and groundhog trainer at the Staten Island Zoo, has been quoted as saying "They're known for their aggression, so you're starting from a hard place. His natural impulse is to kill 'em all and let God sort 'em out. You have to work to produce the sweet and cuddly." Groundhogs cared for in wildlife rehabilitation that survive but cannot be returned to the wild may remain with their caregivers and become educational ambassadors.
In the United States and Canada, the yearly Groundhog Day celebration on February 2 has given the groundhog recognition and popularity. The most popularly known of these groundhogs are Punxsutawney Phil, Wiarton Willie, Shubenacadie Sam, Jimmy the Groundhog, Dunkirk Dave, and Staten Island Chuck kept as part of Groundhog Day festivities in Punxsutawney, Pennsylvania; Wiarton, Ontario; Sun Prairie, Wisconsin; Dunkirk, New York; and Staten Island respectively. The 1993 comedy film Groundhog Day references several events related to Groundhog Day, and portrays both Punxsutawney Phil himself, and the annual Groundhog Day ceremony. Famous Southern groundhogs include General Beauregard Lee, based at Dauset Trails Nature Center outside Atlanta, Georgia.
Groundhogs are used in medical research on hepatitis B-induced liver cancer. A percentage of the woodchuck population is infected with the woodchuck hepatitis virus (WHV), which is similar to human hepatitis B virus. Humans cannot contract hepatitis from woodchucks with WHV, but the virus and its effects on the liver make the woodchuck the best available animal for the study of viral hepatitis in humans. The only other animal model for hepatitis B virus studies is the chimpanzee, an endangered species. Woodchucks are also used in biomedical research investigating metabolic function, obesity, energy balance, the endocrine system, reproduction, neurology, cardiovascular disease, cerebrovascular disease, and neoplastic disease. Researching the hibernation patterns of groundhogs may lead to benefits for humans, including lowering of the heart rate in complicated surgical procedures.
Groundhog burrows have revealed at least two archaeological sites, the Ufferman Site in the U.S. state of Ohio and Meadowcroft Rockshelter in Pennsylvania. Archaeologists have never excavated the Ufferman Site, but the activities of local groundhogs have revealed numerous artifacts. The groundhogs favor the loose soil of the esker on which the site lies, and their burrow digging has brought many objects to the surface: human and animal bones, pottery, and bits of stone. Woodchuck remains were found in the Indian mounds at Aztalan, Jefferson County, Wisconsin.
Robert Frost's poem "A Drumlin Woodchuck" uses the imagery of a groundhog dug into a small ridge as a metaphor for his emotional reticence.
| Biology and health sciences | Rodents | Animals |
167513 | https://en.wikipedia.org/wiki/Sphalerite | Sphalerite | Sphalerite is a sulfide mineral with the chemical formula (Zn,Fe)S. It is the most important ore of zinc. Sphalerite is found in a variety of deposit types, but primarily in sedimentary exhalative, Mississippi Valley-type, and volcanogenic massive sulfide deposits. It is found in association with galena, chalcopyrite, pyrite (and other sulfides), calcite, dolomite, quartz, rhodochrosite, and fluorite.
German geologist Ernst Friedrich Glocker discovered sphalerite in 1847, naming it based on the Greek word sphaleros, meaning "deceiving", due to the difficulty of identifying the mineral.
In addition to zinc, sphalerite is an ore of cadmium, gallium, germanium, and indium. Miners have been known to refer to sphalerite as zinc blende, black-jack, and ruby blende. Marmatite is an opaque black variety with a high iron content.
Crystal habit and structure
Sphalerite crystallizes in the face-centered cubic zincblende crystal structure, which is named after the mineral. This structure is a member of the hextetrahedral crystal class (space group F4̄3m). In the crystal structure, both the sulfur and the zinc or iron ions occupy the points of a face-centered cubic lattice, with the two lattices displaced from each other such that the zinc and iron are tetrahedrally coordinated to the sulfur ions, and vice versa. Minerals similar to sphalerite include those in the sphalerite group, consisting of sphalerite, coloradoite, hawleyite, metacinnabar, stilleite and tiemannite. The structure is closely related to the structure of diamond. The hexagonal polymorph of sphalerite is wurtzite, and the trigonal polymorph is matraite. Wurtzite is the higher-temperature polymorph, stable at temperatures above . The lattice constant for zinc sulfide in the zinc blende crystal structure is 0.541 nm. Sphalerite has been found as a pseudomorph, taking the crystal structure of galena, tetrahedrite, barite and calcite. Sphalerite can have spinel law twins, where the twin axis is [111].
The chemical formula of sphalerite is (Zn,Fe)S; the iron content generally increases with increasing formation temperature and can reach up to 40%. The material can be considered a ternary compound between the binary endpoints ZnS and FeS, with composition ZnxFe(1-x)S, where x can range from 1 (pure ZnS) down to 0.6.
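To make the composition relation concrete, the sketch below converts the mole fraction x in ZnxFe(1-x)S into mole-percent FeS and weight-percent iron using standard atomic masses. This is simple illustrative arithmetic; the function name and the printed sample values are not from the source.

```python
# A minimal sketch: for the solid solution Zn_x Fe_(1-x) S, convert the
# mole fraction x into mole-percent FeS and weight-percent Fe using
# standard atomic masses. Purely illustrative arithmetic.

ATOMIC_MASS = {"Zn": 65.38, "Fe": 55.845, "S": 32.06}

def iron_content(x: float) -> tuple[float, float]:
    """Return (mol% FeS, wt% Fe) for Zn_x Fe_(1-x) S with 1 >= x >= 0.6."""
    mol_fes = (1 - x) * 100
    total_mass = (x * ATOMIC_MASS["Zn"] + (1 - x) * ATOMIC_MASS["Fe"]
                  + ATOMIC_MASS["S"])
    wt_fe = (1 - x) * ATOMIC_MASS["Fe"] / total_mass * 100
    return mol_fes, wt_fe

for x in (1.0, 0.9, 0.6):
    mol_fes, wt_fe = iron_content(x)
    print(f"x={x}: {mol_fes:.0f} mol% FeS, {wt_fe:.1f} wt% Fe")
```

Note that x = 0.6 corresponds to 40 mol% FeS substitution but a lower weight fraction of iron, so the "up to 40%" figure is best read as a molar substitution limit.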
All natural sphalerite contains concentrations of various impurities, which generally substitute for zinc in the cation position in the lattice; the most common cation impurities are cadmium, mercury and manganese, but gallium, germanium and indium may also be present in relatively high concentrations (hundreds to thousands of ppm). Cadmium can replace up to 1% of zinc and manganese is generally found in sphalerite with high iron abundances. Sulfur in the anion position can be substituted for by selenium and tellurium. The abundances of these impurities are controlled by the conditions under which the sphalerite formed; formation temperature, pressure, element availability and fluid composition are important controls.
Properties
Physical properties
Sphalerite possesses perfect dodecahedral cleavage, having six cleavage planes. In pure form, it is a semiconductor, but transitions to a conductor as the iron content increases. It has a hardness of 3.5 to 4 on the Mohs scale of mineral hardness.
It can be distinguished from similar minerals by its perfect cleavage, its distinctive resinous luster, and the reddish-brown streak of the darker varieties.
Optical properties
Pure zinc sulfide is a wide-bandgap semiconductor, with bandgap of about 3.54 electron volts, which makes the pure material transparent in the visible spectrum. Increasing iron content will make the material opaque, while various impurities can give the crystal a variety of colors. In thin section, sphalerite exhibits very high positive relief and appears colorless to pale yellow or brown, with no pleochroism.
The refractive index of sphalerite (as measured via sodium light, average wavelength 589.3 nm) ranges from 2.37 when it is pure ZnS to 2.50 when there is 40% iron content. Sphalerite is isotropic under cross-polarized light; however, sphalerite can experience birefringence if intergrown with its polymorph wurtzite, and the birefringence can increase from 0 (0% wurtzite) up to 0.022 (100% wurtzite).
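As a rough illustration of the quoted endpoints, the sketch below interpolates linearly between n = 2.37 (pure ZnS) and n = 2.50 (40% iron). The linearity is an assumption made here purely for illustration; the source gives only the two endpoints.

```python
# A toy interpolation, assuming (this is an assumption, not a sourced
# relation) that refractive index varies linearly between the two quoted
# endpoints: n = 2.37 for pure ZnS and n = 2.50 at 40% iron content.

def estimate_refractive_index(fe_percent: float) -> float:
    if not 0 <= fe_percent <= 40:
        raise ValueError("quoted range covers 0-40% iron only")
    return 2.37 + (2.50 - 2.37) * (fe_percent / 40)

print(estimate_refractive_index(0))    # 2.37
print(estimate_refractive_index(20))   # ~2.435
print(estimate_refractive_index(40))   # 2.50
```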
Depending on the impurities, sphalerite will fluoresce under ultraviolet light. Sphalerite can also be triboluminescent, with a characteristic yellow-orange triboluminescence. Typically, specimens cut into end-slabs are ideal for displaying this property.
Varieties
Gemmy, colorless to pale green sphalerite from Franklin, New Jersey (see Franklin Furnace), is highly fluorescent orange and/or blue under longwave ultraviolet light and is known as cleiophane, an almost pure ZnS variety. Cleiophane contains less than 0.1% iron in the sphalerite crystal structure. Marmatite or christophite is an opaque black variety of sphalerite whose coloring is due to high quantities of iron, which can reach up to 25%; marmatite is named after the Marmato mining district in Colombia and christophite after the St. Christoph mine in Breitenbrunn, Saxony. Neither marmatite nor cleiophane is recognized by the International Mineralogical Association (IMA). Red, orange or brownish-red sphalerite is termed ruby blende or ruby zinc, whereas dark colored sphalerite is termed black-jack.
Deposit types
Sphalerite is amongst the most common sulfide minerals and is found worldwide in a wide variety of deposit types: skarns, hydrothermal deposits, sedimentary beds, volcanogenic massive sulfide (VMS) deposits, Mississippi Valley-type (MVT) deposits, granite and coal.
Sedimentary exhalative
Approximately 50% of zinc (from sphalerite) and lead come from sedimentary exhalative (SEDEX) deposits, which are stratiform Pb-Zn sulfides that form at seafloor vents. The metals precipitate from hydrothermal fluids and are hosted by shales, carbonates and organic-rich siltstones in back-arc basins and failed continental rifts. The main ore minerals in SEDEX deposits are sphalerite, galena, pyrite, pyrrhotite and marcasite, with minor sulfosalts such as tetrahedrite-freibergite and boulangerite; the zinc + lead grade typically ranges between 10 and 20%. Important SEDEX mines are Red Dog in Alaska, the Sullivan Mine in British Columbia, Mount Isa and Broken Hill in Australia and Mehdiabad in Iran.
Mississippi-Valley type
Similar to SEDEX, Mississippi-Valley type (MVT) deposits are also a Pb-Zn deposit which contains sphalerite. However, they only account for 15–20% of zinc and lead, are 25% smaller in tonnage than SEDEX deposits and have lower grades of 5–10% Pb + Zn. MVT deposits form from the replacement of carbonate host rocks such as dolostone and limestone by ore minerals; they are located in platforms and foreland thrust belts. Furthermore, they are stratabound, typically Phanerozoic in age and epigenetic (form after the lithification of the carbonate host rocks). The ore minerals are the same as SEDEX deposits: sphalerite, galena, pyrite, pyrrhotite and marcasite, with minor sulfosalts. Mines that contain MVT deposits include Polaris in the Canadian arctic, Mississippi River in the United States, Pine Point in Northwest Territories, and Admiral Bay in Australia.
Volcanogenic massive sulfide
Volcanogenic massive sulfide (VMS) deposits can be Cu-Zn- or Zn-Pb-Cu-rich and account for 25% of Zn in reserves. There are various types of VMS deposits with a range of regional contexts and host rock compositions; a common characteristic is that they are all hosted by submarine volcanic rocks. They form from metals such as copper and zinc being transported by hydrothermal fluids (modified seawater), which leach them from volcanic rocks in the oceanic crust; the metal-saturated fluid rises through fractures and faults to the surface, where it cools and deposits the metals as a VMS deposit. The most abundant ore minerals are pyrite, chalcopyrite, sphalerite and pyrrhotite. Mines that contain VMS deposits include Kidd Creek in Ontario, the Urals in Russia, Troodos in Cyprus, and Besshi in Japan.
Localities
The top producers of sphalerite include the United States, Russia, Mexico, Germany, Australia, Canada, China, Ireland, Peru, Kazakhstan and England.
Sources of high quality crystals include:
Uses
Metal ore
Sphalerite is an important ore of zinc; around 95% of all primary zinc is extracted from sphalerite ore. However, due to its variable trace element content, sphalerite is also an important source of several other metals, such as cadmium, gallium, germanium, and indium, which substitute for zinc. The ore was originally called blende by miners (from German blenden, "to blind or deceive") because it resembles galena but yields no lead.
Brass and bronze
The zinc in sphalerite is used to produce brass, an alloy of copper with 3–45% zinc. Major-element alloy compositions of brass objects provide evidence that sphalerite was being used to produce brass in the Islamic world as far back as the medieval period, between the 7th and 16th centuries CE. Sphalerite may also have been used during the cementation process of brass in northern China during the 12th–13th centuries CE (Jin dynasty). Besides brass, the zinc in sphalerite can also be used to produce certain types of bronze; bronze is dominantly copper, alloyed with other metals such as tin, zinc, lead, nickel, iron and arsenic.
Other
Yule Marble – sphalerite is found as inclusions in Yule marble, which was used as a building material for the Lincoln Memorial and the Tomb of the Unknown Soldier.
Galvanized iron – zinc from sphalerite is used as a protective coating to prevent corrosion and rusting; it is used on power transmission towers, nails and automobiles.
Batteries.
Gemstone.
Gallery
| Physical sciences | Minerals | Earth science |
167518 | https://en.wikipedia.org/wiki/Greenockite | Greenockite | Greenockite, also cadmium blende or cadmium ochre (obsolete), is a rare cadmium-bearing metal sulfide mineral consisting of cadmium sulfide (CdS) in crystalline form. Greenockite crystallizes in the hexagonal system. It occurs as massive encrustations and as hemimorphic six-sided pyramidal crystals which vary in color from honey yellow through shades of red to brown. The Mohs hardness is 3 to 3.5 and the specific gravity is 4.8 to 4.9.
Greenockite belongs to the wurtzite group and is isostructural with it at high temperatures. It is also isostructural with sphalerite at low temperatures. It occurs with other sulfide minerals such as sphalerite and galena, and is the only ore mineral of cadmium. Most cadmium is recovered as a byproduct of copper, zinc, and lead mining. It is also known from the lead-zinc districts of the central United States.
It was first recognized in 1840 in Bishopton, Scotland, during the cutting of a tunnel for the Glasgow, Paisley and Greenock Railway. The mineral was named after the land owner Lord Greenock (1783–1859).
Use
Greenockite, also known as "cadmium ochre", was used as a yellow pigment before cadmium was recognized as a toxic element. The extracted cadmium has various industrial uses, such as in electrical nickel-cadmium (NiCd) rechargeable batteries, electroplating, high-temperature alloys, the plating of steel and other easily corroded metals, and control rods for some nuclear reactors.
| Physical sciences | Minerals | Earth science |
167520 | https://en.wikipedia.org/wiki/Guyot | Guyot | In marine geology, a guyot (), also called a tablemount, is an isolated underwater volcanic mountain (seamount) with a flat top more than below the surface of the sea. The diameters of these flat summits can exceed . Guyots are most commonly found in the Pacific Ocean, but they have been identified in all the oceans except the Arctic Ocean. They are analogous to tables (such as mesas) on land.
History
Guyots were first recognized in 1945 by Harry Hammond Hess, who collected data using echo-sounding equipment on a ship he commanded during World War II. His data showed that some undersea mountains had flat tops. Hess called these undersea mountains "guyots", after the 19th-century geographer Arnold Henry Guyot. Hess postulated they were once volcanic islands that were beheaded by wave action, yet they are now deep under sea level. This idea was used to help bolster the theory of plate tectonics.
Formation
Guyots show evidence of having once been above the surface, with gradual subsidence through stages: fringing-reefed mountain, coral atoll, and finally flat-topped submerged mountain. Seamounts are made by extrusion of lavas piped upward in stages from sources within the Earth's mantle, usually hotspots, to vents on the seafloor. The volcanism invariably ceases after a time, and other processes dominate. When an undersea volcano grows high enough to be near or breach the ocean surface, wave action or coral reef growth tends to create a flat-topped edifice. However, all ocean crust and guyots form from hot magma or rock, which cools over time. As the lithosphere that the future guyot rides on slowly cools, it becomes denser and sinks lower into Earth's mantle, through the process of isostasy. In addition, the erosive effects of waves and currents are found mostly near the surface: the tops of guyots generally lie below this higher-erosion zone.
This is the same process that gives rise to higher seafloor topography at oceanic ridges, such as the Mid-Atlantic Ridge in the Atlantic Ocean, and deeper ocean at abyssal plains and oceanic trenches, such as the Mariana Trench. Thus, the island or shoal that will eventually become a guyot slowly subsides over millions of years. In the right climatic regions, coral growth can sometimes keep pace with the subsidence, resulting in coral atoll formation, but eventually the corals dip too deep to grow and the island becomes a guyot. The greater the amount of time that passes, the deeper the guyots become.
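The cooling-and-sinking argument can be quantified with a textbook age-depth approximation for young ocean floor (depth in metres roughly 2500 plus 350 times the square root of the age in millions of years, a Parsons-and-Sclater-style parameterization). The constants are approximate and the sketch is illustrative, not a model of any particular guyot.

```python
import math

# A minimal sketch of thermal subsidence using a textbook approximation
# (constants approximate, Parsons & Sclater-style) for seafloor younger
# than ~70 Myr: depth (m) ~= 2500 + 350 * sqrt(age in Myr).
# It illustrates why a flat-topped volcanic edifice sinks ever deeper
# as the plate beneath it cools; not a model of any specific guyot.

def seafloor_depth_m(age_myr: float) -> float:
    if not 0 <= age_myr <= 70:
        raise ValueError("approximation holds for young lithosphere only")
    return 2500 + 350 * math.sqrt(age_myr)

for age in (0, 10, 40, 70):
    print(f"{age:>2} Myr old crust: ~{seafloor_depth_m(age):.0f} m deep")
```

Once erosion or reef growth has planed a summit near sea level, its later depth roughly tracks this deepening of the surrounding seafloor.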
Seamounts provide data on movements of the tectonic plates on which they ride, and on the rheology of the underlying lithosphere. The trend of a seamount chain traces the direction of motion of the lithospheric plate over a more or less fixed heat source in the underlying asthenosphere, the part of the Earth's mantle beneath the lithosphere. There are thought to be an estimated 50,000 seamounts in the Pacific basin. The Hawaiian–Emperor seamount chain is an excellent example of an entire volcanic chain undergoing this process, from active volcanism, to coral reef growth, to atoll formation, to subsidence of the islands and becoming guyots.
Characteristics
The steepness gradient of most guyots is about 20 degrees. To technically be considered a guyot or tablemount, they must stand at least tall. One guyot in particular, the Great Meteor Tablemount in the Northeast Atlantic Ocean, stands at more than high, with a diameter of . However, there are many undersea mounts that can range from just less than to around . Very large oceanic volcanic constructions, hundreds of kilometres across, are called oceanic plateaus. Guyots have a mean area of , which is much larger than typical seamounts, which have a mean area of .
There are 283 known guyots in the world's oceans, with the North Pacific having 119, the South Pacific 77, the South Atlantic 43, the Indian Ocean 28, the North Atlantic eight, the Southern Ocean six, and the Mediterranean Sea two; there are none known in the Arctic Ocean, though one is found along the Fram Strait off northeastern Greenland. Guyots are also associated with specific lifeforms and varying amounts of organic matter. Local increases in chlorophyll a, enhanced carbon incorporation rates and changes in phytoplankton species composition are associated with guyots and other seamounts.
| Physical sciences | Oceanic and coastal landforms | Earth science |
167544 | https://en.wikipedia.org/wiki/Transcription%20%28biology%29 | Transcription (biology) | Transcription is the process of copying a segment of DNA into RNA for the purpose of gene expression. Some segments of DNA are transcribed into RNA molecules that can encode proteins, called messenger RNA (mRNA). Other segments of DNA are transcribed into RNA molecules called non-coding RNAs (ncRNAs).
Both DNA and RNA are nucleic acids, which use base pairs of nucleotides as a complementary language. During transcription, a DNA sequence is read by an RNA polymerase, which produces a complementary, antiparallel RNA strand called a primary transcript.
In virology, the term transcription is used when referring to mRNA synthesis from a viral RNA molecule. The genome of many RNA viruses is composed of negative-sense RNA, which acts as a template for positive-sense viral messenger RNA, a necessary step in the synthesis of viral proteins needed for viral replication. This process is catalyzed by a viral RNA-dependent RNA polymerase.
Background
A DNA transcription unit encoding for a protein may contain both a coding sequence, which will be translated into the protein, and regulatory sequences, which direct and regulate the synthesis of that protein. The regulatory sequence before (upstream from) the coding sequence is called the five prime untranslated region (5'UTR); the sequence after (downstream from) the coding sequence is called the three prime untranslated region (3'UTR).
As opposed to DNA replication, transcription results in an RNA complement that includes the nucleotide uracil (U) in all instances where thymine (T) would have occurred in a DNA complement.
Only one of the two DNA strands serves as a template for transcription. The antisense strand of DNA is read by RNA polymerase from the 3' end to the 5' end during transcription (3' → 5'). The complementary RNA is created in the opposite direction, in the 5' → 3' direction, matching the sequence of the sense strand except switching uracil for thymine. This directionality is because RNA polymerase can only add nucleotides to the 3' end of the growing mRNA chain. This use of only the 3' → 5' DNA strand eliminates the need for the Okazaki fragments that are seen in DNA replication. This also removes the need for an RNA primer to initiate RNA synthesis, as is the case in DNA replication.
The non-template (sense) strand of DNA is called the coding strand, because its sequence is the same as the newly created RNA transcript (except for the substitution of uracil for thymine). This is the strand that is used by convention when presenting a DNA sequence.
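The strand relationships described above can be captured in a few lines: reading the template strand 3' to 5' and complementing each base yields the same 5' to 3' transcript as simply replacing T with U in the coding strand. A minimal sketch (the function names and example sequence are illustrative):

```python
# A minimal sketch of the base-pairing rules described above.
# Given the template (antisense) strand read 3'->5', the polymerase
# builds RNA 5'->3'; equivalently, the transcript equals the coding
# strand with every T replaced by U.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe_from_template(template_3to5: str) -> str:
    """Template given 3'->5'; returns mRNA 5'->3'."""
    return "".join(DNA_TO_RNA[base] for base in template_3to5)

def transcribe_from_coding(coding_5to3: str) -> str:
    """Coding strand given 5'->3'; returns mRNA 5'->3'."""
    return coding_5to3.replace("T", "U")

coding = "ATGGCATTC"                       # 5'->3'
template = "TACCGTAAG"                     # 3'->5', complementary
assert transcribe_from_template(template) == transcribe_from_coding(coding)
print(transcribe_from_coding(coding))      # AUGGCAUUC
```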
Transcription has some proofreading mechanisms, but they are fewer and less effective than the controls for copying DNA. As a result, transcription has a lower copying fidelity than DNA replication.
Major steps
Transcription is divided into initiation, promoter escape, elongation, and termination.
Setting up for transcription
Enhancers, transcription factors, Mediator complex, and DNA loops in mammalian transcription
Setting up for transcription in mammals is regulated by many cis-regulatory elements, including core promoter and promoter-proximal elements that are located near the transcription start sites of genes. Core promoters combined with general transcription factors are sufficient to direct transcription initiation, but generally have low basal activity. Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements. Among this constellation of elements, enhancers and their associated transcription factors have a leading role in the initiation of gene transcription. An enhancer localized in a DNA region distant from the promoter of a gene can have a very large effect on gene transcription, with some genes undergoing up to 100-fold increased transcription due to an activated enhancer.
Enhancers are regions of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene transcription programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. While there are hundreds of thousands of enhancer DNA regions, for a particular type of tissue only specific enhancers are brought into proximity with the promoters that they regulate. In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to their target promoters. Multiple enhancers, each often tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and can coordinate with each other to control transcription of their common target gene.
The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function specific transcription factors (there are about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer, and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern the level of transcription of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter.
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two enhancer RNAs (eRNAs) as illustrated in the Figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of transcription factor bound to enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene.
CpG island methylation and demethylation
Transcription regulation at about 60% of promoters is also controlled by methylation of cytosines within CpG dinucleotides (where a 5' cytosine is followed by a 3' guanine, i.e. CpG sites). 5-methylcytosine (5-mC) is a methylated form of the DNA base cytosine (see Figure). 5-mC is an epigenetic marker found predominantly within CpG sites. About 28 million CpG dinucleotides occur in the human genome. In most tissues of mammals, on average, 70% to 80% of CpG cytosines are methylated (forming 5-methylCpG or 5-mCpG). However, unmethylated cytosines within 5'cytosine-guanine 3' sequences often occur in groups, called CpG islands, at active promoters. About 60% of promoter sequences have a CpG island while only about 6% of enhancer sequences have a CpG island. CpG islands constitute regulatory sequences, since methylation of the CpG island in a gene's promoter can reduce or silence transcription of that gene.
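A widely cited heuristic for recognizing CpG-island-like sequence (in the style of Gardiner-Garden and Frommer: GC fraction above 0.5 and an observed-to-expected CpG ratio above 0.6) can be sketched in a few lines. The thresholds follow that published rule of thumb, but the treatment of the sequence as a single window and the example sequences are illustrative only.

```python
# A small sketch of a widely used CpG-island heuristic
# (Gardiner-Garden & Frommer-style thresholds: GC fraction > 0.5 and
# observed/expected CpG ratio > 0.6 over a window). The thresholds and
# single-window treatment here are illustrative, not canonical.

def cpg_stats(seq: str) -> tuple[float, float]:
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg_observed = seq.count("CG")
    # Expected CpG count if C and G were independently distributed.
    cpg_expected = (c * g) / n if n else 0.0
    gc_fraction = (c + g) / n if n else 0.0
    obs_exp = cpg_observed / cpg_expected if cpg_expected else 0.0
    return gc_fraction, obs_exp

def looks_like_cpg_island(seq: str) -> bool:
    gc, ratio = cpg_stats(seq)
    return gc > 0.5 and ratio > 0.6

print(looks_like_cpg_island("CGCGGGCGCGTACGCGCGGC"))  # True
print(looks_like_cpg_island("ATATTTAATTGCATATATAT"))  # False
```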
DNA methylation regulates gene transcription through interaction with methyl binding domain (MBD) proteins, such as MeCP2, MBD1 and MBD2. These MBD proteins bind most strongly to highly methylated CpG islands. These MBD proteins have both a methyl-CpG-binding domain as well as a transcription repression domain. They bind to methylated DNA and guide or direct protein complexes with chromatin remodeling and/or histone modifying activity to methylated CpG islands. MBD proteins generally repress local chromatin such as by catalyzing the introduction of repressive histone marks, or creating an overall repressive chromatin environment through nucleosome remodeling and chromatin reorganization.
As noted in the previous section, transcription factors are proteins that bind to specific DNA sequences in order to regulate the expression of a gene. The binding sequence for a transcription factor in DNA is usually about 10 or 11 nucleotides long. In a 2009 summary, Vaquerizas et al. indicated there are approximately 1,400 different transcription factors encoded in the human genome, by genes that constitute about 6% of all human protein-encoding genes. About 94% of transcription factor binding sites (TFBSs) that are associated with signal-responsive genes occur in enhancers, while only about 6% of such TFBSs occur in promoters.
EGR1 protein is a particular transcription factor that is important for regulation of methylation of CpG islands. An EGR1 transcription factor binding site is frequently located in enhancer or promoter sequences. There are about 12,000 binding sites for EGR1 in the mammalian genome and about half of EGR1 binding sites are located in promoters and half in enhancers. The binding of EGR1 to its target DNA binding site is insensitive to cytosine methylation in the DNA.
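Because transcription factor binding sites are short (about 10 or 11 nucleotides, as noted above), locating candidate sites reduces to a simple motif scan. The sketch below uses GCG(T/G)GGGCG, a commonly cited EGR1/Zif268 consensus; treat the motif, the function name and the example sequence as illustrative assumptions rather than a definitive site definition.

```python
# A naive motif scan illustrating how short (here 9-bp) transcription
# factor binding sites can be located in a sequence. The consensus
# GCG(T/G)GGGCG is a commonly cited EGR1/Zif268 site; treat it as an
# illustrative assumption rather than a definitive definition.

import re

EGR1_CONSENSUS = re.compile(r"GCG[TG]GGGCG")

def find_sites(seq: str) -> list[int]:
    """Return 0-based start positions of consensus matches."""
    return [m.start() for m in EGR1_CONSENSUS.finditer(seq.upper())]

promoter = "TTAGCGTGGGCGATTACGCGGGGGCGTT"  # hypothetical sequence
print(find_sites(promoter))   # [3, 17]
```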
While only small amounts of EGR1 transcription factor protein are detectable in cells that are un-stimulated, translation of the EGR1 gene into protein at one hour after stimulation is drastically elevated. Production of EGR1 transcription factor proteins, in various types of cells, can be stimulated by growth factors, neurotransmitters, hormones, stress and injury. In the brain, when neurons are activated, EGR1 proteins are up-regulated and they bind to (recruit) the pre-existing TET1 enzymes that are produced in high amounts in neurons. TET enzymes can catalyse demethylation of 5-methylcytosine. When EGR1 transcription factors bring TET1 enzymes to EGR1 binding sites in promoters, the TET enzymes can demethylate the methylated CpG islands at those promoters. Upon demethylation, these promoters can then initiate transcription of their target genes. Hundreds of genes in neurons are differentially expressed after neuron activation through EGR1 recruitment of TET1 to methylated regulatory sequences in their promoters.
The methylation of promoters is also altered in response to signals. The three mammalian DNA methyltransferases (DNMT1, DNMT3A, and DNMT3B) catalyze the addition of methyl groups to cytosines in DNA. While DNMT1 is a maintenance methyltransferase, DNMT3A and DNMT3B can carry out new methylations. There are also two splice protein isoforms produced from the DNMT3A gene: the DNA methyltransferase proteins DNMT3A1 and DNMT3A2.
The splice isoform DNMT3A2 behaves like the product of a classical immediate-early gene and, for instance, it is robustly and transiently produced after neuronal activation. Where the DNA methyltransferase isoform DNMT3A2 binds and adds methyl groups to cytosines appears to be determined by histone post-translational modifications.
On the other hand, neural activation causes degradation of DNMT3A1 accompanied by reduced methylation of at least one evaluated targeted promoter.
Initiation
Transcription begins with the RNA polymerase and one or more general transcription factors binding to a DNA promoter sequence to form an RNA polymerase-promoter closed complex. In the closed complex, the promoter DNA is still fully double-stranded.
RNA polymerase, assisted by one or more general transcription factors, then unwinds approximately 14 base pairs of DNA to form an RNA polymerase-promoter open complex. In the open complex, the promoter DNA is partly unwound and single-stranded. The exposed, single-stranded DNA is referred to as the "transcription bubble".
RNA polymerase, assisted by one or more general transcription factors, then selects a transcription start site in the transcription bubble, binds to an initiating NTP and an extending NTP (or a short RNA primer and an extending NTP) complementary to the transcription start site sequence, and catalyzes bond formation to yield an initial RNA product.
In bacteria, RNA polymerase holoenzyme consists of five subunits: 2 α subunits, 1 β subunit, 1 β' subunit, and 1 ω subunit. In bacteria, there is one general RNA transcription factor known as a sigma factor. RNA polymerase core enzyme binds to the bacterial general transcription (sigma) factor to form RNA polymerase holoenzyme and then binds to a promoter.
(RNA polymerase is called a holoenzyme when the sigma subunit is attached to the core enzyme, which consists of 2 α subunits, 1 β subunit and 1 β' subunit only.) Unlike in eukaryotes, the initiating nucleotide of nascent bacterial mRNA is not capped with a modified guanine nucleotide. The initiating nucleotide of bacterial transcripts bears a 5′ triphosphate (5′-PPP), which can be used for genome-wide mapping of transcription initiation sites.
In archaea and eukaryotes, RNA polymerase contains subunits homologous to each of the five RNA polymerase subunits in bacteria and also contains additional subunits. In archaea and eukaryotes, the functions of the bacterial general transcription factor sigma are performed by multiple general transcription factors that work together. In archaea, there are three general transcription factors: TBP, TFB, and TFE. In eukaryotes, in RNA polymerase II-dependent transcription, there are six general transcription factors: TFIIA, TFIIB (an ortholog of archaeal TFB), TFIID (a multisubunit factor in which the key subunit, TBP, is an ortholog of archaeal TBP), TFIIE (an ortholog of archaeal TFE), TFIIF, and TFIIH. The TFIID is the first component to bind to DNA due to binding of TBP, while TFIIH is the last component to be recruited. In archaea and eukaryotes, the RNA polymerase-promoter closed complex is usually referred to as the "preinitiation complex".
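The ortholog correspondences just described can be summarized in a small lookup. The mapping restates the text; the assembly order listed for the eukaryotic factors reflects the commonly cited sequential model (TFIID binding first, TFIIH recruited last) and is included as a hedged illustration, since the source only states those two endpoints.

```python
# A small lookup encoding the ortholog correspondences stated above
# between archaeal general transcription factors and their eukaryotic
# RNA polymerase II counterparts. Purely a restatement of the text.

ARCHAEAL_TO_EUKARYOTIC_GTF = {
    "TBP": "TBP (key subunit of TFIID)",
    "TFB": "TFIIB",
    "TFE": "TFIIE",
}

# Commonly cited sequential assembly order (an assumption beyond the
# text, which only fixes TFIID first and TFIIH last).
EUKARYOTIC_GTF_ORDERED = ["TFIID", "TFIIA", "TFIIB", "TFIIF", "TFIIE", "TFIIH"]

for archaeal, eukaryotic in ARCHAEAL_TO_EUKARYOTIC_GTF.items():
    print(f"archaeal {archaeal} -> eukaryotic {eukaryotic}")
```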
Transcription initiation is regulated by additional proteins, known as activators and repressors, and, in some cases, associated coactivators or corepressors, which modulate formation and function of the transcription initiation complex.
Promoter escape
After the first bond is synthesized, the RNA polymerase must escape the promoter. During this time there is a tendency to release the RNA transcript and produce truncated transcripts. This is called abortive initiation, and is common for both eukaryotes and prokaryotes. Abortive initiation continues to occur until an RNA product of a threshold length of approximately 10 nucleotides is synthesized, at which point promoter escape occurs and a transcription elongation complex is formed.
Mechanistically, promoter escape occurs through DNA scrunching, providing the energy needed to break interactions between RNA polymerase holoenzyme and the promoter.
In bacteria, it was historically thought that the sigma factor is definitely released after promoter clearance occurs. This theory had been known as the obligate release model. However, later data showed that upon and following promoter clearance, the sigma factor is released according to a stochastic model known as the stochastic release model.
In eukaryotes, at an RNA polymerase II-dependent promoter, upon promoter clearance, TFIIH phosphorylates serine 5 on the carboxy terminal domain of RNA polymerase II, leading to the recruitment of capping enzyme (CE). The exact mechanism of how CE induces promoter clearance in eukaryotes is not yet known.
Elongation
One strand of the DNA, the template strand (or noncoding strand), is used as a template for RNA synthesis. As transcription proceeds, RNA polymerase traverses the template strand and uses base pairing complementarity with the DNA template to create an RNA copy (which elongates during the traversal). Although RNA polymerase traverses the template strand from 3' → 5', the coding (non-template) strand and newly formed RNA can also be used as reference points, so transcription can be described as occurring 5' → 3'. This produces an RNA molecule from 5' → 3', an exact copy of the coding strand (except that thymines are replaced with uracils, and the nucleotides are composed of a ribose (5-carbon) sugar whereas DNA has deoxyribose (one fewer oxygen atom) in its sugar-phosphate backbone).
mRNA transcription can involve multiple RNA polymerases on a single DNA template and multiple rounds of transcription (amplification of particular mRNA), so many mRNA molecules can be rapidly produced from a single copy of a gene. The characteristic elongation rates in prokaryotes and eukaryotes are about 10–100 nts/sec. In eukaryotes, however, nucleosomes act as major barriers to transcribing polymerases during transcription elongation. In these organisms, the pausing induced by nucleosomes can be regulated by transcription elongation factors such as TFIIS.
Elongation also involves a proofreading mechanism that can replace incorrectly incorporated bases. In eukaryotes, this may correspond with short pauses during transcription that allow appropriate RNA editing factors to bind. These pauses may be intrinsic to the RNA polymerase or due to chromatin structure.
Double-strand breaks in actively transcribed regions of DNA are repaired by homologous recombination during the S and G2 phases of the cell cycle. Since transcription enhances the accessibility of DNA to exogenous chemicals and internal metabolites that can cause recombinogenic lesions, homologous recombination of a particular DNA sequence may be strongly stimulated by transcription.
Termination
Bacteria use two different strategies for transcription termination: Rho-independent termination and Rho-dependent termination. In Rho-independent transcription termination, RNA transcription stops when the newly synthesized RNA molecule forms a G-C-rich hairpin loop followed by a run of Us. When the hairpin forms, the mechanical stress breaks the weak rU-dA bonds that fill the DNA–RNA hybrid, pulling the poly-U transcript out of the active site of the RNA polymerase and terminating transcription. In Rho-dependent termination, Rho, a protein factor, destabilizes the interaction between the template and the mRNA, thus releasing the newly synthesized mRNA from the elongation complex.
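The intrinsic terminator motif (a G-C-rich hairpin followed by a run of U's) lends itself to a toy detector. The sketch below is deliberately naive: real predictors (such as TransTermHP) score folding free energies, whereas here the stem length, GC threshold and U-run length are arbitrary illustrative choices.

```python
# A toy heuristic for the intrinsic (Rho-independent) terminator motif
# described above: a G-C-rich hairpin stem followed by a run of U's.
# Deliberately simplified; stem length, GC threshold and U-run length
# are arbitrary choices made for illustration.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def looks_like_terminator(rna: str, stem: int = 6, loop_max: int = 8,
                          u_run: int = 6) -> bool:
    rna = rna.upper()
    for i in range(len(rna) - 2 * stem - u_run):
        left = rna[i:i + stem]
        if (left.count("G") + left.count("C")) / stem < 0.6:
            continue  # the stem must be G-C rich to be stable
        for loop in range(3, loop_max + 1):
            j = i + stem + loop
            right = rna[j:j + stem]
            tail = rna[j + stem:j + stem + u_run]
            if right == reverse_complement(left) and tail == "U" * u_run:
                return True
    return False

hairpin = "AAGGCCGC" + "UUCG" + "GCGGCCUU"  # stem - loop - stem
print(looks_like_terminator("AUG" + hairpin + "UUUUUUUU"))  # True
```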
Transcription termination in eukaryotes is less well understood than in bacteria, but involves cleavage of the new transcript followed by template-independent addition of adenines at its new 3' end, in a process called polyadenylation.
Beyond termination by a terminator sequence (which is part of a gene), transcription may also need to be terminated when it encounters conditions such as DNA damage or an active replication fork. In bacteria, the Mfd ATPase can remove an RNA polymerase stalled at a lesion by prying open its clamp. It also recruits the nucleotide excision repair machinery to repair the lesion. Mfd is proposed to also resolve conflicts between DNA replication and transcription. In eukaryotes, the ATPase TTF2 helps to suppress the action of RNAP I and II during mitosis, preventing errors in chromosomal segregation. In archaea, the Eta ATPase is proposed to play a similar role.
Transcription increases susceptibility to DNA damage
Genome damage occurs with a high frequency, with estimates ranging from tens to hundreds of thousands of individual DNA lesions arising in each cell every day. The process of transcription is a major source of DNA damage, due to the formation of single-stranded DNA intermediates that are vulnerable to damage. The regulation of transcription by processes using base excision repair and/or topoisomerases to cut and remodel the genome also increases the vulnerability of DNA to damage.
Role of RNA polymerase in post-transcriptional changes in RNA
RNA polymerase plays a crucial role in all steps of transcription, including post-transcriptional changes in RNA.
The C-terminal domain (CTD) of RNA polymerase II is a flexible tail that changes its shape; this tail serves as a carrier for the splicing, capping and polyadenylation machinery.
Inhibitors
Transcription inhibitors can be used as antibiotics against, for example, pathogenic bacteria (antibacterials) and fungi (antifungals). An example of such an antibacterial is rifampicin, which inhibits bacterial transcription of DNA into mRNA by inhibiting DNA-dependent RNA polymerase through binding of its beta-subunit, while 8-hydroxyquinoline is an antifungal transcription inhibitor. The effects of histone methylation may also inhibit transcription. Triptolide, a potent bioactive natural product that inhibits mammalian transcription via inhibition of the XPB subunit of the general transcription factor TFIIH, has recently been reported as a glucose conjugate for targeting hypoxic cancer cells with increased glucose transporter production.
Endogenous inhibitors
In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes inhibited (silenced). Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional inhibition (silencing) may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally inhibited by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered production of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-produced microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
Transcription factories
Active transcription units are clustered in the nucleus, in discrete sites called transcription factories or euchromatin. Such sites can be visualized by allowing engaged polymerases to extend their transcripts in tagged precursors (Br-UTP or Br-U) and immuno-labeling the tagged nascent RNA. Transcription factories can also be localized using fluorescence in situ hybridization or marked by antibodies directed against polymerases. There are ~10,000 factories in the nucleoplasm of a HeLa cell, among which are ~8,000 polymerase II factories and ~2,000 polymerase III factories. Each polymerase II factory contains ~8 polymerases. As most active transcription units are associated with only one polymerase, each factory usually contains ~8 different transcription units. These units might be associated through promoters and/or enhancers, with loops forming a "cloud" around the factory.
History
A molecule that allows the genetic material to be realized as a protein was first hypothesized by François Jacob and Jacques Monod. Severo Ochoa won a Nobel Prize in Physiology or Medicine in 1959 for developing a process for synthesizing RNA in vitro with polynucleotide phosphorylase, which was useful for cracking the genetic code. RNA synthesis by RNA polymerase was established in vitro by several laboratories by 1965; however, the RNA synthesized by these enzymes had properties that suggested the existence of an additional factor needed to terminate transcription correctly.
Roger D. Kornberg won the 2006 Nobel Prize in Chemistry "for his studies of the molecular basis of eukaryotic transcription".
Measuring and detecting
Transcription can be measured and detected in a variety of ways:
G-Less Cassette transcription assay: measures promoter strength
Run-off transcription assay: identifies transcription start sites (TSS)
Nuclear run-on assay: measures the relative abundance of newly formed transcripts
KAS-seq: measures single-stranded DNA generated by RNA polymerases; can work with 1,000 cells.
RNase protection assay and ChIP-Chip of RNAP: detect active transcription sites
RT-PCR: measures the absolute abundance of total or nuclear RNA levels, which may however differ from transcription rates
DNA microarrays: measure the relative abundance of the global total or nuclear RNA levels; however, these may differ from transcription rates
In situ hybridization: detects the presence of a transcript
MS2 tagging: by incorporating RNA stem loops, such as MS2, into a gene, these become incorporated into newly synthesized RNA. The stem loops can then be detected using a fusion of GFP and the MS2 coat protein, which has a high-affinity, sequence-specific interaction with the MS2 stem loops. The recruitment of GFP to the site of transcription is visualized as a single fluorescent spot. This approach has revealed that transcription occurs in discontinuous bursts, or pulses (see Transcriptional bursting). With the notable exception of in situ techniques, most other methods provide cell population averages and are not capable of detecting this fundamental property of genes.
Northern blot: the traditional method, and until the advent of RNA-Seq, the most quantitative
RNA-Seq: applies next-generation sequencing techniques to sequence whole transcriptomes, which allows the measurement of relative abundance of RNA, as well as the detection of additional variations such as fusion genes, post-transcriptional edits and novel splice sites
Single cell RNA-Seq: amplifies and reads partial transcriptomes from isolated cells, allowing for detailed analyses of RNA in tissues, embryos, and cancers
Reverse transcription
Some viruses (such as HIV, the cause of AIDS) have the ability to transcribe RNA into DNA. HIV has an RNA genome that is reverse transcribed into DNA. The resulting DNA can be merged with the DNA genome of the host cell. The main enzyme responsible for synthesis of DNA from an RNA template is called reverse transcriptase.
In the case of HIV, reverse transcriptase is responsible for synthesizing a complementary DNA strand (cDNA) to the viral RNA genome. The enzyme ribonuclease H then digests the RNA strand, and reverse transcriptase synthesizes a complementary strand of DNA to form a double-helix DNA structure (cDNA). The cDNA is integrated into the host cell's genome by the enzyme integrase, which causes the host cell to generate viral proteins that reassemble into new viral particles. In HIV infection, the host T cell subsequently undergoes programmed cell death (apoptosis). In other retroviruses, however, the host cell remains intact as the virus buds out of the cell.
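As a companion to the transcription sketch above, the first reverse-transcription step can be illustrated the same way (again a hypothetical sketch; the enzymology of reverse transcriptase is far richer):

```python
# Illustrative sketch: reverse-transcribe an RNA template into a cDNA strand.
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Return the cDNA strand complementary to the given RNA template."""
    return "".join(RNA_TO_DNA[base] for base in rna)

print(reverse_transcribe("AUGCCA"))  # -> TACGGT
```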
Some eukaryotic cells contain an enzyme with reverse transcription activity called telomerase. Telomerase carries an RNA template from which it synthesizes telomeres, repeating sequences of DNA, at the ends of linear chromosomes. This is important because every time a linear chromosome is duplicated, it is shortened. With telomeres capping the ends of chromosomes, the shortening eliminates some of the non-essential repeated sequence rather than the protein-encoding DNA sequence farther from the chromosome end.
Telomerase is often activated in cancer cells to enable them to duplicate their genomes indefinitely without losing important protein-coding DNA sequence. Activation of telomerase could be part of the process that allows cancer cells to become immortal. Telomere lengthening by telomerase has been observed in about 90% of cancerous tumors in vivo, with the remaining 10% using an alternative telomere-maintenance route called ALT, or Alternative Lengthening of Telomeres.
| Biology and health sciences | Cell processes | null |
167647 | https://en.wikipedia.org/wiki/Seamount | Seamount | A seamount is a large submarine landform that rises from the ocean floor without reaching the water surface (sea level), and thus is not an island, islet, or cliff-rock. Seamounts are typically formed from extinct volcanoes that rise abruptly and are usually found rising from the seafloor to in height. They are defined by oceanographers as independent features that rise to at least above the seafloor, characteristically of conical form. The peaks are often found hundreds to thousands of meters below the surface, and are therefore considered to be within the deep sea. During their evolution over geologic time, the largest seamounts may reach the sea surface where wave action erodes the summit to form a flat surface. After they have subsided and sunk below the sea surface, such flat-top seamounts are called "guyots" or "tablemounts".
Earth's oceans contain more than 14,500 identified seamounts, of which 9,951 seamounts and 283 guyots, covering a total area of , have been mapped but only a few have been studied in detail by scientists. Seamounts and guyots are most abundant in the North Pacific Ocean, and follow a distinctive evolutionary pattern of eruption, build-up, subsidence and erosion. In recent years, several active seamounts have been observed, for example Kamaʻehuakanaloa (formerly Lōʻihi) in the Hawaiian Islands.
Because of their abundance, seamounts are one of the most common marine ecosystems in the world. Interactions between seamounts and underwater currents, as well as their elevated position in the water, attract plankton, corals, fish, and marine mammals alike. Their aggregational effect has been noted by the commercial fishing industry, and many seamounts support extensive fisheries. There are ongoing concerns about the negative impact of fishing on seamount ecosystems, and well-documented cases of stock decline, for example with the orange roughy (Hoplostethus atlanticus). As much as 95% of ecological damage is done by bottom trawling, which scrapes whole ecosystems off seamounts.
Because of their large numbers, many seamounts remain to be properly studied, and even mapped. Bathymetry and satellite altimetry are two technologies working to close the gap. There have been instances where naval vessels have collided with uncharted seamounts; for example, Muirfield Seamount is named after the ship that struck it in 1973. However, the greatest danger from seamounts is flank collapse; as seamounts age, extrusions seeping within them put pressure on their flanks, causing landslides that have the potential to generate massive tsunamis.
Geography
Seamounts can be found in every ocean basin in the world, distributed extremely widely both in space and in age. A seamount is technically defined as an isolated rise in elevation of or more from the surrounding seafloor, and with a limited summit area, of conical form. There are more than 14,500 seamounts. In addition to seamounts, there are more than 80,000 small knolls, ridges and hills less than 1,000 m in height in the world's oceans.
Most seamounts are volcanic in origin, and thus tend to be found on oceanic crust near mid-ocean ridges, mantle plumes, and island arcs. Overall, seamount and guyot coverage is greatest as a proportion of seafloor area in the North Pacific Ocean, equal to 4.39% of that ocean region. The Arctic Ocean has only 16 seamounts and no guyots, and the Mediterranean and Black seas together have only 23 seamounts and 2 guyots. The 9,951 seamounts which have been mapped cover an area of . Seamounts have an average area of , with the smallest seamounts found in the Arctic Ocean and the Mediterranean and Black Seas; whilst the largest mean seamount size, , occurs in the Indian Ocean. The largest seamount has an area of and it occurs in the North Pacific. Guyots cover a total area of and have an average area of , more than twice the average size of seamounts. Nearly 50% of guyot area and 42% of the number of guyots occur in the North Pacific Ocean, covering . The largest three guyots are all in the North Pacific: the Kuko Guyot (estimated ), Suiko Guyot (estimated ) and the Pallada Guyot (estimated ).
Grouping
Seamounts are often found in groupings or submerged archipelagos, a classic example being the Emperor Seamounts, an extension of the Hawaiian Islands. Formed millions of years ago by volcanism, they have since subsided far below sea level. This long chain of islands and seamounts extends thousands of kilometers northwest from the island of Hawaii.
There are more seamounts in the Pacific Ocean than in the Atlantic, and their distribution can be described as comprising several elongate chains of seamounts superimposed on a more or less random background distribution. Seamount chains occur in all three major ocean basins, with the Pacific having the greatest number and the most extensive seamount chains. These include the Hawaiian (Emperor), Mariana, Gilbert, Tuamotu and Austral Seamounts (and island groups) in the north Pacific and the Louisville and Sala y Gomez ridges in the southern Pacific Ocean. In the North Atlantic Ocean, the New England Seamounts extend from the eastern coast of the United States to the mid-ocean ridge. Craig and Sandwell noted that clusters of larger Atlantic seamounts tend to be associated with other evidence of hotspot activity, such as on the Walvis Ridge, Vitória-Trindade Ridge, Bermuda Islands and Cape Verde Islands. The mid-Atlantic ridge and spreading ridges in the Indian Ocean are also associated with abundant seamounts. Otherwise, seamounts tend not to form distinctive chains in the Indian and Southern Oceans, but rather their distribution appears to be more or less random.
Isolated seamounts and those without clear volcanic origins are less common; examples include Bollons Seamount, Eratosthenes Seamount, Axial Seamount and Gorringe Ridge.
If all known seamounts were collected into one area, they would make a landform the size of Europe. Their overall abundance makes them one of the most common, and least understood, marine structures and biomes on Earth, a sort of exploratory frontier.
Geology
Geochemistry and evolution
Most seamounts are built by one of two volcanic processes, although some, such as the Christmas Island Seamount Province near Australia, are more enigmatic. Volcanoes near plate boundaries and mid-ocean ridges are built by decompression melting of rock in the upper mantle. The lower-density magma rises through the crust to the surface. Volcanoes formed near or above subduction zones are created because the subducting tectonic plate adds volatiles to the overriding plate, lowering its melting point. Which of these two processes was involved in the formation of a seamount has a profound effect on its eruptive materials. Lava flows from mid-ocean ridge and plate boundary seamounts are mostly basaltic (both tholeiitic and alkalic), whereas flows from subduction zone volcanoes are mostly calc-alkaline lavas. Compared to mid-ocean ridge seamounts, subduction zone seamounts generally have more sodium, alkali, and volatile abundances, and less magnesium, resulting in more explosive, viscous eruptions.
All volcanic seamounts follow a particular pattern of growth, activity, subsidence and eventual extinction. The first stage of a seamount's evolution is its early activity, building its flanks and core up from the sea floor. This is followed by a period of intense volcanism, during which the new volcano erupts almost all (e.g. 98%) of its total magmatic volume. The seamount may even grow above sea level to become an oceanic island (for example, the 2009 eruption of Hunga Tonga). After a period of explosive activity near the ocean surface, the eruptions slowly die away. With eruptions becoming infrequent and the seamount losing its ability to maintain itself, the volcano starts to erode. After finally becoming extinct (possibly after a brief rejuvenated period), they are ground back down by the waves. Seamounts are built in a far more dynamic oceanic setting than their land counterparts, resulting in horizontal subsidence as the seamount moves with the tectonic plate towards a subduction zone. Here it is subducted under the plate margin and ultimately destroyed, but it may leave evidence of its passage by carving an indentation into the opposing wall of the subduction trench. The majority of seamounts have already completed their eruptive cycle, so access to early flows by researchers is limited by late volcanic activity.
Ocean-ridge volcanoes in particular have been observed to follow a certain pattern in terms of eruptive activity, first observed with Hawaiian seamounts but now shown to be the process followed by all seamounts of the ocean-ridge type. During the first stage the volcano erupts basalt of various types, caused by various degrees of mantle melting. In the second, most active stage of its life, ocean-ridge volcanoes erupt tholeiitic to mildly alkalic basalt as a result of a larger area melting in the mantle. This is finally capped by alkalic flows late in its eruptive history, as the link between the seamount and its source of volcanism is cut by crustal movement. Some seamounts also experience a brief "rejuvenated" period after a hiatus of 1.5 to 10 million years, the flows of which are highly alkalic and produce many xenoliths.
In recent years, geologists have confirmed that a number of seamounts are active undersea volcanoes; two examples are Kamaʻehuakanaloa (formerly Lo‘ihi) in the Hawaiian Islands and Vailulu'u in the Manu'a Group (Samoa).
Lava types
The most apparent lava flows at a seamount are the eruptive flows that cover its flanks; however, igneous intrusions, in the form of dikes and sills, are also an important part of seamount growth. The most common type of flow is pillow lava, so named for its distinctive shape. Less common are sheet flows, which are glassy and marginal, and indicative of larger-scale flows. Volcaniclastic sedimentary rocks dominate shallow-water seamounts. They are the products of the explosive activity of seamounts that are near the water's surface, and can also form from mechanical wear of existing volcanic rock.
Structure
Seamounts can form in a wide variety of tectonic settings, resulting in a very diverse range of structures. Seamounts come in a wide variety of structural shapes, from conical to flat-topped to complexly shaped. Some are built very large and very low, such as Koko Guyot and Detroit Seamount; others are built more steeply, such as Kamaʻehuakanaloa Seamount and Bowie Seamount. Some seamounts also have a carbonate or sediment cap.
Many seamounts show signs of intrusive activity, which is likely to lead to inflation, steepening of volcanic slopes, and ultimately, flank collapse. There are also several sub-classes of seamounts. The first are guyots, seamounts with a flat top. These tops must be or more below the surface of the sea; the diameters of these flat summits can be over . Knolls are isolated elevation spikes measuring less than . Lastly, pinnacles are small pillar-like seamounts.
Ecology
Ecological role of seamounts
Seamounts are exceptionally important to their biome ecologically, but their role in their environment is poorly understood. Because they project out above the surrounding sea floor, they disturb standard water flow, causing eddies and associated hydrological phenomena that ultimately result in water movement in an otherwise still ocean bottom. Currents have been measured at up to 0.9 knots, or 48 centimeters per second. Because of this upwelling, seamounts often carry above-average plankton populations; they are thus centers where the fish that feed on the plankton aggregate, in turn falling prey to further predation, making seamounts important biological hotspots.
Seamounts provide habitats and spawning grounds for these larger animals, including numerous fish. Some species, including black oreo (Allocyttus niger) and blackstripe cardinalfish (Apogon nigrofasciatus), have been shown to occur more often on seamounts than anywhere else on the ocean floor. Marine mammals, sharks, tuna, and cephalopods all congregate over seamounts to feed, as well as some species of seabirds when the features are particularly shallow.
Seamounts often project upwards into shallower zones more hospitable to sea life, providing habitats for marine species that are not found on or around the surrounding deeper ocean bottom. Because seamounts are isolated from each other, they form "undersea islands" of the same biogeographical interest as land islands. As they are formed from volcanic rock, the substrate is much harder than the surrounding sedimentary deep-sea floor. This causes a different type of fauna to exist than on the seafloor, and leads to a theoretically higher degree of endemism. However, recent research, especially centered at Davidson Seamount, suggests that seamounts may not be especially endemic, and discussions are ongoing on the effect of seamounts on endemicity. They have, however, been confidently shown to provide a habitat to species that have difficulty surviving elsewhere.
The volcanic rocks on the slopes of seamounts are heavily populated by suspension feeders, particularly corals, which capitalize on the strong currents around the seamount to supply them with food. These corals in turn host numerous other organisms in commensal relationships; brittle stars, for example, climb the coral to lift themselves off the seafloor, which helps them catch food particles, such as small zooplankton, as they drift by. This is in sharp contrast with the typical deep-sea habitat, where deposit-feeding animals rely on food they get off the ground. In tropical zones extensive coral growth results in the formation of coral atolls late in the seamount's life.
In addition, soft sediments tend to accumulate on seamounts; these are typically populated by polychaetes (annelid marine worms), oligochaetes (microdrile worms), and gastropod mollusks (sea slugs). Xenophyophores have also been found. They tend to gather small particulates and thus form beds, which alters sediment deposition and creates a habitat for smaller animals. Many seamounts also have hydrothermal vent communities, for example Suiyo and Kamaʻehuakanaloa seamounts. This is helped by geochemical exchange between the seamounts and the ocean water.
Seamounts may thus be vital stopping points for some migratory animals, specifically whales. Some recent research indicates whales may use such features as navigational aids throughout their migration. For a long time it has been surmised that many pelagic animals visit seamounts as well, to gather food, but proof of this aggregating effect has been lacking. The first demonstration of this conjecture was published in 2008.
Fishing
The effect that seamounts have on fish populations has not gone unnoticed by the commercial fishing industry. Seamounts were first fished extensively in the second half of the 20th century, after poor management practices and increased fishing pressure had seriously depleted stocks on the typical fishing ground, the continental shelf. Seamounts have been the site of targeted fishing since that time.
Nearly 80 species of fish and shellfish are commercially harvested from seamounts, including spiny lobster (Palinuridae), mackerel (Scombridae and others), red king crab (Paralithodes camtschaticus), red snapper (Lutjanus campechanus), tuna (Scombridae), orange roughy (Hoplostethus atlanticus), and perch (Percidae).
Conservation
The ecological conservation of seamounts is hampered by the simple lack of information available. Seamounts are very poorly studied: only about 350 of the estimated 100,000 seamounts in the world have been sampled, and fewer than 100 have been studied in depth. Much of this lack of information can be attributed to a lack of technology and to the daunting task of reaching these underwater structures; the technology to fully explore them has existed only for the last few decades. Before consistent conservation efforts can begin, the seamounts of the world must first be mapped, a task that is still in progress.
Overfishing is a serious threat to seamount ecological welfare. There are several well-documented cases of fishery exploitation, for example the orange roughy (Hoplostethus atlanticus) off the coasts of Australia and New Zealand and the pelagic armorhead (Pseudopentaceros richardsoni) near Japan and Russia. The reason for this is that the fishes that are targeted over seamounts are typically long-lived, slow-growing, and slow-maturing. The problem is compounded by the dangers of trawling, which damages seamount surface communities, and the fact that many seamounts are located in international waters, making proper monitoring difficult. Bottom trawling in particular is extremely devastating to seamount ecology, and is responsible for as much as 95% of ecological damage to seamounts.
Corals from seamounts are also vulnerable, as they are highly valued for making jewellery and decorative objects. Significant harvests have been produced from seamounts, often leaving coral beds depleted.
Individual nations are beginning to note the effect of fishing on seamounts, and the European Commission has agreed to fund the OASIS project, a detailed study of the effects of fishing on seamount communities in the North Atlantic. Another project working towards conservation is CenSeam, a Census of Marine Life project formed in 2005. CenSeam is intended to provide the framework needed to prioritise, integrate, expand and facilitate seamount research efforts in order to significantly reduce the unknown and build towards a global understanding of seamount ecosystems, and the roles they have in the biogeography, biodiversity, productivity and evolution of marine organisms.
Possibly the best ecologically studied seamount in the world is Davidson Seamount, with six major expeditions recording over 60,000 species observations. The contrast between the seamount and the surrounding area was well-marked. One of the primary ecological havens on the seamount is its deep sea coral garden, and many of the specimens noted were over a century old. Following the expansion of knowledge on the seamount there was extensive support to make it a marine sanctuary, a motion that was granted in 2008 as part of the Monterey Bay National Marine Sanctuary. Much of what is known about seamounts ecologically is based on observations from Davidson. Another such seamount is Bowie Seamount, which has also been declared a marine protected area by Canada for its ecological richness.
Exploration
The study of seamounts has been hindered for a long time by the lack of technology. Although seamounts have been sampled as far back as the 19th century, their depth and position meant that the technology to explore and sample seamounts in sufficient detail did not exist until the last few decades. Even with the right technology available, only a scant 1% of the total number have been explored, and sampling and information remain biased towards the top . New species are observed or collected and valuable information is obtained on almost every submersible dive at seamounts.
Before seamounts and their oceanographic impact can be fully understood, they must be mapped, a daunting task due to their sheer number. The most detailed seamount maps are provided by multibeam echosounding (sonar); however, after more than 5,000 publicly held cruises, the amount of the sea floor that has been mapped remains minuscule. Satellite altimetry is a broader alternative, albeit not as detailed, with 13,000 catalogued seamounts; however, this is still only a fraction of the total 100,000. The reason for this is that uncertainties in the technology limit recognition to features or larger. In the future, technological advances could allow for a larger and more detailed catalogue.
Observations from CryoSat-2, combined with data from other satellites, have revealed thousands of previously uncharted seamounts, with more to come as the data are interpreted.
Deep-sea mining
Seamounts are a possible future source of economically important metals. Even though the ocean makes up 70% of Earth's surface area, technological challenges have severely limited the extent of deep-sea mining. But with the constantly decreasing supply on land, some mining specialists see oceanic mining as an inevitable future, and seamounts stand out as candidates.
Seamounts are abundant, and all have metal resource potential because of various enrichment processes during the seamount's life. An example for epithermal gold mineralization on the seafloor is Conical Seamount, located about 8 km south of Lihir Island in Papua New Guinea. Conical Seamount has a basal diameter of about 2.8 km and rises about 600 m above the seafloor to a water depth of 1050 m. Grab samples from its summit contain the highest gold concentrations yet reported from the modern seafloor (max. 230 g/t Au, avg. 26 g/t, n=40). Iron-manganese, hydrothermal iron oxide, sulfide, sulfate, sulfur, hydrothermal manganese oxide, and phosphorite (the latter especially in parts of Micronesia) are all mineral resources that are deposited upon or within seamounts. However, only the first two have any potential of being targeted by mining in the next few decades.
Dangers
Some seamounts have not been mapped and thus pose a navigational danger. For instance, Muirfield Seamount is named after the ship that hit it in 1973. More recently, the submarine USS San Francisco ran into an uncharted seamount in 2005 at a speed of , sustaining serious damage and killing one seaman.
One major seamount risk is that often, in the late stages of their lives, extrusions begin to seep into the seamount. This activity leads to inflation, over-extension of the volcano's flanks, and ultimately flank collapse, leading to submarine landslides with the potential to start major tsunamis, which can be among the largest natural disasters in the world. In an illustration of the potent power of flank collapses, a summit collapse on the northern edge of Vlinder Seamount resulted in a pronounced headwall scarp and a field of debris up to away. A catastrophic collapse at Detroit Seamount flattened its whole structure extensively. Lastly, in 2004, scientists found marine fossils up the flank of Kohala mountain in Hawaii. Subsidence analysis found that at the time of their deposition, this would have been up the flank of the volcano, far too high for a normal wave to reach. The date corresponded with a massive flank collapse at the nearby Mauna Loa, and it was theorized that a massive tsunami generated by the landslide deposited the fossils.
| Physical sciences | Oceanic and coastal landforms | null |
167664 | https://en.wikipedia.org/wiki/Epsilon%20Eridani | Epsilon Eridani | Epsilon Eridani (Latinized from ε Eridani), proper name Ran, is a star in the southern constellation of Eridanus. At a declination of −9.46°, it is visible from most of Earth's surface. Located at a distance from the Sun, it has an apparent magnitude of 3.73, making it the third-closest individual star (or star system) visible to the naked eye.
The star is estimated to be less than a billion years old. This relative youth gives Epsilon Eridani a higher level of magnetic activity than the Sun, with a stellar wind 30 times as strong. The star's rotation period is 11.2 days at the equator. Epsilon Eridani is smaller and less massive than the Sun, and has a lower level of elements heavier than helium. It is a main-sequence star of spectral class K2, with an effective temperature of about , giving it an orange hue. It is a candidate member of the Ursa Major moving group of stars, which share a similar motion through the Milky Way, implying these stars shared a common origin in an open cluster.
Periodic changes in Epsilon Eridani's radial velocity have yielded evidence of a giant planet orbiting it, designated Epsilon Eridani b. The discovery of the planet was initially controversial, but most astronomers now regard the planet as confirmed. In 2015 the planet was given the proper name AEgir. The Epsilon Eridani planetary system also includes a debris disc consisting of a Kuiper belt analogue at 70 au from the star and warm dust between about 3 au and 20 au from the star. The gap in the debris disc between 20 and 70 au implies the likely existence of outer planets in the system.
As one of the nearest Sun-like stars, Epsilon Eridani has been the target of several observations in the search for extraterrestrial intelligence. Epsilon Eridani appears in science fiction stories and has been suggested as a destination for interstellar travel. From Epsilon Eridani, the Sun would appear as a star in Serpens, with an apparent magnitude of 2.4.
Nomenclature
ε Eridani, Latinised to Epsilon Eridani, is the star's Bayer designation. Despite being a relatively bright star, it was not given a proper name by early astronomers. It has several other catalogue designations. Upon its discovery, the planet was designated Epsilon Eridani b, following the usual designation system for extrasolar planets.
The planet and its host star were selected by the International Astronomical Union (IAU) as part of the NameExoWorlds competition for giving proper names to exoplanets and their host stars, for some systems that did not already have proper names. The process involved nominations by educational groups and public voting for the proposed names. In December 2015, the IAU announced the winning names were Ran for the star and AEgir for the planet. Those names had been submitted by the pupils of the 8th Grade at Mountainside Middle School in Colbert, Washington, United States. Both names derive from Norse mythology: Rán is the goddess of the sea and Ægir, her husband, is the god of the ocean.
In 2016, the IAU organised a Working Group on Star Names (WGSN) to catalogue and standardise proper names for stars. In its first bulletin of July 2016, the WGSN explicitly recognised the names of exoplanets and their host stars that were produced by the competition. Epsilon Eridani is now listed as Ran in the IAU Catalog of Star Names. Professional astronomers have mostly continued to refer to the star as Epsilon Eridani.
In Chinese, 天苑 (Tiān Yuàn), meaning Celestial Meadows, refers to an asterism consisting of ε Eridani, γ Eridani, δ Eridani, π Eridani, ζ Eridani, η Eridani, π Ceti, τ1 Eridani, τ2 Eridani, τ3 Eridani, τ4 Eridani, τ5 Eridani, τ6 Eridani, τ7 Eridani, τ8 Eridani and τ9 Eridani. Consequently, the Chinese name for ε Eridani itself is 天苑四 (Tiān Yuàn sì, the Fourth Star of Celestial Meadows).
Observational history
Cataloguing
Epsilon Eridani has been known to astronomers since at least the 2nd century AD, when Claudius Ptolemy (a Greek astronomer from Alexandria, Egypt) included it in his catalogue of more than a thousand stars. The catalogue was published as part of his astronomical treatise the Almagest. The constellation Eridanus was named by Ptolemy, and Epsilon Eridani was listed as its thirteenth star. Ptolemy's description of the star refers to a group of four stars in Eridanus: γ, π, δ and ε (10th–13th in Ptolemy's list); ε is the most western of these, and thus the first of the four in the apparent daily motion of the sky from east to west. Modern scholars of Ptolemy's catalogue designate its entry as "P 784" (in order of appearance) and "Eri 13". Ptolemy described the star's magnitude as 3.
Epsilon Eridani was included in several star catalogues of medieval Islamic astronomical treatises, which were based on Ptolemy's catalogue: in Al-Sufi's Book of Fixed Stars, published in 964, Al-Biruni's Mas'ud Canon, published in 1030, and Ulugh Beg's Zij-i Sultani, published in 1437. Al-Sufi's estimate of Epsilon Eridani's magnitude was 3. Al-Biruni quotes magnitudes from Ptolemy and Al-Sufi (for Epsilon Eridani he quotes the value 4 for both Ptolemy's and Al-Sufi's magnitudes; original values of both these magnitudes are 3). Its number in order of appearance is 786. Ulugh Beg carried out new measurements of Epsilon Eridani's coordinates in his observatory at Samarkand, and quotes magnitudes from Al-Sufi (3 for Epsilon Eridani). The modern designations of its entry in Ulugh Beg's catalogue are "U 781" and "Eri 13" (the latter is the same as Ptolemy's catalogue designation).
In 1598 Epsilon Eridani was included in Tycho Brahe's star catalogue, republished in 1627 by Johannes Kepler as part of his Rudolphine Tables. This catalogue was based on Tycho Brahe's observations of 1577–1597, including those on the island of Hven at his observatories of Uraniborg and Stjerneborg. The sequence number of Epsilon Eridani in the constellation Eridanus was 10, and its designation carried the same meaning as Ptolemy's description. Brahe assigned it magnitude 3.
Epsilon Eridani's Bayer designation was established in 1603 as part of the Uranometria, a star catalogue produced by German celestial cartographer Johann Bayer. His catalogue assigned letters from the Greek alphabet to groups of stars belonging to the same visual magnitude class in each constellation, beginning with alpha (α) for a star in the brightest class. Bayer made no attempt to arrange stars by relative brightness within each class. Thus, although Epsilon is the fifth letter in the Greek alphabet, the star is the tenth-brightest in Eridanus. In addition to the letter ε, Bayer had given it the number 13 (the same as Ptolemy's catalogue number, as were many of Bayer's numbers). Bayer assigned Epsilon Eridani magnitude 3.
In 1690 Epsilon Eridani was included in the star catalogue of Johannes Hevelius. Its sequence number in the constellation Eridanus was 14, and it was assigned magnitude 3 or 4 (sources differ). The star catalogue of English astronomer John Flamsteed, published in 1712, gave Epsilon Eridani the Flamsteed designation of 18 Eridani, because it was the eighteenth catalogued star in the constellation of Eridanus by order of increasing right ascension. In 1818 Epsilon Eridani was included in Friedrich Bessel's catalogue, based on James Bradley's observations from 1750–1762, and listed at magnitude 4. It also appeared in Nicolas Louis de Lacaille's catalogue of 398 principal stars, whose 307-star version was published in 1755 and whose full version was published in 1757 in Paris. In its 1831 edition by Francis Baily, Epsilon Eridani has the number 50. Lacaille assigned it magnitude 3.
In 1801 Epsilon Eridani was included in the Histoire Céleste Française, Joseph Jérôme Lefrançois de Lalande's catalogue of about 50,000 stars, based on his observations of 1791–1800, in which observations are arranged in time order. It contains three observations of Epsilon Eridani. In 1847, a new edition of Lalande's catalogue was published by Francis Baily, containing the majority of its observations, in which the stars were numbered in order of right ascension. Because every observation of each star was numbered and Epsilon Eridani was observed three times, it got three numbers: 6581, 6582 and 6583. (Today numbers from this catalogue are used with the prefix "Lalande", or "Lal".) Lalande assigned Epsilon Eridani magnitude 3. Also in 1801 it was included in the catalogue of Johann Bode, in which about 17,000 stars were grouped into 102 constellations and numbered (Epsilon Eridani got the number 159 in the constellation Eridanus). Bode's catalogue was based on observations of various astronomers, including Bode himself, but mostly on Lalande's and Lacaille's (for the southern sky). Bode assigned Epsilon Eridani magnitude 3. In 1814 Giuseppe Piazzi published the second edition of his star catalogue (its first edition was published in 1803), based on observations during 1792–1813, in which more than 7,000 stars were grouped into 24 hours (0–23). Epsilon Eridani is number 89 in hour 3. Piazzi assigned it magnitude 4. In 1918 Epsilon Eridani appeared in the Henry Draper Catalogue with the designation HD 22049 and a preliminary spectral classification of K0.
Detection of proximity
Based on observations between 1800 and 1880, Epsilon Eridani was found to have a large proper motion across the celestial sphere, which was estimated at three arcseconds per year (angular velocity). This movement implied it was relatively close to the Sun, making it a star of interest for the purpose of stellar parallax measurements. This process involves recording the position of Epsilon Eridani as Earth moves around the Sun, which allows a star's distance to be estimated. From 1881 to 1883, American astronomer William L. Elkin used a heliometer at the Royal Observatory at the Cape of Good Hope, South Africa, to compare the position of Epsilon Eridani with two nearby stars. From these observations, a parallax of was calculated. By 1917, observers had refined their parallax estimate to 0.317 arcseconds. The modern value of 0.3109 arcseconds is equivalent to a distance of about .
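As a reminder of the underlying relation, the distance in parsecs is simply the reciprocal of the parallax in arcseconds, so the quoted modern parallax corresponds to

$$ d = \frac{1}{p} = \frac{1}{0.3109''} \approx 3.22\ \text{pc} \approx 10.5\ \text{ly}. $$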
Circumstellar discoveries
Based on apparent changes in the position of Epsilon Eridani between 1938 and 1972, Peter van de Kamp proposed that an unseen companion with an orbital period of 25 years was causing gravitational perturbations in its position. This claim was refuted in 1993 by Wulff-Dieter Heintz and the false detection was blamed on a systematic error in the photographic plates.
Launched in 1983, the space telescope IRAS detected infrared emission from stars near the Sun, including an excess infrared emission from Epsilon Eridani. The observations indicated that a disk of fine-grained cosmic dust was orbiting the star; this debris disk has since been extensively studied. Evidence for a planetary system was discovered in 1998 by the observation of asymmetries in this dust ring. The clumping in the dust distribution could be explained by gravitational interactions with a planet orbiting just inside the dust ring.
In 1987, the detection of an orbiting planetary object was announced by Bruce Campbell, Gordon Walker and Stephenson Yang. From 1980 to 2000, a team of astronomers led by Artie P. Hatzes made radial velocity observations of Epsilon Eridani, measuring the Doppler shift of the star along the line of sight. They found evidence of a planet orbiting the star with a period of about seven years. Although there is a high level of noise in the radial velocity data due to magnetic activity in its photosphere, any periodicity caused by this magnetic activity is expected to show a strong correlation with variations in emission lines of ionized calcium (the Ca II H and K lines). Because no such correlation was found, a planetary companion was deemed the most likely cause. This discovery was supported by astrometric measurements of Epsilon Eridani made between 2001 and 2003 with the Hubble Space Telescope, which showed evidence for gravitational perturbation of Epsilon Eridani by a planet.
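For context, the radial-velocity technique rests on the classical (non-relativistic) Doppler relation, in which a spectral line of rest wavelength λ₀ observed shifted by Δλ implies a line-of-sight velocity

$$ v_r = c\,\frac{\Delta\lambda}{\lambda_0}, $$

so a planet tugging its host star back and forth produces a small periodic wavelength shift that must be separated from activity-induced jitter.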
SETI and proposed exploration
In 1960, physicists Philip Morrison and Giuseppe Cocconi proposed that extraterrestrial civilisations might be using radio signals for communication. Project Ozma, led by astronomer Frank Drake, used the Tatel Telescope to search for such signals from the nearby Sun-like stars Epsilon Eridani and Tau Ceti. The systems were observed at the emission frequency of neutral hydrogen, 1,420 MHz (21 cm). No signals of intelligent extraterrestrial origin were detected. Drake repeated the experiment in 2010, with the same negative result. Despite this lack of success, Epsilon Eridani made its way into science fiction literature and television shows for many years following news of Drake's initial experiment.
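The quoted frequency and wavelength describe the same neutral-hydrogen line; as a quick consistency check,

$$ \lambda = \frac{c}{f} = \frac{2.998 \times 10^{8}\ \text{m s}^{-1}}{1.420 \times 10^{9}\ \text{Hz}} \approx 0.211\ \text{m} \approx 21\ \text{cm}. $$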
In Habitable Planets for Man, a 1964 RAND Corporation study by space scientist Stephen H. Dole, the probability of a habitable planet being in orbit around Epsilon Eridani was estimated at 3.3%. Among the known nearby stars, it was listed among the 14 stars thought most likely to have a habitable planet.
William I. McLaughlin proposed a new strategy in the search for extraterrestrial intelligence (SETI) in 1977. He suggested that widely observable events such as nova explosions might be used by intelligent extraterrestrials to synchronise the transmission and reception of their signals. This idea was tested by the National Radio Astronomy Observatory in 1988, which used outbursts of Nova Cygni 1975 as the timer. Fifteen days of observation showed no anomalous radio signals coming from Epsilon Eridani.
Because of the proximity and Sun-like properties of Epsilon Eridani, in 1985 physicist and author Robert L. Forward considered the system as a plausible target for interstellar travel. The following year, the British Interplanetary Society suggested Epsilon Eridani as one of the targets in its Project Daedalus study. The system has continued to be among the targets of such proposals, such as Project Icarus in 2011.
Based on its nearby location, Epsilon Eridani was among the target stars for Project Phoenix, a 1995 microwave survey for signals from extraterrestrial intelligence. The project had checked about 800 stars by 2004 but had not yet detected any signals.
Properties
At a distance of , Epsilon Eridani is the 13th-nearest known star (and ninth nearest solitary star or stellar system) to the Sun as of 2014. Its proximity makes it one of the most studied stars of its spectral type. Epsilon Eridani is located in the northern part of the constellation Eridanus, about 3° east of the slightly brighter star Delta Eridani. With a declination of −9.46°, Epsilon Eridani can be viewed from much of Earth's surface, at suitable times of year. Only to the north of latitude 80° N is it permanently hidden below the horizon. The apparent magnitude of 3.73 can make it difficult to observe from an urban area with the unaided eye, because the night skies over cities are obscured by light pollution.
Epsilon Eridani has an estimated mass of 0.82 solar masses and a radius of 0.738 solar radii. It shines with a luminosity of only 0.34 solar luminosities. The estimated effective temperature is 5,084 K. With a stellar classification of K2 V, it is the second-nearest K-type main-sequence star (after Alpha Centauri B). Since 1943 the spectrum of Epsilon Eridani has served as one of the stable anchor points by which other stars are classified. Its metallicity, the fraction of elements heavier than helium, is slightly lower than the Sun's. In Epsilon Eridani's chromosphere, a region of the outer atmosphere just above the light emitting photosphere, the abundance of iron is estimated at 74% of the Sun's value. The proportion of lithium in the atmosphere is five times less than that in the Sun.
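These figures are mutually consistent under the Stefan–Boltzmann law; taking the solar effective temperature as approximately 5,772 K,

$$ \frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T_{\text{eff}}}{T_{\text{eff},\odot}}\right)^{4} = (0.738)^{2} \left(\frac{5084}{5772}\right)^{4} \approx 0.33, $$

in agreement with the quoted luminosity of 0.34 solar luminosities to within rounding.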
Epsilon Eridani's K-type classification indicates that the spectrum has relatively weak absorption lines from absorption by hydrogen (Balmer lines) but strong lines of neutral atoms and singly ionized calcium (Ca II). The luminosity class V (dwarf) is assigned to stars that are undergoing thermonuclear fusion of hydrogen in their core. For a K-type main-sequence star, this fusion is dominated by the proton–proton chain reaction, in which a series of reactions effectively combines four hydrogen nuclei to form a helium nucleus. The energy released by fusion is transported outward from the core through radiation, which results in no net motion of the surrounding plasma. Outside of this region, in the envelope, energy is carried to the photosphere by plasma convection, where it then radiates into space.
Magnetic activity
Epsilon Eridani has a higher level of magnetic activity than the Sun, and thus the outer parts of its atmosphere (the chromosphere and corona) are more dynamic. The average magnetic field strength of Epsilon Eridani across the entire surface is , which is more than forty times greater than the magnetic-field strength in the Sun's photosphere. The magnetic properties can be modelled by assuming that regions with a magnetic flux of about 0.14 T randomly cover approximately 9% of the photosphere, whereas the remainder of the surface is free of magnetic fields. The overall magnetic activity of Epsilon Eridani shows co-existing and year activity cycles. Assuming that its radius does not change over these intervals, the long-term variation in activity level appears to produce a temperature variation of 15 K, which corresponds to a variation in visual magnitude (V) of 0.014.
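The quoted 15 K temperature variation and 0.014-magnitude variation are consistent if the emitted flux scales as the fourth power of the temperature:

$$ |\Delta V| \approx 2.5 \log_{10}\!\left[\left(\frac{5084 + 15}{5084}\right)^{4}\right] \approx 0.013. $$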
The magnetic field on the surface of Epsilon Eridani causes variations in the hydrodynamic behaviour of the photosphere. This results in greater jitter during measurements of its radial velocity. The radial-velocity variations measured over a 20-year period are much larger than the measurement uncertainty, which makes the interpretation of periodicities in the radial velocity of Epsilon Eridani, such as those caused by an orbiting planet, more difficult.
Epsilon Eridani is classified as a BY Draconis variable because it has regions of higher magnetic activity that move into and out of the line of sight as it rotates. Measurement of this rotational modulation suggests that its equatorial region rotates with an average period of 11.2 days, which is less than half of the rotation period of the Sun. Observations have shown that Epsilon Eridani varies by as much as 0.050 in V magnitude due to starspots and other short-term magnetic activity. Photometry has also shown that the surface of Epsilon Eridani, like the Sun's, undergoes differential rotation, i.e., the rotation period at the equator differs from that at high latitudes. The measured periods range from 10.8 to 12.3 days. The axial tilt of Epsilon Eridani toward the line of sight from Earth is highly uncertain: estimates range from 24° to 72°.
The high levels of chromospheric activity, strong magnetic field, and relatively fast rotation rate of Epsilon Eridani are characteristic of a young star. Most estimates of the age of Epsilon Eridani place it in the range from 200 million to 800 million years. The low abundance of heavy elements in the chromosphere of Epsilon Eridani usually indicates an older star, because the interstellar medium (out of which stars form) is steadily enriched by heavier elements produced by older generations of stars. This anomaly might be caused by a diffusion process that has transported some of the heavier elements out of the photosphere and into a region below Epsilon Eridani's convection zone.
The X-ray luminosity of Epsilon Eridani is about . It is more luminous in X-rays than the Sun at peak activity. The source of this strong X-ray emission is Epsilon Eridani's hot corona. Epsilon Eridani's corona appears larger and hotter than the Sun's, with a temperature of , measured from observation of the corona's ultraviolet and X-ray emission. It displays a cyclical variation in X-ray emission that is consistent with the magnetic activity cycle.
The stellar wind emitted by Epsilon Eridani expands until it collides with the surrounding interstellar medium of diffuse gas and dust, resulting in a bubble of heated hydrogen gas (an astrosphere, the equivalent of the heliosphere that surrounds the Sun). The absorption spectrum from this gas has been measured with the Hubble Space Telescope, allowing the properties of the stellar wind to be estimated. Epsilon Eridani's hot corona results in a stellar-wind mass-loss rate 30 times higher than the Sun's. This stellar wind generates the astrosphere, which spans about and contains a bow shock that lies from Epsilon Eridani. At its estimated distance from Earth, this astrosphere spans 42 arcminutes, which is wider than the apparent size of the full Moon.
Kinematics
Epsilon Eridani has a high proper motion, moving −0.976 arcseconds per year in right ascension (the celestial equivalent of longitude) and 0.018 arcseconds per year in declination (celestial latitude), for a combined total of 0.976 arcseconds per year. The star has a radial velocity of (away from the Sun). The space velocity components of Epsilon Eridani in the galactic co-ordinate system imply that it is travelling within the Milky Way at a mean galactocentric distance of 28.7 kly (8.79 kiloparsecs) from the core along an orbit that has an eccentricity of 0.09. The position and velocity of Epsilon Eridani indicate that it may be a member of the Ursa Major Moving Group, whose members share a common motion through space. This behaviour suggests that the moving group originated in an open cluster that has since diffused. The estimated age of this group lies within the range of the age estimates for Epsilon Eridani.
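The combined proper motion follows from adding the two components in quadrature:

$$ \mu = \sqrt{\mu_\alpha^{2} + \mu_\delta^{2}} = \sqrt{(0.976)^{2} + (0.018)^{2}} \approx 0.976\ \text{arcsec yr}^{-1}. $$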
During the past million years, three stars are believed to have come within of Epsilon Eridani. The most recent and closest of these encounters was with Kapteyn's Star, which made its closest approach roughly 12,500 years ago. Two more distant encounters were with Sirius and Ross 614. None of these encounters are thought to have been close enough to affect the circumstellar disk orbiting Epsilon Eridani.
Epsilon Eridani made its closest approach to the Sun about 105,000 years ago, when they were separated by . Based upon a simulation of close encounters with nearby stars, the binary star system Luyten 726-8, which includes the variable star UV Ceti, will encounter Epsilon Eridani in approximately 31,500 years at a minimum distance of about 0.9 ly (0.29 parsecs). They will be less than 1 ly (0.3 parsecs) apart for about 4,600 years. If Epsilon Eridani has an Oort cloud, Luyten 726-8 could gravitationally perturb some of its comets with long orbital periods.
Planetary system
Debris disc
An infrared excess around Epsilon Eridani was detected by IRAS, indicating the presence of circumstellar dust. Observations with the James Clerk Maxwell Telescope (JCMT) at a wavelength of 850 μm show an extended flux of radiation out to an angular radius of 35 arcseconds around Epsilon Eridani, resolving the debris disc for the first time. Higher-resolution images have since been taken with the Atacama Large Millimeter Array, showing that the belt is located 70 au from the star with a width of just 11 au. The disc is inclined 33.7° from face-on, making it appear elliptical.
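The reported inclination accounts for the elliptical appearance: for an idealized thin circular ring tilted by an angle i from face-on, the apparent ratio of minor to major axis is

$$ \frac{b}{a} = \cos i = \cos(33.7^{\circ}) \approx 0.83. $$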
Dust and possibly water ice from this belt migrates inward because of drag from the stellar wind and a process by which stellar radiation causes dust grains to slowly spiral toward Epsilon Eridani, known as the Poynting–Robertson effect. At the same time, these dust particles can be destroyed through mutual collisions. The time scale for all of the dust in the disk to be cleared away by these processes is less than Epsilon Eridani's estimated age. Hence, the current dust disk must have been created by collisions or other effects of larger parent bodies, and the disk represents a late stage in the planet-formation process. It would have required collisions between 11 Earth masses' worth of parent bodies to have maintained the disk in its current state over its estimated age.
The disk contains an estimated mass of dust equal to a sixth of the mass of the Moon, with individual dust grains exceeding 3.5 μm in size at a temperature of about 55 K. This dust is being generated by the collision of comets, which range up to 10 to 30 km in diameter and have a combined mass of 5 to 9 times that of Earth. This is similar to the estimated 10 Earth masses in the primordial Kuiper belt. The disk around Epsilon Eridani contains less than of carbon monoxide. This low level suggests a paucity of volatile-bearing comets and icy planetesimals compared to the Kuiper belt.
The JCMT images show signs of clumpy structure in the belt that may be explained by gravitational perturbation from a planet, dubbed Epsilon Eridani c. The clumps in the dust are theorised to occur at orbits that have an integer resonance with the orbit of the suspected planet. For example, the region of the disk that completes two orbits for every three orbits of a planet is in a 3:2 orbital resonance. The planet proposed to cause these perturbations is predicted to have a semimajor axis of between 40 and 50 au. However, the brightest clumps have since been identified as background sources and the existence of the remaining clumps remains debated.
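As a rough consistency check (derived here from Kepler's third law, not a figure given in the source), a disk region in a 3:2 resonance orbits with 1.5 times the planet's period, and therefore at about 1.31 times its semimajor axis:

a_\text{res} = \left(\tfrac{3}{2}\right)^{2/3} a_\text{planet} \approx 1.31 \times (40\text{–}50\ \text{au}) \approx 52\text{–}66\ \text{au}

which places such resonant clumps broadly between the proposed planet and the observed belt at 70 au.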
Dust is also present closer to the star. Observations from NASA's Spitzer Space Telescope suggest that Epsilon Eridani actually has two asteroid belts and a cloud of exozodiacal dust. The latter is an analogue of the zodiacal dust that occupies the plane of the Solar System. One belt sits at approximately the same position as the one in the Solar System, orbiting at a distance of from Epsilon Eridani, and consists of silicate grains with a diameter of 3 μm and a combined mass of about 10¹⁸ kg. If the planet Epsilon Eridani b exists, then this belt is unlikely to have had a source outside the orbit of the planet, so the dust may have been created by fragmentation and cratering of larger bodies such as asteroids. The second, denser belt, most likely also populated by asteroids, lies between the first belt and the outer comet disk. The structure of the belts and the dust disk suggests that more than two planets in the Epsilon Eridani system are needed to maintain this configuration.
In an alternative scenario, the exozodiacal dust may be generated in the outer belt. This dust is then transported inward past the orbit of Epsilon Eridani b. When collisions between the dust grains are taken into account, the dust will reproduce the observed infrared spectrum and brightness. Outside the radius of ice sublimation, located beyond 10 au from Epsilon Eridani where the temperatures fall below 100 K, the best fit to the observations occurs when a mix of ice and silicate dust is assumed. Inside this radius, the dust must consist of silicate grains that lack volatiles.
The inner region around Epsilon Eridani, from a radius of 2.5 au inward, appears to be clear of dust down to the detection limit of the 6.5 m MMT telescope. Grains of dust in this region are efficiently removed by drag from the stellar wind, while the presence of a planetary system may also help keep this area clear of debris. Still, this does not preclude the possibility that an inner asteroid belt may be present with a combined mass no greater than that of the asteroid belt in the Solar System.
Long-period planets
As one of the nearest Sun-like stars, Epsilon Eridani has been the target of many attempts to search for planetary companions. Its chromospheric activity and variability mean that finding planets with the radial velocity method is difficult, because the stellar activity may create signals that mimic the presence of planets. Searches for exoplanets around Epsilon Eridani with direct imaging have been unsuccessful.
Infrared observation has shown there are no bodies of three or more Jupiter masses in this system, out to at least a distance of 500 au from the host star. Planets with masses and temperatures similar to Jupiter's should be detectable by Spitzer at distances beyond 80 au. One roughly Jupiter-sized long-period planet has been detected and characterized by both the radial velocity and astrometry methods. Planets more than 150% as massive as Jupiter can be ruled out at the inner edge of the debris disk at 30–35 au.
Planet b (AEgir)
Referred to as Epsilon Eridani b, this planet was announced in 2000, but the discovery remained controversial over roughly the next two decades. A comprehensive study in 2008 called the detection "tentative" and described the proposed planet as "long suspected but still unconfirmed", though many astronomers considered the evidence sufficiently compelling to regard the discovery as confirmed. The discovery was questioned in 2013 because a search program at La Silla Observatory failed to confirm the planet's existence. Further studies since 2018 have gradually reaffirmed the planet's existence through a combination of radial velocity and astrometry.
Published sources remain in disagreement as to the planet's basic parameters. Recent values for its orbital period range from 7.3 to 7.6 years, estimates of the size of its elliptical orbit—the semimajor axis—range from 3.38 au to 3.53 au, and approximations of its orbital eccentricity range from 0.055 to 0.26.
Initially, the planet's mass was unknown, but a lower limit could be estimated based on the orbital displacement of Epsilon Eridani. Only the component of the displacement along the line of sight to Earth was known, which yields a value for the formula m sin i, where m is the mass of the planet and i is the orbital inclination. Estimates for the value of m sin i ranged from 0.60 Jupiter masses to 1.06 Jupiter masses, which sets the lower limit for the mass of the planet (because the sine function has a maximum value of 1). Taking m sin i in the middle of that range at 0.78, and estimating the inclination at 30° as was suggested by Hubble astrometry, yields a value of about 1.56 Jupiter masses for the planet's mass. More recent astrometric studies have found lower masses, ranging from 0.63 to 0.78 Jupiter masses.
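The 1.56 figure quoted above follows directly from the definition:

m = \frac{m \sin i}{\sin i} = \frac{0.78\ M_\mathrm{J}}{\sin 30^\circ} = \frac{0.78\ M_\mathrm{J}}{0.5} \approx 1.56\ M_\mathrm{J}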
Of all the measured parameters for this planet, the value for orbital eccentricity is the most uncertain. The eccentricity of 0.7 suggested by some older studies is inconsistent with the presence of the proposed asteroid belt at a distance of 3 au. If the eccentricity were this high, the planet would pass through the asteroid belt and clear it out within about ten thousand years. If the belt has existed for longer than this period, which appears likely, it imposes an upper limit on Epsilon Eridani b's eccentricity of about 0.10–0.15. If the dust disk is instead being generated from the outer debris disk, rather than from collisions in an asteroid belt, then no constraints on the planet's orbital eccentricity are needed to explain the dust distribution.
Potential habitability
Epsilon Eridani is a target for planet-finding programs because it has properties that would allow an Earth-like planet to form. Although this system was not chosen as a primary candidate for the now-canceled Terrestrial Planet Finder, it was a target star for NASA's proposed Space Interferometry Mission to search for Earth-sized planets. The proximity, Sun-like properties and suspected planets of Epsilon Eridani have also made it the subject of multiple studies on whether an interstellar probe could be sent there.
The orbital radius at which the stellar flux from Epsilon Eridani matches the solar constant—where the emission matches the Sun's output at the orbital distance of the Earth—is 0.61 au. That is within the maximum habitable zone of a conjectured Earth-like planet orbiting Epsilon Eridani, which currently stretches from about 0.5 to 1.0 au. As Epsilon Eridani ages over a period of 20 billion years, the net luminosity will increase, causing this zone to slowly expand outward to about 0.6–1.4 au. The presence of a large planet with a highly elliptical orbit in proximity to Epsilon Eridani's habitable zone reduces the likelihood of a terrestrial planet having a stable orbit within the habitable zone.
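The 0.61 au figure is consistent with the inverse-square scaling of stellar flux: the distance at which a star's flux equals the solar constant is the square root of its luminosity in solar units. Assuming a luminosity of roughly 0.37 L_⊙ for Epsilon Eridani (a representative published value, not one given in this article):

r = \sqrt{\frac{L_\star}{L_\odot}}\ \text{au} \approx \sqrt{0.37}\ \text{au} \approx 0.61\ \text{au}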
A young star such as Epsilon Eridani can produce large amounts of ultraviolet radiation that may be harmful to life, though as a cooler star than the Sun it produces less ultraviolet radiation to begin with. The orbital radius where the UV flux matches that on the early Earth lies at just under 0.5 au. Because that is slightly closer to the star than the habitable zone, some researchers have concluded that not enough ultraviolet energy reaches the habitable zone for life ever to have started around the young Epsilon Eridani.
| Physical sciences | Notable stars | Astronomy |
167667 | https://en.wikipedia.org/wiki/Eurofighter%20Typhoon | Eurofighter Typhoon | The Eurofighter Typhoon is a European multinational twin-engine, supersonic, canard delta wing, multirole fighter. The Typhoon was designed originally as an air-superiority fighter and is manufactured by a consortium of Airbus, BAE Systems and Leonardo that conducts the majority of the project through a joint holding company, Eurofighter Jagdflugzeug GmbH. The NATO Eurofighter and Tornado Management Agency, representing the UK, Germany, Italy and Spain, manages the project and is the prime customer.
The aircraft's development effectively began in 1983 with the Future European Fighter Aircraft programme, a multinational collaboration among the UK, Germany, France, Italy and Spain. Previously, Germany, Italy and the UK had jointly developed and deployed the Panavia Tornado combat aircraft and desired to collaborate on a new project, with additional participating EU nations. However, disagreements over design authority and operational requirements led France to leave the consortium to develop the Dassault Rafale independently. A technology demonstration aircraft, the British Aerospace EAP, first flew on 6 August 1986; a Eurofighter prototype made its maiden flight on 27 March 1994. The aircraft's name, Typhoon, was adopted in September 1998 and the first production contracts were also signed that year.
The sudden end of the Cold War reduced European demand for fighter aircraft, led to debate over the aircraft's cost and work share, and protracted the Typhoon's development. The Typhoon entered operational service in 2003 and is now in service with the air forces of Austria, Italy, Germany, the United Kingdom, Spain, Saudi Arabia and Oman. Kuwait and Qatar have also ordered the aircraft, bringing the procurement total to 680 aircraft.
The Eurofighter Typhoon is a highly agile aircraft, designed to be an effective dogfighter in combat. Later production aircraft have been increasingly well equipped to undertake air-to-surface strike missions and to be compatible with a growing number of different armaments and equipment, including Storm Shadow, Brimstone and Marte ER missiles. The Typhoon had its combat debut during the 2011 military intervention in Libya with the UK's Royal Air Force (RAF) and the Italian Air Force, performing aerial reconnaissance and ground-strike missions. The type has also taken primary responsibility for air-defence duties for the majority of customer nations.
Development
Origins
In the UK, work commenced as early as 1971 on the development of a manoeuvrable, tactical aircraft to replace the SEPECAT Jaguar (which was then about to enter service with the RAF). This work soon expanded to include an air-superiority capability. A specification issued in 1972, Air Staff Target 403 (AST 403), led in the late 1970s to the Hawker P.96, an unbuilt design with a relatively conventional planform, including a separate tail structure.
Simultaneously, in West Germany, the requirement for a new fighter had resulted in competition between Dornier, VFW-Fokker and Messerschmitt-Bölkow-Blohm (MBB) for a future Luftwaffe contract known as Taktisches Kampfflugzeug 90 ("Tactical Combat Aircraft 90"; TKF-90). Dornier collaborated with Northrop in the US on an acclaimed but unsuccessful design. MBB was successful, with a design including a cranked delta wing, close-coupled-canard controls, and artificial stability.
In 1979, MBB and British Aerospace (BAe) presented a formal proposal to their respective governments for a collaboration, to be known as the European Collaborative Fighter, or European Combat Fighter (ECF). In October 1979, the French firm Dassault joined the ECF project. It was at this stage of development that the Eurofighter name was first attached to the aircraft. However, the development of three separate prototypes continued: MBB continued to refine its TKF-90 concept, and Dassault produced a design known as the ACX.
In the meantime, while the P.96 would have met the original UK specification, it had been cancelled because it was considered to offer little potential for future upgrades and redevelopment. In addition, there was a feeling within the UK aircraft industry that the P.96 would have been too similar to the McDonnell Douglas F/A-18 Hornet, which was then known to be at an advanced stage of development. The P.96 would not have been available until long after the Hornet, which would therefore likely have met and closed off most potential export markets for the P.96. BAe then produced two new proposals: the P.106B, a single-engined lightweight fighter, superficially resembling the JAS 39 Gripen, and the twin-engine P.110. The RAF rejected the P.106 concept on the grounds it had "half the effectiveness of the two-engined aircraft at two-thirds of the cost".
The ECF project collapsed in 1981 for several reasons, including differing requirements, Dassault's insistence on "design leadership" and the British preference for a new version of the RB199 to power the aircraft versus the French preference for the new Snecma M88.
Consequently, the Panavia partners (MBB, BAe and Aeritalia) launched the Agile Combat Aircraft (ACA) programme in April 1982. BAe designers agreed with the overall configuration of the proposed MBB TKF-90, although they rejected some of its more ambitious features such as engine vectoring nozzles and vented trailing edge controls – a form of boundary layer control. The ACA, like the BAe P.110, had a cranked delta wing, canards and a twin tail. One major external difference was the replacement of the side-mounted engine intakes with a chin intake. The ACA was to be powered by a modified version of the RB199. The German and Italian governments withdrew funding, and the UK Ministry of Defence (MoD) agreed to fund 50% of the cost with the remaining 50% to be provided by industry. MBB and Aeritalia signed up and it was agreed that the aircraft would be produced at two sites: BAe Warton and an MBB factory in Germany. In May 1983, BAe announced a contract with the MoD for the development and production of an ACA demonstrator, the Experimental Aircraft Programme.
In 1983, Italy, Germany, France, the UK and Spain launched the "Future European Fighter Aircraft" (FEFA) programme. The aircraft was to have short take off and landing (STOL) and beyond visual range (BVR) capabilities. In 1984, France reiterated its requirement for a carrier-capable version and demanded a leading role. Italy, West Germany and the UK opted out and established a new EFA programme. In Turin on 2 August 1985, West Germany, the UK and Italy agreed to go ahead with the Eurofighter and confirmed that France, along with Spain, had chosen not to proceed as a member of the project. Despite pressure from France, Spain rejoined the Eurofighter project in early September 1985. France officially withdrew from the project to pursue its own ACX project, which was to become the Dassault Rafale.
By 1986, the programme's cost had reached £180 million. When the EAP programme had started, the cost was supposed to be equally shared by government and industry, but the West German and Italian governments wavered on the agreement and the British government and private finance had to provide £100 million to keep the programme from ending. In April 1986, the British Aerospace EAP was rolled out at BAe Warton. The EAP first flew on 6 August 1986. The Eurofighter bears a strong resemblance to the EAP. Design work continued over the next five years using data from the EAP. Initial requirements were: UK: 250 aircraft, Germany: 250, Italy: 165 and Spain: 100. The share of the production work was divided among the countries in proportion to their projected procurement – BAe (33%), DASA (33%), Aeritalia (21%), and Construcciones Aeronáuticas SA (CASA) (13%).
The Munich-based Eurofighter Jagdflugzeug GmbH was established in 1986 to manage development of the project, and EuroJet Turbo GmbH, an alliance of Rolls-Royce, MTU Aero Engines, FiatAvio (now Avio) and ITP, was established to develop the EJ200 engine. The aircraft was known as the Eurofighter EFA from the late 1980s until it was renamed EF 2000 in 1992.
By 1990, the selection of the aircraft's radar had become a major stumbling-block. The UK, Italy and Spain supported the Ferranti Defence Systems-led ECR-90, while Germany preferred the APG-65-based MSD2000 (a collaboration between Hughes, AEG and GEC-Marconi). An agreement was reached after UK Defence Secretary Tom King assured his West German counterpart Gerhard Stoltenberg that the British government would approve the project and allow the GEC subsidiary Marconi Electronic Systems to acquire Ferranti Defence Systems from its parent, the Ferranti Group, which was in financial and legal difficulties. GEC thus withdrew its support for the MSD2000.
Delays
The financial burdens placed on Germany by reunification caused Helmut Kohl to make an election promise to cancel the Eurofighter. In early to mid-1991 German Defence Minister Volker Rühe sought to withdraw Germany from the project in favour of using Eurofighter technology in a cheaper, lighter plane. Because of the amount of money already spent on development, the number of jobs dependent on the project, and the binding commitments on each partner government, Kohl was unable to withdraw; "Rühe's predecessors had locked themselves into the project by a punitive penalty system of their own devising."
In 1995 concerns over workshare appeared. Since the formation of Eurofighter the workshare split had been agreed at 33/33/21/13 (United Kingdom/Germany/Italy/Spain) based on the number of units being ordered by each contributing nation, but all the nations then reduced their orders. The UK cut its orders from 250 to 232, Germany from 250 to 140, Italy from 165 to 121 and Spain from 100 to 87. At these order levels the workshare split should have been 39/24/22/15 (UK/Germany/Italy/Spain); however, Germany was unwilling to give up such a large amount of work. In January 1996, after much negotiation between German and UK partners, a compromise was reached whereby Germany would purchase another 40 aircraft. The workshare split was therefore UK 37.42%, Germany 29.03%, Italy 19.52% and Spain 14.03%.
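The revised percentages correspond exactly to each nation's share of the 620 aircraft then on order (232 + 180 + 121 + 87 = 620):

\frac{232}{620} = 37.42\%, \quad \frac{180}{620} = 29.03\%, \quad \frac{121}{620} = 19.52\%, \quad \frac{87}{620} = 14.03\%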
At the 1996 Farnborough Airshow the UK announced funding for the construction phase of the project. On 22 December 1997 the defence ministers of the four partner nations signed the contract for production of the Eurofighter.
Testing
The maiden flight of the Eurofighter prototype took place in Bavaria on 27 March 1994, flown by DASA chief test pilot Peter Weger. In December 2004, Eurofighter Typhoon IPA4 began three months of Cold Environmental Trials (CET) at Vidsel Air Base in Sweden, the purpose of which was to verify the operational behaviour of the aircraft and its systems in temperatures between −25 and −31 °C. The maiden flight of Instrumented Production Aircraft 7 (IPA7), the first fully equipped Tranche 2 aircraft, took place from EADS' Manching airfield on 16 January 2008.
Procurement, production and costs
The first production contract was signed on 30 January 1998 between Eurofighter GmbH, Eurojet and NETMA. The procurement totals were as follows: the UK 232, Germany 180, Italy 121, and Spain 87. Production was again allotted according to procurement: BAe (37.42%), DASA (29.03%), Aeritalia (19.52%), and CASA (14.03%).
On 2 September 1998, a naming ceremony was held at Farnborough, United Kingdom. This saw the Typhoon name formally adopted, initially for export aircraft only. The name continues the storm theme started by the Panavia Tornado. The choice was reportedly resisted by Germany, as the Hawker Typhoon was a fighter-bomber aircraft used by the RAF during the Second World War to attack German targets. The name "Spitfire II" (after the famous British Second World War fighter, the Supermarine Spitfire) had also been considered and rejected for the same reason early in the development programme. In September 1998, contracts were signed for production of 148 Tranche 1 aircraft and procurement of long-lead-time items for Tranche 2 aircraft. In March 2008, the final Tranche 1 aircraft was delivered to the German Air Force. On 21 October 2008, the RAF's first two of 91 Tranche 2 aircraft were delivered to RAF Coningsby.
In July 2009, after almost two years of negotiations, the planned Tranche 3 purchase was split into two parts and the Tranche 3A contract was signed by the partner nations. The "Tranche 3B" order did not go ahead.
The Eurofighter Typhoon is unique in modern combat aircraft in that there are four separate assembly lines. Each partner company assembles its own national aircraft, but builds the same parts for all aircraft (including exports); Premium AEROTEC (main centre fuselage), EADS CASA (right wing, leading edge slats), BAE Systems (BAE) (front fuselage (including foreplanes), canopy, dorsal spine, tail fin, inboard flaperons, rear fuselage section) and Leonardo (left wing, outboard flaperons, rear fuselage sections).
Production is divided into three tranches (see table below). Tranches are a production/funding distinction and do not imply an incremental increase in capability with each tranche. Tranche 3 aircraft are based on late Tranche 2 aircraft with improvements added; Tranche 3 was split into A and B parts. Tranches were further divided into production standard/capability blocks and funding/procurement batches, though these did not coincide and are not the same thing; e.g., the Eurofighter designated FGR4 by the RAF is a Tranche 1, Block 5 aircraft. Batch 1 covered Block 1, but Batch 2 covered Blocks 2, 2B and 5. On 25 May 2011 the 100th production aircraft, ZK315, rolled off the production line at Warton.
In 1985 the estimated cost of 250 UK aircraft was £7 billion. By 1997 the estimated cost was £17 billion; by 2003, £20 billion, and the in-service date (2003, defined as the date of delivery of the first aircraft to the RAF) was 54 months late. After 2003, the MoD refused to release updated cost estimates on the grounds of commercial sensitivity. However, in 2011, the National Audit Office estimated the UK's "assessment, development, production and upgrade costs eventually hit £22.9 billion" and total programme costs would reach £37 billion.
By 2007, Germany estimated the system cost (aircraft and training, plus spare parts) at €120 million and said that it was continually increasing. On 17 June 2009, Germany ordered 31 Tranche 3A aircraft for €2.8 billion, leading to a system cost of €90 million per aircraft. The UK's Committee of Public Accounts reported that mismanagement of the project had helped increase the cost of each aircraft by seventy-five percent. The Spanish MoD put the cost of its Typhoon project up to December 2010 at €11.718 billion, up from an original €9.255 billion, implying a system cost for its 73 aircraft of €160 million each.
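Both per-aircraft figures follow directly from the quoted order values:

\frac{€2.8\ \text{billion}}{31\ \text{aircraft}} \approx €90\ \text{million}, \qquad \frac{€11.718\ \text{billion}}{73\ \text{aircraft}} \approx €160\ \text{million}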
On 31 March 2009, a Eurofighter Typhoon for the first time fired an AIM-120 AMRAAM with its own radar in passive mode; the necessary target data for the missile was acquired by the radar of a second Eurofighter Typhoon and transmitted using the Multifunctional Information Distribution System (MIDS). The entire Typhoon fleet passed the 500,000 flying hours milestone in 2018. As of August 2019, a total of 623 orders had been received.
In July 2016, the ten-year Typhoon Total Availability Enterprise (TyTAN) support deal between the RAF and industry partners BAE and Leonardo was announced, which aims to reduce the Typhoon's per-hour operating cost by 30 to 40 percent. This should equate to a saving of at least £550 million ($712 million), which "will be recycled into the programme" and, according to BAE, will result in the Typhoon having a per-hour operating cost "equivalent to a F-16". By 2022 it was estimated that savings would be "over £500 million".
Upgrades
In 2000, the UK selected the Meteor from MBDA as the long-range air-to-air missile armament for its Typhoons, with an in-service date (ISD) of December 2011. In December 2002, France, Germany, Spain and Sweden joined the British in a $1.9bn contract for Meteor on the Typhoon, the Dassault Rafale and the Saab Gripen. The protracted contract negotiations pushed the ISD to August 2012, and it was further put back by Eurofighter's failure to make trials aircraft available to the Meteor partners. In 2014 the "second element of the Phase 1 Enhancements package known as 'P1Eb'" was announced, allowing "Typhoon to realise both its air-to-air and air-to-ground capability to full effect".
In 2011 Flight International reported that budgetary pressures being encountered by the four original partner nations were limiting upgrades. For example, the four original partner nations were reluctant at that stage to fund enhancements that extend the aircraft's air-to-ground capability, such as integration of the MBDA Storm Shadow cruise missile.
Tranche 3 aircraft ESM/ECM enhancements have focused on improving radiated jamming power with antenna modifications, while EuroDASS is reported to offer a range of new capabilities, including the addition of a digital receiver, extending band coverage to low frequencies (VHF/UHF) and introducing an interferometric receiver with extremely precise geolocation functionality. On the jamming side, EuroDASS is looking at low-band (VHF/UHF) jamming, more capable antennae and new ECM techniques, while protection against missiles is to be enhanced through a new passive MWS in addition to the active devices already on board the aircraft. Further self-protection support will come from the new AESA radar that is to replace the Captor system, providing, in a spiralled programme, passive, active and cyberwarfare RF capabilities. Selex ES has developed a self-contained expendable Digital Radio Frequency Memory (DRFM) jammer for fast jet aircraft known as BriteCloud, which is being studied for integration on the Typhoon.
Eurojet is attempting to find funding to test thrust vectoring control (TVC) nozzles on a flight demonstrator. In April 2014, BAE announced new wind tunnel tests to assess the aerodynamic characteristics of conformal fuel tanks (CFTs). The CFTs, which can be fitted to any Tranche 3 aircraft, could carry 1,500 litres each, increasing the Typhoon's combat radius by 25% to 1,500 nautical miles (2,778 km).
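The quoted figures imply a baseline combat radius of about 1,200 nautical miles:

1{,}500\ \text{nmi} \div 1.25 = 1{,}200\ \text{nmi}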
BAE has completed development of its Striker II helmet-mounted display, which builds on the capabilities of the original Striker helmet-mounted display already in service on the Typhoon. Striker II features a new display with more colour and can transition between day and night seamlessly, eliminating the need for separate night-vision goggles. In addition, the helmet can monitor the pilot's exact head position so that it always knows exactly what information to display. The system is compatible with ANR, a 3-D audio threats system and 3-D communications; these are available as customer options. In 2015, BAE was awarded a £1.7 million contract to study the feasibility of a common weapon launcher capable of carrying multiple weapons and weapon types on a single pylon.
Also in 2015, Airbus flight tested a package of aerodynamic upgrades for the Eurofighter known as the Aerodynamic Modification Kit (AMK), consisting of reshaped (delta) fuselage strakes, extended trailing-edge flaperons and leading-edge root extensions. This increases wing lift by 25%, resulting in an increased turn rate, tighter turning radius, and improved nose-pointing ability at low speed, with angle-of-attack values around 45% greater and roll rates up to 100% higher. Eurofighter's Laurie Hilditch said these improvements should increase subsonic turn rate by 15% and give the Eurofighter the sort of "knife-fight in a phone box" turning capability enjoyed by rivals such as Boeing's F/A-18E/F or the Lockheed Martin F-16, without sacrificing the transonic and supersonic high-energy agility inherent to its delta wing-canard configuration. Eurofighter project pilot for Germany Raffaele Beltrame said: "The handling qualities appeared to be markedly improved, providing more manoeuvrability, agility and precision while performing tasks representative of in-service operations. And it is extremely interesting to consider the potential benefits in the air-to-surface configuration thanks to the increased variety and flexibility of stores that can be carried."
In April 2016, Finmeccanica (now Leonardo) demonstrated the air-to-ground capabilities of its Mode 5 reverse identification friend or foe (IFF) system, which showed that it is possible to give pilots the ability to distinguish between friendly and enemy platforms in a simple fashion using the aircraft's existing transponder. Finmeccanica said NATO is considering the system as a short- to mid-term solution for air-to-surface identification of friendly forces, helping avoid collateral damage due to friendly fire during close air support operations.
UK Project Centurion upgrades
With the confirmed retirement date of March 2019 for RAF Tornado GR4s, in 2014 the UK commenced an upgrade programme that would eventually become the £425 million Project Centurion to ensure the Typhoon was able to assume the precision strike duties of the ageing Tornado. The upgrade was delivered under different phases:
Phase 0 – initial multirole upgrades.
Phase 1/P2EA – MBDA Meteor integration and initial Storm Shadow Capability.
Phase 2/P3EA – Full Storm Shadow capability as well as Brimstone integration.
Phase 1 standard aircraft were used operationally for the first time as part of Operation Shader over Iraq and Syria in 2018. On 18 December 2018 the RAF approved release to service for the full Project Centurion package.
Proposed upgrade for German Tornado replacement
On 24 April 2018, Airbus announced its offer to replace Germany's Panavia Tornado fleet, proposing the integration of new weaponry, performance enhancements and additional capabilities for the Eurofighter Typhoon, similar to the work being performed as part of the UK's Project Centurion. Integration of air-to-ground weapons has already begun on German Typhoons as part of Project Odin. Among the weapons being offered are the Kongsberg Joint Strike Missile for the anti-ship mission and the Taurus cruise missile.
The consortium is keen to make use of the engine's growth potential to boost thrust by around 15% as well as improve fuel efficiency and range. This would be combined with a new, enlarged 1,800-litre fuel tank design; the aircraft is currently fitted with 1,000-litre fuel tanks. Other modifications would include the Aerodynamic Modification Kit, flight tested in 2015, to improve manoeuvrability and handling, particularly with heavy weapon loads. Eurofighter says it is comfortable with delivering integration of the U.S. B61 nuclear weapon onto the aircraft, a process that requires U.S. certification. Eurofighter chief executive Volker Paltzo said he was confident the U.S. government would not use the certification requirements of the weapon as "leverage" to force Germany towards a U.S. platform. A next-generation electronic warfare suite has been planned by the four-country consortium.
In November 2019, Airbus proposed a SEAD capability for the aircraft, a role which is currently performed by the Tornado ECR in German service. The Typhoon ECR would be configured with two Escort Jammer pods under the wings and two Emitter Location Systems at the wing tips. Armament configuration would include four MBDA Meteor, two IRIS-T and six SPEAR-EW in addition to three drop tanks.
On 5 November 2020, the German government approved an order for 38 Tranche 4 aircraft with ground-attack capabilities to replace the Tranche 1 units in German service.
The Luftwaffe ordered 15 ECR electronic warfare aircraft for the Luftgestützte Wirkung im Elektromagnetischen Spektrum (luWES, "air-based effect in the electromagnetic spectrum") requirement in March 2022. The 15 ECR EW aircraft will be converted from existing German Typhoons and will be equipped with AGM-88E AARGM anti-radiation missiles. The aircraft are expected to be NATO-certified by 2030.
The Tranche 4PE is a further development package aimed at integrating improved weapons (the Meteor, Taurus and AMRAAM missiles, and GBU/JDAM guided bombs).
Replacement
Germany is to replace the Eurofighter with the New Generation Fighter (NGF), co-developed with France and Spain. The Global Combat Air Programme, part of the UK's wider Future Combat Air System effort, is a "6th generation" fighter programme envisioned as the Typhoon's replacement for the RAF and the Italian Air Force (AM).
Design
Airframe overview
The Typhoon is a highly agile aircraft at both supersonic and low speeds, achieved through an intentionally relaxed-stability design. It has a quadruplex digital fly-by-wire control system providing artificial stability, as manual operation alone could not compensate for the inherent instability. The fly-by-wire system is described as "carefree", and prevents the pilot from exceeding the permitted manoeuvre envelope. Roll control is primarily achieved by use of the ailerons. Pitch control is by operation of the canards and ailerons, because the canards disturb airflow to the inner elevons (flaps). Yaw control is provided by a single large rudder. The engines are fed by a chin double-intake ramp situated below a splitter plate.
The Typhoon features lightweight construction (82% composites consisting of 70% carbon fibre composite materials and 12% glass fibre reinforced composites) with an estimated lifespan of 6,000 flying hours.
Radar signature reduction features
Although it was not designated a stealth fighter, measures were taken to reduce the Typhoon's radar cross section (RCS), especially from the frontal aspect. An example of these measures is that the Typhoon has jet inlets that conceal the front of the engines (a strong radar target) from radar. Many important potential radar targets, such as the wing, canard and fin leading edges, are highly swept, so they reflect radar energy well away from the front. Some external weapons are mounted semi-recessed into the aircraft, partially shielding these missiles from incoming radar waves. In addition, radar-absorbent materials (RAM), developed primarily by EADS/DASA, coat many of the most significant reflectors, such as the wing leading edges, the intake edges and interior, the rudder surrounds, and strakes.
From the early 1990s, the manufacturers carried out tests on the early Eurofighter prototypes to optimise the aircraft's low-observability characteristics. Testing at Warton on the DA4 prototype measured the RCS of the aircraft and investigated the effects of a variety of RAM coatings and composites. Another measure to reduce the likelihood of detection is the use of passive sensors (the PIRATE IRST), which minimises the radiation of telltale electronic emissions. While canards generally have poor stealth characteristics from the side because of the corner formed with the fuselage, the flight control system is designed to maintain the elevon trim and the canards at an angle at which they have the smallest RCS.
Cockpit
The Typhoon features a glass cockpit without any conventional instruments. It incorporates three full colour multi-function head-down displays (MHDDs) (the formats on which are manipulated by means of softkeys, XY cursor, and voice (Direct Voice Input or DVI) command), a wide angle head-up display (HUD) with forward-looking infrared (FLIR), a voice and hands-on throttle and stick (Voice+HOTAS), a Helmet Mounted Symbology System (HMSS), a MIDS, a manual data-entry facility (MDEF) located on the left glareshield and a fully integrated aircraft warning system with a dedicated warnings panel (DWP). Reversionary flying instruments, lit by LEDs, are located under a hinged right glareshield. Access to the cockpit is normally via either a telescopic integral ladder or an external version. The integral ladder is stowed in the port side of the fuselage, below the cockpit.
User needs were given a high priority in the cockpit's design; both layout and functionality were developed with feedback and assessments from military pilots and a specialist testing facility. The aircraft is controlled by means of a centre stick (or control stick) and left-hand throttles, designed on a hands-on throttle and stick (HOTAS) principle to lower pilot workload. Emergency escape is provided by a Martin-Baker Mk.16A ejection seat, with the canopy being jettisoned by two rocket motors. The HMSS was delayed by years but should have been operational by late 2011. Standard g-force protection is provided by full-cover anti-g trousers (FCAGTs), a specially developed g-suit providing sustained protection up to nine g. German and Austrian Air Force pilots instead wear a hydrostatic g-suit called Libelle (dragonfly) Multi G Plus, which also provides protection to the arms, theoretically giving more complete g tolerance.
In the event of pilot disorientation, the Flight Control System allows for rapid and automatic recovery by the simple press of a button. On selection of this cockpit control the FCS takes full control of the engines and flying controls, and automatically stabilises the aircraft in a wings level, gentle climbing attitude at 300 knots, until the pilot is ready to retake control. The aircraft also has an Automatic Low-Speed Recovery system (ALSR) which prevents it from departing from controlled flight at very low speeds and high angle of attack. The FCS system is able to detect a developing low-speed situation and to raise an audible and visual low-speed cockpit warning. This gives the pilot sufficient time to react and to recover the aircraft manually. If the pilot does not react, however, or if the warning is ignored, the ALSR takes control of the aircraft, selects maximum dry power for the engines and returns the aircraft to a safe flight condition. Depending on the attitude, the FCS employs an ALSR "push", "pull" or "knife-over" manoeuvre.
The Typhoon Direct Voice Input (DVI) system uses a speech recognition module (SRM), developed by Smiths Aerospace and Computing Devices. It was the first production DVI system used in a military cockpit. DVI provides the pilot with an additional natural mode of command and control over approximately 26 non-critical cockpit functions, to reduce pilot workload, improve aircraft safety, and expand mission capabilities. An important step in the development of the DVI occurred in 1987 when Texas Instruments completed the TMS-320-C30, a digital signal processor, enabling reductions in the size and complexity of the system required. The project was given the go-ahead in July 1997, with development carried out on the Eurofighter Active Cockpit Simulator at Warton. The DVI system is speaker-dependent, requiring each pilot to create a template. It is not used for safety-critical or weapon-critical tasks, such as weapon release or lowering of the undercarriage. Voice commands are confirmed by visual or aural feedback, and serve to reduce pilot workload. All functions are also achievable by means of a conventional button press or soft-key selections; functions include display management, communications, and management of various systems. EADS Defence and Security in Spain has worked on a new non-template DVI module to allow for continuous speech recognition, speaker voice recognition with common databases (e.g. British English, American English, etc.) and other improvements.
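The command-handling rules described above (speaker-dependent templates, a whitelist of non-critical functions, and mandatory confirmation feedback) can be illustrated with a minimal sketch; the phrases, names and data structures here are hypothetical and do not represent the actual SRM software:

from dataclasses import dataclass

# Hypothetical command sets; the production DVI covers roughly 26
# non-critical cockpit functions.
NON_CRITICAL = {"select radar page", "change radio channel", "zoom map"}
SAFETY_CRITICAL = {"weapon release", "lower undercarriage"}

@dataclass
class PilotTemplate:
    """Speaker-dependent template: the system must be trained per pilot."""
    pilot_name: str
    trained_phrases: set

def handle_utterance(template: PilotTemplate, utterance: str) -> str:
    """Route a recognised utterance, enforcing the non-critical-only rule."""
    if utterance not in template.trained_phrases:
        # No match against this pilot's trained template.
        return "not recognised"
    if utterance in SAFETY_CRITICAL:
        # Weapon- and safety-critical tasks are never voice-controlled;
        # they require a conventional button press or soft-key selection.
        return "rejected: manual control required"
    # Accepted commands are confirmed by visual or aural feedback.
    return f"executed: {utterance} (feedback given to pilot)"

# Example: a trained pilot issues one permitted and one forbidden command.
template = PilotTemplate("pilot A", NON_CRITICAL | SAFETY_CRITICAL)
print(handle_utterance(template, "zoom map"))        # executed
print(handle_utterance(template, "weapon release"))  # rejected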
BAE Systems has been awarded a contract to develop new touch-screen displays for the cockpit and to enhance the data-processing capability of the Eurofighter Typhoon.
Avionics
Navigation is via both GPS and an inertial navigation system. The Typhoon can use Instrument Landing System (ILS) for landing in poor weather. The aircraft also features an enhanced ground proximity warning system (GPWS) based on the TERPROM Terrain Referenced Navigation (TRN) system used by the Panavia Tornado. MIDS provides a Link 16 data link.
The aircraft employs a sophisticated and highly integrated Defensive Aids Sub-System named Praetorian (formerly EuroDASS). Praetorian monitors and responds automatically to air and surface threats, provides an all-round prioritised assessment, and can respond to multiple threats simultaneously. Threat detection methods include a radar warning receiver (RWR), a missile warning system (MWS) and a laser warning receiver (LWR, only on UK Typhoons). Protective countermeasures consist of chaff, flares, an electronic countermeasures (ECM) suite and a towed radar decoy (TRD). The ESM-ECM and MWS consist of 16 antenna array assemblies and 10 radomes.
Traditionally each sensor in an aircraft is treated as a discrete source of information; however this can result in conflicting data and limits the scope for the automation of systems, hence increasing pilot workload. To overcome this, the Typhoon employs sensor fusion techniques. In the Typhoon, fusion of all data sources is achieved through the Attack and Identification System, or AIS. This combines data from the major on-board sensors along with any information obtained from off-board platforms such as AWACS and MIDS. Additionally the AIS integrates all the other major offensive and defensive systems (e.g. DASS & communications). The AIS physically comprises two essentially separate units: the Attack Computer (AC) and the Navigation Computer (NC).
By having a single source of information, pilot workload should be reduced by removing the possibility of conflicting data and the need for cross-checking, improving situational awareness and increasing systems automation. In practice the AIS should allow the Eurofighter to identify targets at distances in excess of and acquire and auto-prioritise them at over . In addition the AIS offers the ability to automatically control emissions from the aircraft, so-called EMCON (from EMissions CONtrol). This should aid in limiting the detectability of the Typhoon by opposing aircraft, further reducing pilot workload.
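As an illustration of the fusion-and-prioritisation idea only (the data structures, sensor names and threat weighting below are hypothetical, not the actual AIS design):

from dataclasses import dataclass

@dataclass
class SensorReport:
    track_id: int            # shared identity after correlating sensor tracks
    source: str              # e.g. "CAPTOR", "PIRATE", or off-board via MIDS
    range_km: float
    closing_speed_ms: float

def fuse_and_prioritise(reports: list[SensorReport]) -> list[int]:
    """Merge per-track reports into one fused picture and rank threats."""
    tracks: dict[int, list[SensorReport]] = {}
    for report in reports:
        tracks.setdefault(report.track_id, []).append(report)

    def threat_score(rs: list[SensorReport]) -> float:
        # One fused value per track removes conflicting data; here the
        # reports are simply averaged, and faster, closer contacts rank higher.
        mean_range = sum(r.range_km for r in rs) / len(rs)
        mean_speed = sum(r.closing_speed_ms for r in rs) / len(rs)
        return mean_speed / max(mean_range, 1.0)

    return sorted(tracks, key=lambda tid: threat_score(tracks[tid]), reverse=True)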
In 2017 an RAF Eurofighter Typhoon demonstrated interoperability with the F-35B using its Multifunction Advanced Data Link (MADL) in a two-week trial known as Babel Fish III, in the Mojave Desert. This was achieved by translating the MADL messages into Link 16 format, thus allowing an F-35 in stealth mode to communicate directly with the Typhoon.
Radar and sensors
Captor radar
The Euroradar Captor is a mechanically scanned multi-mode pulse Doppler radar designed for the Eurofighter Typhoon. The Eurofighter operates automatic emission controls (EMCON) to reduce the electromagnetic emissions of the current Captor mechanically scanned radar. The Captor-M has three working channels, one intended for classification of jammers and for jamming suppression. A succession of radar software upgrades has enhanced the air-to-air capability of the radar. These upgrades have included the R2P programme (initially UK-only, and known as T2P when 'ported' to Tranche 2 aircraft), which is being followed by R2Q/T2Q. R2P was applied to eight German Typhoons deployed on Red Flag Alaska in 2012.
Captor-E AESA variant
The Captor-E is an AESA derivative of the original Captor radar, also known as CAESAR (from Captor Active Electronically Scanned Array Radar), being developed by the Euroradar consortium led by Selex ES.
Synthetic aperture radar is expected to be fielded as part of the AESA radar upgrade, giving the Eurofighter an all-weather ground-attack capability. The conversion to AESA will also give the Eurofighter a low-probability-of-intercept radar with improved jam resistance. To meet an RAF requirement for a wider scan field than a fixed AESA, whose coverage is limited to 120° in azimuth and elevation, the design includes an innovative gimbal mounting. A senior EADS radar expert has claimed that Captor-E is capable of detecting an F-35 from roughly away.
The first flight of a Eurofighter equipped with a "mass model" of the Captor-E occurred in late February 2014, with flight tests of the actual radar beginning in July of that year. On 19 November 2014 the contract to upgrade to the Captor-E was signed at the offices of Euroradar lead Selex ES in Edinburgh, in a deal worth €1bn. Kuwait became the launch customer for the Captor-E active electronically scanned array radar in April 2016. Germany has announced its intention to integrate the AESA Captor-E into its Typhoons, beginning in 2022.
In January 2024, it was announced that the first European Common Radar System (ECRS) Mk2 had been fitted to an RAF-operated test and evaluation Typhoon, ZK355 (BS116), at BAE Systems' Warton site. Leonardo and DE&S announced that the initial flight was scheduled to take place later in 2024.
The AESA radar program for the Eurofighter is now split into three European Common Radar System (ECRS) variants:
ECRS Mk0: also called Radar One Plus, this is the baseline Captor-E model which was developed by Leonardo. Hardware development is complete and it is fitted to aircraft delivered to Kuwait and Qatar.
ECRS Mk1: an upgrade of the Mk0 being developed by Hensoldt/Indra, for Germany and Spain. It will be retrofitted to their Tranche 2 and 3 aircraft, and also fitted to both countries' new Tranche 4 models.
ECRS Mk2: also known as Radar Two, a different version developed from the ARTS and Bright Adder demonstrators, and from the Gripen E's ES-05 Raven radar. With electronic warfare/attack capabilities, it is being developed by Leonardo for the RAF, and integrated by BAE Systems. It will initially be applied to Tranche 3 aircraft, but the RAF may upgrade Tranche 2 later. Italy has joined development of the ECRS Mk2, which was part of the Typhoon offer to Finland for its HX Fighter Program.
IRST
The Passive Infra-Red Airborne Track Equipment (PIRATE) system is an infrared search and track (IRST) system mounted on the port side of the fuselage, forward of the windscreen. Selex ES is the lead contractor which, along with Thales Optronics (system technical authority) and Tecnobit of Spain, make up the EUROFIRST consortium responsible for the system's design and development. Eurofighters starting with Tranche 1 Block 5 have the PIRATE. The first Eurofighter Typhoon with PIRATE-IRST was delivered to the Italian Aeronautica Militare in August 2007. More advanced targeting capabilities can be provided with the addition of a targeting pod such as the Litening pod.
When used with the radar in an air-to-air role, it functions as an infrared search and track system, providing passive target detection and tracking. The system can detect variations in temperature at a long range. It also provides a navigation and landing aid. PIRATE is linked to the pilot's helmet-mounted display. It allows the detection of both hot exhaust plumes of jet engines and surface heating caused by friction; processing techniques further enhance the output, giving a near-high resolution image of targets. The output can be directed to any of the Multi-function Head Down Displays, and can also be overlaid on both the Helmet Mounted Sight and the Head Up Display.
Up to 200 targets can be simultaneously tracked using one of several different modes: Multiple Target Track (MTT), Single Target Track (STT), Single Target Track Ident (STTI), Sector Acquisition and Slaved Acquisition. In MTT mode the system scans a designated volume of space looking for potential targets. In STT mode PIRATE provides tracking of a single designated target. An extension of this mode, STT Ident, allows for visual identification of the target, the resolution being superior to Captor's. In Sector Acquisition mode PIRATE scans a volume of space under the direction of another onboard sensor such as Captor. In Slaved Acquisition, off-board sensors are used, with PIRATE being commanded by data obtained from an AWACS or other source. When a target is found in either of these modes, PIRATE automatically designates it and switches to STT.
Once a target has been tracked and identified, PIRATE can be used to cue an appropriately equipped short range missile, i.e. a missile with a high off-boresight tracking capability such as ASRAAM. Additionally the data can be used to augment that of Captor or off-board sensor information via the AIS. This should enable the Typhoon to overcome severe ECM environments and still engage its targets. PIRATE also has a passive ranging capability although the system remains limited when providing passive firing solutions, as it does not have a laser rangefinder.
Engines
The Eurofighter Typhoon is fitted with two Eurojet EJ200 engines, each capable of providing up to 60 kN (13,500 lbf) of dry thrust and more than 90 kN (20,230 lbf) with afterburner. Using the "war" setting, dry thrust increases by 15% to 69 kN per engine and afterburning thrust by 5% to 95 kN per engine, with up to 102 kN of thrust available for a few seconds without damaging the engine. The EJ200 engine combines the leading technologies of each of the four European partner companies, using advanced digital control and health monitoring, wide-chord aerofoils and single-crystal turbine blades, and a convergent/divergent exhaust nozzle to give a high thrust-to-weight ratio, multi-mission capability, supercruise performance, low fuel consumption, low cost of ownership, modular construction and growth potential.
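The "war" setting figures are consistent with the baseline ratings:

60\ \text{kN} \times 1.15 = 69\ \text{kN (dry)}, \qquad 90\ \text{kN} \times 1.05 \approx 95\ \text{kN (reheat)}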
The Typhoon is capable of supersonic cruise without using afterburners (referred to as supercruise). Air Forces Monthly gives a maximum supercruise speed of Mach 1.1 for the RAF FGR4 multirole version; however, in a Singaporean evaluation, a Typhoon managed to supercruise at Mach 1.21 on a hot day with a combat load. Eurofighter states that the Typhoon can supercruise at Mach 1.5. As with the F-22, the Eurofighter can launch weapons while in supercruise, extending their ranges via this "running start". In 2007, the EJ200 engine had accumulated 50,000 engine flying hours in service with the four nations' air forces (Germany, UK, Spain and Italy).
The EJ200 engine has the potential to be fitted with a thrust vectoring control (TVC) nozzle, which the Eurofighter and Eurojet consortium have been actively developing and testing, primarily for export but also for future upgrades of the fleet. TVC could reduce fuel burn on a typical Typhoon mission by up to 5%, as well as increase available thrust in supercruise by up to 7% and take-off thrust by 2%. Clemens Linden, Eurojet TURBO GmbH CEO, speaking at the 2018 Farnborough International Air Show, said "15 per cent more thrust would allow pilots to operate with a heavily loaded aircraft in the battlespace with the same performance levels as they have today. The technology insertion also provides more persistence – giving aircraft longer range or longer loitering time. To achieve more thrust we would increase the airflow and pressure ratios of the high and low pressure compressors and run higher temperatures in the turbines by using the latest generation single crystal turbine blade materials. And with higher aerodynamic efficiencies we can achieve a lower fuel burn. A third area of improvement would be the engine exhaust nozzle which would be upgraded with the installation of a 2-parametric version allowing independent and optimized adjustment of the throat and exit area at all flight conditions, providing fuel burn advantages. The technologies for the different components are at a Technology readiness level of between 7 and 9. The nozzle has been at ITP in Spain on a test bed for 400 hours."
Performance
The Typhoon's combat performance, compared to the F-22 Raptor and F-35 Lightning II fighters and the French Dassault Rafale, has been the subject of much discussion. In March 2005, United States Air Force Chief of Staff General John P. Jumper, then the only person to have flown both the Eurofighter Typhoon and the Raptor, offered a comparison of the two aircraft.
In the 2005 Singapore evaluation, the Typhoon won all three combat tests, including one in which a single Typhoon defeated three RSAF F-16s, and reliably completed all planned flight tests. In July 2009, the former Chief of the Air Staff of the RAF, Air Chief Marshal Sir Glenn Torpy, said that "The Eurofighter Typhoon is an excellent aircraft. It will be the backbone of the Royal Air Force along with the JSF."
In July 2007, Indian Air Force Su-30MKI fighters participated in the Indra-Dhanush exercise with the RAF's Typhoons. This was the first time the two fighters had taken part in such an exercise. The IAF did not allow its pilots to use the MKI's radar during the exercise to protect the highly classified Russian N011M Bars. The IAF pilots were impressed by the Typhoon's agility. In 2015, Indian Air Force Su-30MKIs again participated in an Indra-Dhanush exercise with RAF Typhoons.
Armament
Air to ground
The Typhoon is a multi-role fighter with maturing air-to-ground capabilities. The initial absence of air-to-ground capability is believed to have been a factor in the type's rejection from Singapore's fighter competition in 2005. At the time it was claimed that Singapore was concerned about the delivery timescale and the ability of the Eurofighter partner nations to fund the required capability packages. Tranche 1 aircraft could drop laser-guided bombs in conjunction with third-party designators, but the anticipated deployment of the Typhoon to Afghanistan meant that the UK required self-contained bombing capabilities before the other partners. In 2006 the UK embarked on the £73m Change Proposal 193 (CP193) to give Tranche 1 Block 5 aircraft an "austere" air-to-surface capability using the GBU-16 Paveway II bomb and the Rafael/Ultra Electronics Litening III laser designator pod. Aircraft with this upgrade were designated Typhoon FGR4 by the RAF.
Similar capability was added to Tranche 2 aircraft on the main development pathway as part of the Phase 1 Enhancements. P1Ea (SRP10) entered service in the first quarter of 2013 and added the use of Paveway IV, EGBU-16 and the cannon against surface targets. P1Eb (SRP12) added full integration of GPS-guided bombs such as GBU-10 Paveway II, GBU-16 Paveway II and Paveway IV, and a new real-time operating system that allows multiple targets to be attacked in a single run. This new system will form the basis for future weapons integration by individual countries under the Phase 2 Enhancements. Flight trials of the Storm Shadow and KEPD 350 (Taurus) cruise missiles, together with the Meteor beyond-visual-range air-to-air missile, had been successfully completed by January 2016. The Storm Shadow and Meteor firings are part of the Phase 2 Enhancement (P2E) programme, which introduced a range of new and improved long-range attack capabilities to the Typhoon. In addition to Meteor and Storm Shadow, the first live firing of MBDA's Brimstone air-to-surface missile, part of the Phase 3 Enhancements (P3E) programme, was successfully completed in July 2017.
German aircraft can carry four GBU-48 1,000 lb bombs.
An anti-ship capability has been studied but has not yet been contracted. Weapon options for this role could include Boeing Harpoon, MBDA Marte, "Sea Brimstone", and RBS-15.
Air to air
The Typhoon also carries a specially developed variant of the Mauser BK-27 27 mm cannon, originally developed for the Panavia Tornado. This is a single-barrel, electrically fired, gas-operated revolver cannon with a new linkless feed system; it is located in the starboard wing root and is capable of firing up to 1,700 rounds per minute. In 1999 there was a proposal, on cost grounds, to fit the gun only to the first 53 Batch 1 UK aircraft and not to use it operationally, but this decision was reversed in 2006. The aircraft carries 150 rounds.
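For context (a derived figure, not one from the source), the 150-round load allows only about five seconds of continuous fire at the quoted rate:

150\ \text{rounds} \div (1{,}700 / 60\ \text{rounds per second}) \approx 5.3\ \text{s}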
In his 2022 book Typhoon, former RAF pilot Mike Sutton reported that his 27 mm cannon had jammed during a strafing run in Syria against ISIS targets, while supporting Allied ground units. According to his book, the Typhoon was originally intended to be built without an internal gun, like the F-4 Phantom and the Harrier jump jet, and the decision to install an internal gun had led to "manufacturing issues". Sutton claimed that, during his strafing run, the gun jammed after 26 rounds, with the HUD showing a "GUN FAIL" warning legend. During the debrief it transpired that the problem was well known to both pilots and ground crews.
In addition to its air-to-ground armament, the Typhoon can carry a mixture of air-to-air weaponry to fulfil its role as an air-superiority fighter. This includes the ASRAAM, IRIS-T and AIM-9 Sidewinder heat-seeking missiles, and the AIM-120 AMRAAM and MBDA Meteor beyond-visual-range radar-guided missiles. Under Tranche 2, Block 15 EOC (Enhanced Operational Capability) 2, the Meteor was integrated into the Typhoon's arsenal. Similar capability was achieved in the RAF under Project Centurion, with 107 Tranche 2 and 3 Typhoons modified to use the Meteor along with the Brimstone and Storm Shadow air-to-ground missiles.
Operational history
Austrian Air Force (Luftstreitkräfte)
In 2002, Austria selected the Typhoon as its new air defence aircraft, it having beaten the F-16 and the Saab Gripen in competition. The purchase of 18 Typhoons was agreed on 1 July 2003; however, this was reduced to 15 in June 2007. The first aircraft (7L-WA) was delivered on 12 July 2007 to Zeltweg Air Base and formally entered service with the Austrian Air Force. A 2008 report by the Austrian Court of Audit calculated that, instead of getting 18 Tranche 2 jets at a price of €109 million each as stipulated by the original contract, the revised deal agreed to by Minister Norbert Darabos meant that Austria was paying an increased unit price of €114 million for 15 partially used Tranche 1 jets. In July 2008, the Luftstreitkräfte assigned the Eurofighter to Quick Reaction Alert (QRA) duties; by the end of the year they had been scrambled 73 times.
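For comparison of the two deals, the quoted quantities and unit prices imply the following headline totals (a hedged arithmetic sketch derived from the figures above, not totals stated by the Court of Audit):

```python
# Derived contract totals from the unit prices and quantities quoted above.
original_total = 18 * 109e6  # 18 Tranche 2 jets at €109 million each
revised_total = 15 * 114e6   # 15 partially used Tranche 1 jets at €114 million each

print(f"Original contract: €{original_total / 1e9:.3f} billion")  # €1.962 billion
print(f"Revised deal:      €{revised_total / 1e9:.3f} billion")   # €1.710 billion
# The revised deal was cheaper overall but dearer per (older) aircraft.
```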
Austrian prosecutors are investigating allegations that up to €100 million was made available to lobbyists to influence the original purchase decision in favour of the Eurofighter. By October 2013, all Typhoons in service with Austria had been upgraded to the latest Tranche 1 standard. In 2014, due to defence budget restrictions, there were only 12 pilots available to fly the 15 aircraft in Austria's air force. In February 2017, Austrian defence minister Hans Peter Doskozil accused Airbus of fraudulent intent following a probe that allegedly unveiled corruption linked to the order of Typhoon jets.
In July 2017, the Austrian Defence Ministry announced that it would be replacing all its Typhoon aircraft by 2020. The ministry said continued use of its Typhoons over their 30-year life span would cost about €5 billion, with the bulk being for maintenance. By comparison, it estimated that buying and operating a new fleet of 15 single-seat and three twin-seat fighters would save €2 billion over that period. Austria plans to explore a government-to-government sale or lease agreement to avoid a lengthy and costly tender process with a manufacturer. Possible replacements include the Gripen and the F-16.
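Taken together, the ministry's figures imply a rough cost for the replacement option; the sketch below derives it (this derived number is not stated in the source):

```python
typhoon_30yr_cost = 5e9  # ~€5 billion to keep the Typhoons for their 30-year life span
claimed_saving = 2e9     # ~€2 billion saved by buying and operating a new fleet

implied_new_fleet_cost = typhoon_30yr_cost - claimed_saving
print(f"Implied cost of the new 18-aircraft fleet: €{implied_new_fleet_cost / 1e9:.0f} billion")
```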
On 20 July 2020, a letter written by Indonesia's defence minister, Prabowo Subianto, was published by Indonesian news outlets expressing interest in acquiring Austria's entire fleet of Typhoon jets.
German Air Force (Luftwaffe)
On 4 August 2003, the German Air Force accepted its first series production Eurofighter (30+03) starting the replacement process of the Mikoyan MiG-29s inherited from the East German Air Force. The first Luftwaffe Wing to accept the Eurofighter was Jagdgeschwader 73 "Steinhoff" on 30 April 2004 at Rostock–Laage Airport. The second Wing was Jagdgeschwader 74 (JG74) on 25 July 2006, with four Eurofighters arriving at Neuburg Air Base, beginning the replacement of JG74's McDonnell Douglas F-4F Phantom IIs.
The Luftwaffe assigned their Eurofighters to QRA on 3 June 2008, taking over from the F-4F Phantom II.
On 28 October 2014, while deployed to Ämari Air Base in Estonia as part of the NATO Baltic Air Policing mission, German Eurofighters scrambled and intercepted seven Russian Air Force aircraft over the Baltic Sea.
The Luftwaffe once again provided Baltic Air Policing at Ämari Air Base between 31 August 2020 and April 2021, having taken over from Dassault Mirage 2000-5Fs of the French Air and Space Force.
On 5 June 2024, the German chancellor announced plans to purchase another twenty Eurofighters.
German Eurofighters took part in Exercise Tarang Shakti held by the Indian Air Force from 6 August 2024.
Italian Air Force (Aeronautica Militare)
On 16 December 2005, the F-2000 Typhoon reached initial operational capability (IOC) with the Italian Air Force (Aeronautica Militare). Its F-2000 Typhoons were put into service as air defence fighters at the Grosseto Air Base, and immediately assigned to QRA at the same base.
On 17 July 2009, Italian Air Force F-2000A Typhoons were deployed to protect Albania's airspace. On 29 March 2011, Italian Air Force Eurofighter Typhoons began flying combat air patrol missions in support of NATO's Operation Unified Protector in Libya.
Between January and August 2015, four Aeronautica Militare F-2000A Typhoons (from 36º and 37º Stormo) were deployed to Šiauliai Air Base in northern Lithuania as part of the Baltic Air Policing mission.
Kuwait Air Force
On 11 September 2015, Eurofighter confirmed that an agreement had been reached to supply Kuwait with 28 aircraft. On 1 March 2016, the Kuwaiti National Assembly approved the procurement of 22 single-seat and six twin-seat Typhoons. On 5 April 2016, Kuwait signed a contract with Leonardo valued at €7.957 billion ($9.062 billion) for the supply of the 28 aircraft, all to Tranche 3 standard. The Kuwaiti aircraft will be the first Typhoons to receive the Captor-E AESA radar, with two instrumented production aircraft from the UK and Germany undergoing ground-based integration trials. The Typhoons will be fitted with Leonardo's Praetorian defensive aids suite and PIRATE infrared search and track system. The contract involves the production of aircraft in Italy and covers logistics, operational support and the training of flight crews and ground personnel. It also encompasses infrastructure work at Ali Al Salem Air Base, where the Typhoons will be based. Aircraft deliveries will begin in 2020.
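Dividing the contract value by the fleet size gives a rough per-aircraft figure; since the contract also covers logistics, support, training and infrastructure, this is an upper bound on the flyaway price (a derivation from the figures above, not a figure from the source):

```python
contract_value_eur = 7.957e9  # value of the €7.957 billion contract with Leonardo
aircraft = 28

print(f"~€{contract_value_eur / aircraft / 1e6:.0f} million per aircraft")  # ~€284 million
```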
Qatar Emiri Air Force
From January 2011 the Qatar Emiri Air Force (QEAF) evaluated the Typhoon, alongside the Boeing F/A-18E/F Super Hornet, the McDonnell Douglas F-15E Strike Eagle, the Dassault Rafale, and the Lockheed Martin F-35 Lightning II, to replace its then inventory of Dassault Mirage 2000-5s. On 30 April 2015 Qatar announced that it would order 24 Rafales.
In December 2017 a deal for Qatar to buy 24 jets and a support and training package from BAE was announced, scheduled to begin in 2022. In September 2018, Qatar made the first payment for the procurement of 24 Eurofighter Typhoons and nine BAE Systems Hawk aircraft to BAE.
Royal Air Force (UK)
The UK's first Typhoon Development Aircraft (DA-2), ZH588, made its maiden flight on 6 April 1994 from Warton. On 1 September 2002, No. XVII (Reserve) Squadron was reformed at Warton as the Typhoon Operational Evaluation Unit (TOEU), receiving its first aircraft on 18 December 2003. The first RAF production aircraft to take to the air was ZJ800 (BT001) on 14 February 2003, completing a 21-minute flight. The next Typhoon squadron to be formed was No. 29 (R) Squadron, which formed as the Typhoon Operational Conversion Unit (OCU). The first operational RAF Typhoon squadron to be formed was No. 3 (Fighter) Squadron on 31 March 2006, when it moved to RAF Coningsby.
No. 3 (F) Squadron Typhoon F2s took over QRA responsibilities from the Panavia Tornado F3 on 29 June 2007, initially alternating with the Tornado F3 every month. On 9 August 2007, the UK's MoD reported that No. XI (F) Squadron of the RAF, which had stood up as a Typhoon squadron on 29 March 2007, had taken delivery of its first two multi-role Typhoons. Two of No. XI (F) Squadron's Typhoons were sent to intercept a Russian Tupolev Tu-95 approaching British airspace on 17 August 2007. Having been projected to be ready to deploy for operations by mid-2008, the RAF's Typhoons were declared combat ready in the air-to-ground role on 1 July 2008.
In late 2009, four RAF Typhoons were deployed to RAF Mount Pleasant, replacing the Tornado F3s of No. 1435 Flight defending the Falkland Islands. No. 6 Squadron stood up at RAF Leuchars on 6 September 2010, making Leuchars the second RAF base to operate the Typhoon.
On 20 March 2011 ten Typhoons from RAF Coningsby and RAF Leuchars arrived at the Gioia del Colle airbase in southern Italy to enforce a no-fly zone in Libya alongside Panavia Tornado GR4s. On 21 March, RAF Typhoons flew their first-ever combat mission while patrolling the no-fly zone. On 29 March, it was revealed that the RAF was having to divert personnel from Typhoon training to meet the shortfall in pilots available to fly the required number of sorties over Libya. On 12 April 2011, an RAF Typhoon and a Tornado GR4 dropped precision-guided bombs on ground vehicles operated by Gaddafi forces. The RAF said that each aircraft dropped one GBU-16 Paveway II 454 kg (1,000 lb) laser-guided bomb which struck "very successfully and very accurately [and this] represented a significant milestone in the delivery of multi-role Typhoon". Target designation was provided by the Tornados with their Litening III targeting pods due to the lack of Typhoon pilots trained in air-to-ground missions.
The National Audit Office observed in 2011 that the distribution of the Eurofighter's parts supply and repairs over several countries has led to parts shortages, long timescales for repairs, and the cannibalisation of some aircraft to keep others flying. The UK's then Defence Secretary Liam Fox admitted on 14 April 2011 that Britain's Eurofighter Typhoon jets were grounded in 2010 due to shortage of spare parts. The RAF "cannibalised" aircraft for spare parts in a bid to keep the maximum number of Typhoons operational on any given day. The MoD warned that the problems were likely to continue until 2015.
On 15 September 2012, No. 1 (F) Squadron stood up at RAF Leuchars, joining No. 6 Squadron as the second Typhoon unit to operate in Scotland. On 22 April 2013, No. 41 (R) Test and Evaluation Squadron (TES) began operating the Typhoon from RAF Coningsby.
By July 2014, a dozen RAF Tranche 2 Typhoons had been upgraded with Phase 1 Enhancement (P1E) capability to enable them to use the Paveway IV guided bomb; the Tranche 1 version had used the GBU-12 Paveway II in combat over Libya, but the Paveway IV can be set to explode above or beneath a target and to hit at a set angle.
No. II (AC) Squadron became the fifth RAF Typhoon squadron on 12 January 2015 at RAF Lossiemouth. In July 2015, it was reported that Typhoons from No. II (AC) Squadron were training with Type 45 destroyers in an Air-Maritime Integration (AMI) role, a role the service conceded it had recently neglected following the retirement of the Nimrod maritime patrol aircraft. In the 2015 Strategic Defence and Security Review (SDSR), the UK decided to retain some of the Tranche 1 aircraft to increase the number of front-line squadrons from five to seven and to extend the out-of-service date from 2030 to 2040, as well as to implement the Captor-E AESA radar in later tranches. In 2015, Typhoons were deployed to Malta as security for the Commonwealth Heads of Government Meeting.
On 3 December 2015, six Typhoon FGR4s deployed to RAF Akrotiri to support operations against ISIL. The following evening the Typhoons, accompanied by Tornados, attacked targets in Syria.
In October 2016, four Typhoon FGR4s from No. II (AC) Squadron, supported by an Airbus Voyager KC3 aerial tanker and a Boeing C-17 Globemaster III, deployed to Misawa Air Base in Japan for the first bilateral exercises with non-US forces hosted by the JASDF.
On 14 December 2017, it was announced that No. 12 (B) Squadron would stand up as a joint RAF/Qatari Air Force squadron, with Qatari crews temporarily operating Typhoons to prepare for their own Typhoon deliveries in 2022. On 29 January 2018, the RAF announced that 16 twin-seat Typhoons would undergo the Reduce to Produce (RTP) process in an effort to save £800 million, with each airframe yielding £50 million of spare parts. This move also reflected the switch from two-seat trainers to single-seat pilot training and greater use of training simulators. In addition, the two-seat airframes were primarily from Tranche 1 and could not be equipped with Tranche 3 and later upgrades such as Captor-E.
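The quoted savings are internally consistent with the per-airframe figure; a one-line check (editorial arithmetic, not from the source):

```python
# 16 twin-seat airframes, each yielding about £50 million of spare parts.
print(f"£{16 * 50e6 / 1e6:.0f} million")  # -> £800 million, matching the stated saving
```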
On 1 April 2019, No. IX (B) Squadron officially converted from the Tornado GR4 to the Typhoon FGR4, becoming an aggressor and air defence squadron at Lossiemouth. In April, four Typhoons of No. XI (F) Squadron deployed from RAF Coningsby to Ämari Air Base, Estonia, for a four-month NATO Baltic air policing mission (Operation Azotize). Five Typhoons of No. 6 Squadron participated in the Arctic Challenge Exercise (ACE) in Sweden from 22 May to 4 June. No. 12 Squadron was assigned its first Typhoon FGR4 in July 2019. The 160th, and last, Typhoon (ZK437) was delivered to the RAF on 27 September 2019.
Between November and December 2019, No. 1(F) Squadron deployed to Keflavik Airbase in Iceland as part of NATO's Icelandic Air Policing Mission. During this one-month deployment the aircraft conducted more than 180 practice intercepts and 59 training sorties.
Between April and September 2020, No. 6 Squadron deployed to Šiauliai Air Base, Lithuania, as part of Operation Azotize. While deployed, the squadron participated in Exercise BALTOPS 2020. In July 2020, No. 12 Squadron began operating as a joint RAF-QEAF unit at RAF Coningsby.
On 22 March 2021, the Defence Command Paper announced the retirement of all Tranche 1 Typhoons by 2025, with the remaining fleet being upgraded. Also in 2021, the UK launched the P3Ec package, due for delivery in 2024, with several upgrades including the replacement of the multifunction displays with a Large Area Display (LAD). On 14 December 2021 the RAF carried out its first operational air-to-air engagement with a Typhoon, shooting down a small hostile drone with an ASRAAM near the Al-Tanf coalition base in Syria.
On 7 September 2022 during the joint UK/US SinkEx 'Atlantic Thunder' a 41 Squadron Typhoon successfully hit the ex-USS Boone with Paveway IVs, becoming the first RAF Typhoon to strike a naval target with live ordnance.
Between 18 and 22 September 2023, Typhoons from 41 Squadron took part in the Finnish-led Exercise "Baana 23". During this exercise, the aircraft performed landings and takeoffs from a highway in Tervo, a first for any Eurofighter operator.
On 12 January 2024, at 2:30 am local time, four RAF Typhoons dropped Paveway IV bombs on two military facilities used by the Houthis to launch drone and missile strikes on ships in the Red Sea, as part of the January 2024 airstrikes in Yemen.
On 13 April 2024, RAF Typhoons shot down an unspecified number of unmanned aerial vehicles during the 2024 Iranian strikes in Israel. The Typhoons, based in Cyprus and Romania, were operating in Iraqi and Syrian airspace as part of Operation Shader.
Royal Air Force of Oman
During the 2008 Farnborough Airshow it was announced that Oman was in an "advanced stage" of discussions to order Typhoons as a replacement for its SEPECAT Jaguar aircraft. On 21 December 2012, the Royal Air Force of Oman (RAFO) became the Typhoon's seventh customer when BAE and Oman announced an order for 12 Typhoons to enter service in 2017. The first of the Typhoons (plus Hawk Mk 166) ordered by Oman were "formally presented to the customer" on 15 May 2017. This included a flypast by a RAFO Typhoon.
Royal Saudi Air Force
In August 2006, Saudi Arabia confirmed it had agreed to purchase 72 Typhoons for the Royal Saudi Air Force (RSAF). In December 2006, it was reported in The Guardian that Saudi Arabia had threatened to buy Rafales because of a UK Serious Fraud Office (SFO) investigation into the Al Yamamah defence deals which commenced in the 1980s.
On 14 December 2006, Britain's attorney general, Lord Goldsmith, ordered that the SFO discontinue its investigation into BAE Systems' alleged bribery of senior Saudi officials in the Al-Yamamah contracts, citing "the need to safeguard national and international security". The Times raised the possibility that aircraft from RAF production would be diverted to Saudi Arabia as its early deliveries, with the RAF forced to wait for its full complement of aircraft; this arrangement would mirror the diversion of RAF Tornados to the RSAF. The Times also reported that such an arrangement would make the UK purchase of its Tranche 3 commitments more likely. On 17 September 2007, Saudi Arabia confirmed it had signed a £4.43 billion contract for 72 aircraft. Twenty-four aircraft would be at the Tranche 2 build standard, previously destined for the RAF, with the first delivered in 2008. The remaining 48 aircraft were to be assembled in Saudi Arabia and delivered from 2011; however, following contract renegotiations in 2011, it was agreed that all 72 aircraft would be assembled by BAE Systems in the UK, with the last 24 aircraft being built to Tranche 3 capability.
On 29 September 2008, the United States Department of State approved the Typhoon sale, required because the Eurofighter's Multifunctional Information Distribution System (MIDS) incorporated US technology governed by the International Traffic in Arms Regulations (ITAR).
On 22 October 2008, the first RSAF Typhoon made its maiden flight at Warton. Since 2010, BAE has been training Saudi Arabian personnel at Warton.
By 2011, 24 Tranche 2 Eurofighter Typhoons had been delivered to Saudi Arabia, consisting of 18 single-seat and six two-seat aircraft. After that, BAE and Riyadh entered into discussions over configurations and price of the rest of the 72-plane order. On 19 February 2014, BAE announced that the Saudis had agreed to a price increase. BAE announced that the last of the original 72 Typhoons had been delivered to Saudi Arabia in June 2017.
RSAF Typhoons are playing a central role in the Saudi-led bombing campaign in Yemen. In February 2015, Saudi Typhoons attacked ISIS targets over Syria using Paveway IV bombs for the first time.
On 9 March 2018, a memorandum of intent for the additional 48 Typhoons was signed during Saudi Crown Prince Mohammed bin Salman's visit to the United Kingdom, however the deal has not been completed due to German arms sanctions implemented in November 2018 in response to the assassination of Jamal Khashoggi.
Spanish Air and Space Force
The first Spanish production Eurofighter Tifón to fly was CE.16-01 (ST001) on 17 February 2003, flying from Getafe Air Base. The Spanish Air and Space Force assigned their Typhoons to QRA responsibilities in July 2008.
On 7 August 2018, a Spanish Air and Space Force Typhoon on a training exercise near Otepää in Estonia released an AMRAAM missile by mistake. There were no casualties, but a ten-day search for the missile's remains was unsuccessful, and it is unknown whether the missile self-destructed in the air or landed unexploded, leaving a hazard to the public. The pilot was disciplined for negligence, but received only the minimum penalty in the light of undisclosed mitigating circumstances.
Sales and marketing
Germany
Germany placed an order for an additional 38 Tranche 4 Typhoons on 11 November 2020 under the Quadriga Agreement. The aircraft are due to replace Tranche 1 aircraft currently in service, with the first airframe announced to be in production in November 2022. Deliveries are due to take place from 2025.
In March 2022, the German government announced the decision to purchase Typhoon EK over the EA-18G Growler to replace the ageing Tornado ECR from 2030. On 30 November 2023, the Bundestag budget committee formally announced the plans to convert 15 Typhoons to Electronic Warfare standard.
On 5 June 2024, it was announced that an additional 20 Typhoons would be ordered on top of the 38 already on order.
Italy
On 23 December 2024, an order worth €7.5 billion was placed for 24 aircraft.
Spain
The Spanish Air and Space Force has a requirement for a further 45 Typhoons split across two contracts.
Halcon I, signed in June 2022 for the purchase of 20 aircraft, will see deliveries begin in 2026. The contract is for 16 single-seat and four twin-seat airframes, all at Tranche 4 standard. These aircraft are expected to replace the EF-18 Hornets of Ala 46, based at Gando Air Base in the Canary Islands.
Halcon II followed on 12 September 2023 for the acquisition of a further 25 Typhoons. These aircraft will replace the rest of the EF-18 Hornet fleet, which is due to be decommissioned in 2030. The Spanish government announced that these aircraft would be of Tranche 5 configuration.
Saudi Arabia
In October 2016, it was reported that BAE Systems was in talks with Saudi Arabia over an order for another 48 aircraft. On 9 March 2018, a memorandum of intent for the additional 48 Typhoons was signed during Saudi Crown Prince Mohammed bin Salman's visit to the United Kingdom.
In January 2024, the German government announced that it would no longer block the sale of 48 Typhoons to Saudi Arabia. As of February 2024, there has been no official confirmation that the sale will go ahead as other aircraft have been considered to strengthen the Royal Saudi Air Force's combat fleet.
Egypt
In January 2023, reports surfaced that Egypt would acquire 24 Typhoons as part of a wider $10–12 billion arms package from Italy.
Turkey
Turkey has also expressed interest, amid US hesitance over delivering the latest-block F-16s, and has started negotiations with the UK. Defence Minister Yaşar Güler has underscored Turkey's continued interest in acquiring Typhoons, asserting that they remain a compelling alternative despite recent disagreements with Germany over the potential purchase. "If we can realize the issues we talked about with our friends, maybe we won't need it, but we do now. The Eurofighter is a very good alternative, and we want to buy it," Güler said in a televised interview with private broadcaster NTV on 11 December 2023. Turkey expects the United States to approve a proposed sale of new F-16 jets and modernisation kits in return for Ankara finally green-lighting Sweden's admission into NATO. It was revealed in November 2023 that Turkey was in talks with the United Kingdom and Spain over procuring 40 Typhoons. Any sale would require Germany's approval, which has not been forthcoming. President Erdoğan visited Germany after the negotiations were revealed, but did not raise the issue with German Chancellor Olaf Scholz.
Others
Other countries have expressed interest in the fighter, including Serbia, Bangladesh, Colombia, and Ukraine.
The following countries have formally eliminated the Typhoon from their respective fighter programs: Belgium, Denmark, Singapore, South Korea, Switzerland, and Finland.
Variants
The Eurofighter is produced in single-seat and twin-seat variants. The twin-seat variant is not used operationally, but only for training, though it is combat capable. The aircraft has been manufactured to three major standards: seven Development Aircraft (DA), seven production-standard Instrumented Production Aircraft (IPA) for further system development, and a continuing number of series production aircraft. The production aircraft are now operational with the partner nations' air forces.
The Tranche 1 aircraft were produced from 2000 onwards. Aircraft capabilities are being increased incrementally, with each software upgrade resulting in a different standard, known as blocks. With the introduction of the Block 5 standard, the R2 retrofit programme began to bring all Tranche 1 aircraft to that standard.
Operators
Summary
Current operators
Austrian Air Force – 15 delivered and 3 more ordered as of October 2022.
Zeltweg Air Base
Überwachungsgeschwader
German Air Force – 143 ordered, and all delivered. As of 30 November 2023, 141 are in service. 38 Tranche 4 aircraft are on order under Project Quadriga, and 15 aircraft are to be upgraded to Typhoon EW (Electronic Warfare) standard.
Nörvenich Air Base
Taktisches Luftwaffengeschwader 31 "Boelcke", 311 & 312 Staffel
Wittmundhafen Air Base
Taktisches Luftwaffengeschwader 71 "Richthofen", 711 Staffel
Laage Air Base
Taktisches Luftwaffengeschwader 73 "Steinhoff", 731 & 732 Staffel. (OCU formation)
Neuburg Air Base
Taktisches Luftwaffengeschwader 74, 741 & 742 Staffel
Ingolstadt Manching Airport
Wehrtechnische Dienststelle 61
Italian Air Force – 96 ordered with 96 delivered and 93 in operation as of August 2024. An additional 24 aircraft were ordered on 23 December 2024 for €7.5 billion.
Grosseto Air Base, 4º Stormo "Amedeo d'Aosta" (4th Wing)
9° Gruppo Caccia (9th Fighter Squadron)
20° Gruppo OCU Caccia (20th Fighter Operational Conversion Squadron)
Gioia del Colle Air Base, 36° Stormo "Riccardo Hellmuth Seidl" (36th Wing)
10° Gruppo Caccia (10th Fighter Squadron)
12° Gruppo Caccia (12th Fighter Squadron)
Trapani Air Base, 37° Stormo "Cesare Toschi" (37th Wing)
18° Gruppo Caccia (18th Fighter Squadron)
Istrana Air Base, 51° Stormo "Ferruccio Serafini" (51st Wing)
132° Gruppo Caccia (132nd Fighter Squadron)
Pratica di Mare Air Base, Reparto Sperimentale Volo
Kuwait Air Force – 28 ordered with 13 delivered as of 31 October 2023.
Ali Al Salem AB, Al Jahra District
7 Squadron
18 Squadron
Royal Air Force of Oman – 12 ordered in December 2012 with all delivered by June 2018.
RAFO Adam, Ad Dakhiliyah
No.8 Squadron
Qatar Emiri Air Force – 24 ordered, 10 delivered as of March 2023.
Tamim Airbase, Dukhan
7 Squadron
12 Squadron
RAF Coningsby, Lincolnshire, United Kingdom (from July 2020)
No. 12 Squadron RAF, joint RAF/Qatar Emiri Air Force squadron
Royal Saudi Air Force – 71 aircraft in operation as of June 2018 from 72 delivered.
King Fahad Air Base, Taif
No. 3 Squadron
No. 10 Squadron
No. 80 Squadron
Spanish Air and Space Force – 73 ordered, all of which had been delivered by October 2020, with 70 in operation as of October 2020. A further 45 aircraft are on order as of 13 September 2023. On 20 December 2024, the Spanish government signed a contract with the Munich-based NATO Eurofighter and Tornado Management Agency (NETMA) for the acquisition of an additional 25 Eurofighter aircraft, known as the Halcon II programme.
Seville-Morón Air Base, Ala 11
111 Escuadrón
113 Escuadrón, OCU Tactical pilot training and evaluation
Albacete-Los Llanos Air Base, Ala 14
142 Escuadrón
Past Units
Armament and Experimentation Logistics Center
Royal Air Force – 160 ordered, all of which had been delivered by September 2019. As of 21 August 2023, the RAF has 137 aircraft, with 102 in service.
RAF Coningsby, Lincolnshire, England
No. 3 (F) Squadron
No. XI (F) Squadron
No. 12 Squadron, joint RAF/Qatar Air Force squadron
No. 29 Squadron, OCU Tactical pilot training and evaluation
No. 41 Test and Evaluation Squadron
RAF Lossiemouth, Moray, Scotland
No. 1 (F) Squadron
No. II (AC) Squadron
No. 6 Squadron
No. IX (B) Squadron
RAF Mount Pleasant, East Falkland, Falkland Islands
No. 1435 Flight
Past Units
No. 17 (R) Test & Evaluation Squadron, Operational Evaluation Unit (Operated between 2003 and 2013)
Accidents
On 21 November 2002, the Spanish twin-seat Typhoon prototype DA-6 crashed due to a double engine flameout caused by surges of the two engines at 45,000 ft. The two crew members escaped unhurt; the aircraft crashed in a military test range near Toledo, some distance from its base at Getafe Air Base.
On 23 April 2008, an RAF Typhoon FGR4 from 17 Squadron at RAF Coningsby (ZJ943) made a wheels-up landing at the US Navy's NAWS China Lake in the United States. The aircraft was severely damaged; however, the pilot did not sustain any significant injury. It is thought the pilot may have forgotten to deploy the undercarriage, or that for some reason he was not alerted to the undercarriage not having been deployed.
On 24 August 2010, a Spanish twin-seat Typhoon crashed at Spain's Morón Air Base moments after take-off for a routine training flight. It was being piloted by an RSAF pilot, who was killed, and a Spanish Air Force major, who ejected safely. In September 2010 the German Air Force grounded its 55 aircraft, and the RAF temporarily grounded all Typhoon training flights, amid concerns that the pilot had ejected successfully but then fallen to his death. On 21 September, the RAF announced that the harness system had been sufficiently modified to enable routine flying from RAF Coningsby. The Austrian Air Force also said all its aircraft had been cleared for flight. The ejection seat manufacturer Martin-Baker commented: "...under certain conditions, the quick release fitting could be unlocked using the palm of the hand, rather than the thumb and fingers, and that this posed a risk of inadvertent release", adding that a modification had been rapidly developed and approved "to eliminate this risk" and was being fitted to all Typhoon seats.
On 9 June 2014, the Spanish Air Force announced that a Typhoon had crashed at Spain's Morón Air Base on landing after a routine training flight. The sole pilot, Captain Fernando Lluna Carrascosa of the Spanish Air Force, who had over 600 Eurofighter flying hours, died in the crash.
On 23 June 2014, a Typhoon of the German Air Force suffered a mid-air collision with a Learjet 35A near Olsberg, Germany. The severely damaged Eurofighter made a safe landing at Nörvenich Air Base, while the Learjet crashed, killing the two people on board.
On 1 September 2017, a RAF Typhoon overran the runway on landing at Pardubice Airport, Czech Republic, after diverting for bad weather.
On 14 September 2017, a RSAF aircraft crashed on a combat mission in Yemen's Abhyan province, killing its pilot. According to the Saudi Government, the aircraft crashed due to technical reasons.
On 24 September 2017, an Italian Air Force aircraft crashed during an airshow in Terracina, Lazio, Italy. The pilot did not eject and died in the accident. The Italian Air Force said the jet completed a loop but then failed to get enough lift as it approached sea level and hit the water just a few hundred metres offshore.
On 12 October 2017, a Spanish Air Force Typhoon crashed near its base at Los Llanos Air Base, Albacete, Spain, when returning from the military parade for the Spanish National Day. The pilot was killed.
On 24 June 2019, two German Air Force aircraft collided mid-air during an exercise in the region of Müritz in Mecklenburg-Vorpommern in northern Germany. Both aircraft were lost while the pilots ejected. The two planes were based at Laage, home to the "Steinhoff" Tactical Air Force Wing 73. Neither plane was carrying weapons. One of the pilots died.
On 14 December 2022, an Italian Air Force Typhoon of 37° Stormo crashed during the landing sequence into Trapani-Birgi Air Base in Sicily. The aircraft had been conducting a training mission with another Typhoon which landed safely. The pilot was killed during the crash.
On 24 July 2024, an Italian Air Force Typhoon crashed during a military training exercise in the Douglas Daly region of the Northern Territory, in outback Australia, during Exercise Pitch Black. The pilot ejected safely and was taken to Royal Darwin Hospital by helicopter.
Aircraft on display
Germany: 98+29 EF2000 Prototype DA-1 on display at the Deutsches Museum Flugwerft Schleissheim, Munich.
Italy: MMX603 EF2000 Prototype DA-7 on display at Cameri Air Base, Cameri.
United Kingdom: ZH588 EF2000 Prototype DA-2 on display at the Royal Air Force Museum London, Hendon, England (Ajay, Srivastava. "New Display at Royal Air Force Museum". Flight Journal, Volume 13, Issue 3, June 2008). ZH590 EF2000(T) Prototype DA-4 was on display at the Imperial War Museum Duxford, Cambridge, England, in Hangar 3: Air and Sea, and was due to be transferred to the Newark Air Museum in 2020; however, it now resides at RAF Cosford after the MOD decided to use it as an instructional airframe.
Specifications
Porcelain (https://en.wikipedia.org/wiki/Porcelain)

Porcelain, also called china, is a ceramic material made by heating raw materials, generally including kaolinite, in a kiln to temperatures between 1,200 and 1,400 °C (2,200 and 2,600 °F). The greater strength and translucence of porcelain, relative to other types of pottery, arise mainly from vitrification and the formation of the mineral mullite within the body at these high temperatures. End applications include tableware, decorative ware such as figurines, and products in technology and industry such as electrical insulators and laboratory ware.
The manufacturing process used for porcelain is similar to that used for earthenware and stoneware, the two other main types of pottery, although it can be more challenging to produce. It has usually been regarded as the most prestigious type of pottery due to its delicacy, strength, and high degree of whiteness. It is frequently both glazed and decorated.
Though definitions vary, porcelain can be divided into three main categories: hard-paste, soft-paste, and bone china. The categories differ in the composition of the body and the firing conditions.
Porcelain slowly evolved in China and was finally achieved (depending on the definition used) at some point about 2,000 to 1,200 years ago. It slowly spread to other East Asian countries, then to Europe, and eventually to the rest of the world. The European name, porcelain in English, comes from the old Italian porcellana (cowrie shell) because of its resemblance to the surface of the shell. Porcelain is also referred to as china or fine china in some English-speaking countries, as it was first seen in imports from China during the 17th century. Properties associated with porcelain include low permeability and elasticity; considerable strength, hardness, whiteness, translucency, and resonance; and a high resistance to corrosive chemicals and thermal shock.
Porcelain has been described as being "completely vitrified, hard, impermeable (even before glazing), white or artificially coloured, translucent (except when of considerable thickness), and resonant". However, the term "porcelain" lacks a universal definition and has "been applied in an unsystematic fashion to substances of diverse kinds that have only certain surface-qualities in common".
Traditionally, East Asia only classifies pottery into low-fired wares (earthenware) and high-fired wares (often translated as porcelain), the latter also including what Europeans call "stoneware", which is high-fired but not generally white or translucent. Terms such as "proto-porcelain", "porcellaneous", or "near-porcelain" may be used in cases where the ceramic body approaches whiteness and translucency.
In 2021, the global market for porcelain tableware was estimated to be worth US$22.1 billion.
Types
Hard paste
Hard-paste porcelain was invented in China, and it was also used in Japanese porcelain. Most of the finest quality porcelain wares are made of this material. The earliest European porcelains were produced at the Meissen factory in the early 18th century; they were formed from a paste composed of kaolin and alabaster and fired at temperatures up to 1,400 °C (2,552 °F) in a wood-fired kiln, producing a porcelain of great hardness, translucency, and strength. Later, the composition of the Meissen hard paste was changed, and the alabaster was replaced by feldspar and quartz, allowing the pieces to be fired at lower temperatures. Kaolinite, feldspar, and quartz (or other forms of silica) continue to constitute the basic ingredients for most continental European hard-paste porcelains.
Soft paste
Soft-paste porcelains date back to early attempts by European potters to replicate Chinese porcelain by using mixtures of clay and frit. Soapstone and lime are known to have been included in these compositions. These wares were not yet actual porcelain wares, as they were neither hard nor vitrified by firing kaolin clay at high temperatures. As these early formulations suffered from high pyroplastic deformation, or slumping in the kiln at high temperatures, they were uneconomic to produce and of low quality.
Formulations were later developed based on kaolin with quartz, feldspars, nepheline syenite, or other feldspathic rocks. These are technically superior and continue to be produced. Soft-paste porcelains are fired at lower temperatures than hard-paste porcelains; therefore, these wares are generally less hard than hard-paste porcelains.
Bone china
Although originally developed in England in 1748 to compete with imported porcelain, bone china is now made worldwide, including in China. The English had read the letters of Jesuit missionary François Xavier d'Entrecolles, which described Chinese porcelain manufacturing secrets in detail. One writer has speculated that a misunderstanding of the text could possibly have been responsible for the first attempts to use bone-ash as an ingredient in English porcelain, although this is not supported by modern researchers and historians.
Traditionally, English bone china was made from two parts of bone ash, one part of kaolin, and one part of china stone, although the latter has been replaced by feldspars from non-UK sources.
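Expressed as percentages, the traditional recipe works out as follows (a simple conversion of the "parts" quoted above, assuming parts by weight):

```python
# Traditional English bone china recipe, converted from parts to percentages.
recipe_parts = {"bone ash": 2, "kaolin": 1, "china stone": 1}
total_parts = sum(recipe_parts.values())

for material, parts in recipe_parts.items():
    print(f"{material}: {100 * parts / total_parts:.0f}%")
# bone ash: 50%, kaolin: 25%, china stone: 25%
```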
Materials
Kaolin is the primary material from which porcelain is made, even though clay minerals might account for only a small proportion of the whole. The word paste is an old term for both unfired and fired materials. A more common terminology for the unfired material is "body"; for example, when buying materials a potter might order an amount of porcelain body from a vendor.
The composition of porcelain is highly variable, but the clay mineral kaolinite is often a raw material. Other raw materials can include feldspar, ball clay, glass, bone ash, steatite, quartz, petuntse and alabaster.
The clays used are often described as being long or short, depending on their plasticity. Long clays are cohesive (sticky) and have high plasticity; short clays are less cohesive and have lower plasticity. In soil mechanics, plasticity is determined by measuring the increase in content of water required to change a clay from a solid state bordering on the plastic, to a plastic state bordering on the liquid, though the term is also used less formally to describe the ease with which a clay may be worked.
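The soil-mechanics measure described above is commonly expressed as a plasticity index: the difference in water content (percent by dry weight) between the liquid limit and the plastic limit. A minimal sketch follows; the limit values are illustrative assumptions, not measured data from the source:

```python
def plasticity_index(liquid_limit: float, plastic_limit: float) -> float:
    """Span of water content over which a clay remains workable (plastic)."""
    return liquid_limit - plastic_limit

# Illustrative values only: a "long" clay has a wide workable range,
# a "short" clay a narrow one.
print(plasticity_index(liquid_limit=60.0, plastic_limit=30.0))  # 30.0 -> long clay
print(plasticity_index(liquid_limit=38.0, plastic_limit=28.0))  # 10.0 -> short clay
```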
Clays used for porcelain are generally of lower plasticity than many other pottery clays. They wet very quickly, meaning that small changes in the content of water can produce large changes in workability. Thus, the range of water content within which these clays can be worked is very narrow and consequently must be carefully controlled.
Production
Forming
Porcelain can be made using all the shaping techniques for pottery.
Glazing
Biscuit porcelain is unglazed porcelain treated as a finished product, mostly for figures and sculpture. Unlike their lower-fired counterparts, porcelain wares do not need glazing to render them impermeable to liquids and for the most part are glazed for decorative purposes and to make them resistant to dirt and staining. Many types of glaze, such as the iron-containing glaze used on the celadon wares of Longquan, were designed specifically for their striking effects on porcelain.
Decoration
Porcelain often receives underglaze decoration using pigments that include cobalt oxide and copper, or overglaze enamels, allowing a wider range of colours. Like many earlier wares, modern porcelains are often biscuit-fired at around 1,000 °C (1,830 °F), coated with glaze and then sent for a second glaze-firing at a temperature of about 1,300 °C (2,370 °F) or greater. Another early method is "once-fired", where the glaze is applied to the unfired body and the two fired together in a single operation.
Firing
In this process, "green" (unfired) ceramic wares are heated to high temperatures in a kiln to permanently set their shapes, vitrify the body and the glaze. Porcelain is fired at a higher temperature than earthenware so that the body can vitrify and become non-porous. Many types of porcelain in the past have been fired twice or even three times, to allow decoration using less robust pigments in overglaze enamel.
History
Chinese porcelain
Porcelain was invented in China over a centuries-long development period beginning with "proto-porcelain" wares dating from the Shang dynasty (1600–1046 BCE). By the time of the Eastern Han dynasty (25–220 CE) these early glazed ceramic wares had developed into porcelain, which Chinese defined as high-fired ware. By the late Sui dynasty (581–618 CE) and early Tang dynasty (618–907 CE), the now-standard requirements of whiteness and translucency had been achieved, in types such as Ding ware. The wares were already exported to the Islamic world, where they were highly prized.
Eventually, porcelain and the expertise required to create it began to spread into other areas of East Asia. During the Song dynasty (960–1279 CE), artistry and production had reached new heights. The manufacture of porcelain became highly organised, and the dragon kilns excavated from this period could fire as many as 25,000 pieces at a time, and over 100,000 by the end of the period. While Xing ware is regarded as among the greatest of the Tang dynasty porcelain, Ding ware became the premier porcelain of the Song dynasty. By the Ming dynasty, production of the finest wares for the court was concentrated in a single city, and Jingdezhen porcelain, originally owned by the imperial government, remains the centre of Chinese porcelain production.
By the time of the Ming dynasty (1368–1644 CE), porcelain wares were being exported to Asia and Europe. Some of the most well-known Chinese porcelain art styles arrived in Europe during this era, such as the coveted "blue-and-white" wares. The Ming dynasty controlled much of the porcelain trade, which was expanded to Asia, Africa and Europe via the Silk Road. In 1517, Portuguese merchants began direct trade by sea with the Ming dynasty, and in 1598, Dutch merchants followed.
Some porcelains were more highly valued than others in imperial China. The most valued types can be identified by their association with the court, either as tribute offerings, or as products of kilns under imperial supervision. Since the Yuan dynasty, the largest and best centre of production has made Jingdezhen porcelain. During the Ming dynasty, Jingdezhen porcelain had become a source of imperial pride. The Yongle emperor erected a white porcelain brick-faced pagoda at Nanjing, and an exceptionally smoothly glazed type of white porcelain is peculiar to his reign. Jingdezhen porcelain's fame came to a peak during the Qing dynasty.
Japanese porcelain
Although the Japanese elite were keen importers of Chinese porcelain from early on, they were not able to make their own until the arrival of Korean potters that were taken captive during the Japanese invasions of Korea (1592–1598). They brought an improved type of kiln, and one of them spotted a source of porcelain clay near Arita, and before long several kilns had started in the region. At first their wares were similar to the cheaper and cruder Chinese porcelains with underglaze blue decoration that were already widely sold in Japan; this style was to continue for cheaper everyday wares until the 20th century.
Exports to Europe began around 1660, through the Chinese and the Dutch East India Company, the only Europeans allowed a trading presence. Chinese exports had been seriously disrupted by civil wars as the Ming dynasty fell apart, and the Japanese exports increased rapidly to fill the gap. At first the wares used European shapes and mostly Chinese decoration, as the Chinese had done, but gradually original Japanese styles developed.
Nabeshima ware was produced in kilns owned by the families of feudal lords, and were decorated in the Japanese tradition, much of it related to textile design. This was not initially exported, but used for gifts to other aristocratic families. Imari ware and Kakiemon are broad terms for styles of export porcelain with overglaze "enamelled" decoration begun in the early period, both with many sub-types.
A great range of styles and manufacturing centres were in use by the start of the 19th century, and as Japan opened to trade in the second half of the century, exports expanded hugely and quality generally declined. Much traditional porcelain continues to replicate older methods of production and styles, and there are several modern industrial manufacturers. By the early 1900s, Filipino artisans who had worked in Japanese porcelain centres for much of their lives introduced the craft to the native population of the Philippines, although oral literature from Cebu in the central Philippines holds that porcelain was already being produced by the natives locally during the time of Cebu's early rulers, before the arrival of colonizers in the 16th century.
Korean porcelain
Olive green glaze was introduced in the late Silla dynasty. Most ceramics from Silla are generally leaf-shaped, a very common shape in Korea. Korean celadon comes in a variety of colours, from turquoise to putty. Additionally, in the late 13th century, the inlay technique of expressing pigmented patterns by filling the hollow parts of pottery with white and red clay was frequently used. The main difference from Chinese wares is that many specimens have inlay decoration under the glaze.
Most Korean ceramics from the Joseon dynasty (1392–1910) are of excellent decorative quality. Vessels typically have a melon shape and are asymmetrical.
European porcelain
Imported Chinese porcelains were held in such great esteem in Europe that in English china became a commonly used synonym for the Italian-derived porcelain. The first mention of porcelain in Europe is in Il Milione by Marco Polo in the 13th century. Apart from copying Chinese porcelain in faience (tin glazed earthenware), the soft-paste Medici porcelain in 16th-century Florence was the first real European attempt to reproduce it, with little success.
Early in the 16th century, Portuguese traders returned home with samples of kaolin, which they had discovered in China to be essential in the production of porcelain wares. However, the Chinese techniques and composition used to manufacture porcelain were not yet fully understood. Countless experiments to produce porcelain had unpredictable results and met with failure. In the German state of Saxony, the search concluded in 1708 when Ehrenfried Walther von Tschirnhaus produced a hard, white, translucent type of porcelain with a combination of ingredients, including kaolin and alabaster mined in Colditz, Saxony. It was a closely guarded trade secret of the Saxon enterprise.
In 1712, many of the elaborate Chinese porcelain manufacturing secrets were revealed throughout Europe by the French Jesuit father François Xavier d'Entrecolles and soon published in the Lettres édifiantes et curieuses de Chine par des missionnaires jésuites. The secrets, which d'Entrecolles read about and witnessed in China, were now known and began seeing use in Europe.
Meissen
Von Tschirnhaus along with Johann Friedrich Böttger were employed by Augustus II, King of Poland and Elector of Saxony, who sponsored their work in Dresden and in the town of Meissen. Tschirnhaus had a wide knowledge of science and had been involved in the European quest to perfect porcelain manufacture when, in 1705, Böttger was appointed to assist him in this task. Böttger had originally been trained as a pharmacist; after he turned to alchemical research, he claimed to have known the secret of transmuting dross into gold, which attracted the attention of Augustus. Imprisoned by Augustus as an incentive to hasten his research, Böttger was obliged to work with other alchemists in the futile search for transmutation and was eventually assigned to assist Tschirnhaus. One of the first results of the collaboration between the two was the development of a red stoneware that resembled that of Yixing.
A workshop note records that the first specimen of hard, white and vitrified European porcelain was produced in 1708. At the time, the research was still being supervised by Tschirnhaus; however, he died in October of that year. It was left to Böttger to report to Augustus in March 1709 that he could make porcelain. For this reason, credit for the European discovery of porcelain is traditionally ascribed to him rather than Tschirnhaus.
The Meissen factory was established in 1710 after the development of a kiln and a glaze suitable for use with Böttger's porcelain, which required firing at temperatures of up to 1,400 °C (2,552 °F) to achieve translucence. Meissen porcelain was once-fired, or green-fired. It was noted for its great resistance to thermal shock; a visitor to the factory in Böttger's time reported having seen a white-hot teapot being removed from the kiln and dropped into cold water without damage. Although widely disbelieved, this has been replicated in modern times.
Russian porcelain
In 1744, Elizabeth of Russia signed an agreement to establish the first Russian porcelain manufactory; previously porcelain had to be imported. The technology of making "white gold" was carefully hidden by its creators. Peter the Great had tried to reveal the "big porcelain secret", sending an agent to the Meissen factory and finally hiring a porcelain master from abroad, but ultimately the Russian factory relied on the research of the Russian scientist Dmitry Ivanovich Vinogradov. His development of porcelain manufacturing technology was not based on secrets learned through third parties, but was the result of painstaking work and careful analysis. Thanks to this, by 1760 the Imperial Porcelain Factory in Saint Petersburg had become one of the major European factories producing tableware, and later porcelain figurines. Eventually other factories opened: Gardner porcelain, Dulyovo (1832), Kuznetsovsky porcelain, Popovsky porcelain, and Gzhel.
During the twentieth century, under Soviet governments, ceramics continued to be a popular artform, supported by the state, with an increasingly propagandist role. One artist, who worked at the Baranovsky Porcelain Factory and at the Experimental Ceramic and Artistic Plant in Kyiv, was Oksana Zhnikrup, whose porcelain figures of the ballet and the circus were widely known.
Soft paste porcelain
The pastes produced by combining clay and powdered glass (frit) were called Frittenporzellan in Germany and frita in Spain. In France they were known as pâte tendre and in England as "soft-paste". They appear to have been given this name because they do not easily retain their shape in the wet state, or because they tend to slump in the kiln under high temperature, or because the body and the glaze can be easily scratched.
France
Experiments at Rouen produced the earliest soft-paste in France, but the first important French soft-paste porcelain was made at the Saint-Cloud factory before 1702. Soft-paste factories were established with the Chantilly manufactory in 1730 and at Mennecy in 1750. The Vincennes porcelain factory was established in 1740, moving to larger premises at Sèvres in 1756. Vincennes soft-paste was whiter and freer of imperfections than any of its French rivals, which put Vincennes/Sèvres porcelain in the leading position in France and throughout the whole of Europe in the second half of the 18th century.
Italy
Doccia porcelain of Florence was founded in 1735 and remains in production, unlike Capodimonte porcelain which was moved from Naples to Madrid by its royal owner, after producing from 1743 to 1759. After a gap of 15 years Naples porcelain was produced from 1771 to 1806, specializing in Neoclassical styles. All these were very successful, with large outputs of high-quality wares. In and around Venice, Francesco Vezzi was producing hard-paste from around 1720 to 1735; survivals of Vezzi porcelain are very rare, but less so than from the Hewelke factory, which only lasted from 1758 to 1763. The soft-paste Cozzi factory fared better, lasting from 1764 to 1812. The Le Nove factory produced from about 1752 to 1773, then was revived from 1781 to 1802.
England
The first soft-paste in England was demonstrated by Thomas Briand to the Royal Society in 1742 and is believed to have been based on the Saint-Cloud formula. In 1749, Thomas Frye took out a patent on a porcelain containing bone ash. This was the first bone china, subsequently perfected by Josiah Spode. William Cookworthy discovered deposits of kaolin in Cornwall, and his factory at Plymouth, established in 1768, used kaolin and china stone to make hard-paste porcelain with a body composition similar to that of the Chinese porcelains of the early 18th century. But the great success of English ceramics in the 18th century was based on soft-paste porcelain, and refined earthenwares such as creamware, which could compete with porcelain, and had devastated the faience industries of France and other continental countries by the end of the century. Most English porcelain from the late 18th century to the present is bone china.
In the twenty-five years after Briand's demonstration, a number of factories were founded in England to make soft-paste tableware and figures:
Chelsea (1743)
Bow (1745)
St James's (1748)
Bristol porcelain (1748)
Longton Hall (1750)
Royal Crown Derby (1750 or 1757)
Royal Worcester (1751)
Lowestoft porcelain (1757)
Wedgwood (1759)
Spode (1767)
Applications other than decorative and tableware
Electric insulators
Porcelain has been used for electrical insulators since at least 1878, with another source reporting earlier use of porcelain insulators on the telegraph line between Frankfurt and Berlin. It is widely used for insulators in electrical power transmission systems due to the high stability of its electrical, mechanical and thermal properties, even in harsh environments.
A body for electrical porcelain typically contains varying proportions of ball clay, kaolin, feldspar, quartz, calcined alumina and calcined bauxite. A variety of secondary materials can also be used, such as binders which burn off during firing. UK manufacturers typically fired the porcelain to a maximum of 1,200 °C in an oxidising atmosphere, whereas reduction firing is standard practice among Chinese manufacturers.
In 2018, a porcelain bushing insulator manufactured by NGK in Handa, Aichi Prefecture, Japan was certified as the world's largest ceramic structure by Guinness World Records. It is 11.3 m in height and 1.5 m in diameter.
The global market for high-voltage insulators was estimated to be worth US$4.95 billion in 2015, of which porcelain accounts for just over 48%.
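From the figures above, porcelain's share of that market can be estimated directly (a derivation; the source gives the share only as "just over 48%"):

```python
market_total_usd = 4.95e9  # 2015 high-voltage insulator market
porcelain_share = 0.48     # "just over 48%"

print(f"~US${market_total_usd * porcelain_share / 1e9:.2f} billion")  # ~US$2.38 billion
```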
Chemical porcelain
Chemical porcelain is a type of porcelain characterised by low thermal expansion, high mechanical strength and high chemical resistance. It is used for laboratory ware, such as reaction vessels, combustion boats, evaporating dishes and Büchner funnels. Raw materials for the body include kaolin, quartz, feldspar, calcined alumina, and possibly also low percentages of other materials. A number of international standards specify the properties of the porcelain, such as ASTM C515.
Tiles
A porcelain tile has been defined as 'a ceramic mosaic tile or paver that is generally made by the dust-pressed method of a composition resulting in a tile that is dense, fine-grained, and smooth with sharply formed face, usually impervious and having colors of the porcelain type which are usually of a clear, luminous type or granular blend thereof.' Manufacturers are found across the world with Italy being the global leader, producing over 380 million square metres in 2006.
Historic examples of rooms decorated entirely in porcelain tiles can be found in several palaces, including ones at the Galleria Sabauda in Turin, the Museo di Doccia in Sesto Fiorentino, the Museo di Capodimonte in Naples, the Royal Palace of Madrid and the nearby Royal Palace of Aranjuez, as well as the Porcelain Tower of Nanjing. More recent examples include the Dakin Building in Brisbane, California and the Gulf Building in Houston, Texas, which when constructed in 1929 had a porcelain logo on its exterior.
Sanitaryware
Because of its durability, inability to rust and impermeability, glazed porcelain has been in use for personal hygiene since at least the third quarter of the 17th century. During this period, porcelain chamber pots were commonly found in higher-class European households, and the term "bourdaloue" was used as the name for the pot.
Whilst modern sanitaryware, such as closets and washbasins, is made of ceramic materials, porcelain is no longer used and vitreous china is the dominant material. Bath tubs are not made of porcelain, but of enamel on a metal base, usually of cast iron. Porcelain enamel is a marketing term used in the US, and is not porcelain but vitreous enamel.
Dental porcelain
Dental porcelain is used for crowns, bridges and veneers. A typical formulation of dental porcelain is 70–85% feldspar, 12–25% quartz, 3–5% kaolin, up to 15% glass and around 1% colourants.
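As a sanity check on the quoted ranges, a nominal batch drawn from them sums to roughly 100% by weight; the specific values below are illustrative assumptions, not a laboratory recipe:

```python
nominal_batch_pct = {
    "feldspar": 78.0,   # quoted range 70-85%
    "quartz": 16.0,     # quoted range 12-25%
    "kaolin": 4.0,      # quoted range 3-5%
    "glass": 1.0,       # "up to 15%"
    "colourants": 1.0,  # "around 1%"
}
print(f"Total: {sum(nominal_batch_pct.values()):.0f}%")  # -> 100%
```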
Manufacturers
The Americas
Brazil
Germer Porcelanas Finas
Porcelana Schmidt
United States
Blue Ridge
CoorsTek, Inc.
Franciscan
Lenox
Lotus Ware
Pickard China
Asia
China
Ding ware
Jingdezhen porcelain
Iran
Maghsoud Group of Factories, (1993–present)
Zarin Iran Porcelain Industries, (1881–present)
Japan
Hirado ware
Kakiemon
Nabeshima ware
Narumi
Noritake
Malaysia
Royal Selangor
South Korea
Haengnam Chinaware
Hankook Chinaware
Sri Lanka
Dankotuwa Porcelain
Noritake Lanka Porcelain
Royal Fernwood Porcelain
Taiwan
Franz Collection
Turkey
Yildiz Porselen (1890–1936, 1994–present)
Kütahya Porselen (1970–present)
Güral Porselen (1989–present)
Porland Porselen (1976–present)
Istanbul Porselen (1963 – early 1990s)
Sümerbank Porselen (1957–1994)
United Arab Emirates
RAK Porcelain
Vietnam
Minh Long I porcelain (1970–present)
Bát Tràng porcelain (1352–present)
Europe
Austria
Vienna Porcelain Manufactory, 1718–1864
Vienna Porcelain Manufactory Augarten, 1923–present
Croatia
Inkerpor (1953–present)
Czech Republic
Haas & Czjzek, Horní Slavkov (1792–2011)
Thun 1794, Klášterec nad Ohří (1794–present)
Český porcelán a.s., Dubí, Eichwelder Porzellan und Ofenfabriken Bloch & Co. Böhmen (1864–present)
Rudolf Kämpf, Nové Sedlo (Sokolov District) (1907–present)
Denmark
Aluminia
Bing & Grøndahl
Denmark porcelain
P. Ipsens Enke
Kastrup Vaerk
Kronjyden
Porcelænshaven
Royal Copenhagen (1775–present)
GreenGate
Finland
Arabia
France
Saint-Cloud porcelain (1693–1766)
Chantilly porcelain (1730–1800)
Vincennes porcelain (1740–1756)
Mennecy-Villeroy porcelain (1745–1765)
Sèvres porcelain (1756–present)
Revol porcelain (1789–present)
Limoges porcelain
Haviland porcelain
Germany
Current porcelain manufacturers in Germany
Hungary
Hollóháza Porcelain Manufactory (1777–present)
Herend Porcelain Manufacture (1826–present)
Zsolnay Porcelain Manufacture (1853–present)
Italy
Richard-Ginori 1735 Manifattura di Doccia (1735–present)
Capodimonte porcelain (1743–1759)
Naples porcelain (1771–1806)
Manifattura Italiana Porcellane Artistiche Fabris (1922–1972)
Mangani SRL, Porcellane d'Arte (Florence)
Lithuania
Jiesia
Netherlands
(1883–1916)
Loosdrechts Porselein
Weesp Porselein
Norway
Egersund porcelain
Figgjo (1941–present)
Herrebøe porcelain
Porsgrund
Stavangerflint
Poland
AS Ćmielów
Fabryka Fajansu i Porcelany
Polskie Fabryki Porcelany "Ćmielów" i "Chodzież" S.A.
Kristoff Porcelana
Lubiana S.A.
Portugal
Vista Alegre
Sociedade Porcelanas de Alcobaça
Costa Verde (company), located in the district of Aveiro
Russia
Imperial Porcelain Factory, Saint Petersburg (1744–present)
Verbilki Porcelain (1766–present), Verbilki near Taldom
Gzhel ceramics (1802–present), Gzhel
Dulevo Farfor (1832–present), Likino-Dulyovo
Spain
Buen Retiro Royal Porcelain Factory (1760–1812)
Real Fábrica de Sargadelos (1808–present, intermittently)
Porvasal
Sweden
Rörstrand
Gustavsberg porcelain
Switzerland
Suisse Langenthal
United Kingdom
Aynsley China (1775–present)
Belleek (1884–present)
Bow porcelain factory (1747–1776)
Caughley porcelain
Chelsea porcelain factory (c. 1745; merged with Derby in 1770)
Coalport porcelain
Davenport
Goss crested china
Liverpool porcelain
Longton Hall porcelain
Lowestoft Porcelain Factory
Mintons Ltd (1793–1968; merged with Royal Doulton)
Nantgarw Pottery
New Hall porcelain
Plymouth Porcelain
Rockingham Pottery
Royal Crown Derby (1750/57–present)
Royal Doulton (1815–2009; acquired by Fiskars)
Royal Worcester (1751–2008; acquired by Portmeirion Pottery)
Spode (1767–2008; acquired by Portmeirion Pottery)
Saint James's Factory (or "Girl-in-a-Swing", 1750s)
Swansea porcelain
Vauxhall porcelain
Wedgwood (factory 1759–present; porcelain 1812–1829 and modern era; acquired by Fiskars)
| Technology | Material and chemical | null |
167815 | https://en.wikipedia.org/wiki/Douglas%20DC-3 | Douglas DC-3 | The Douglas DC-3 is a propeller-driven airliner manufactured by the Douglas Aircraft Company. It had a lasting effect on the airline industry of the 1930s and 1940s and on World War II.
It was developed as a larger, improved 14-bed sleeper version of the Douglas DC-2.
It is a low-wing metal monoplane with conventional landing gear, powered by two radial piston engines of . Although the DC-3s originally built for civil service had the Wright R-1820 Cyclone, later civilian DC-3s used the Pratt & Whitney R-1830 Twin Wasp engine.
The DC-3 has a cruising speed of , a capacity of 21 to 32 passengers or 6,000 lbs (2,700 kg) of cargo, and a range of , and can operate from short runways.
The DC-3 had many exceptional qualities compared to previous aircraft. It was fast, had a good range, was more reliable, and carried passengers in greater comfort. Before World War II, it pioneered many air travel routes. It was able to cross the continental United States from New York to Los Angeles in 18 hours, with only three stops.
It is one of the first airliners that could profitably carry only passengers without relying on mail subsidies. In 1939, at the peak of its dominance in the airliner market, around ninety percent of airline flights on the planet were by a DC-3 or some variant.
Following the war, the airliner market was flooded with surplus transport aircraft, and the DC-3 was no longer competitive because it was smaller and slower than aircraft built during the war. It was made obsolete on main routes by more advanced types such as the Douglas DC-4 and Convair 240, but the design proved adaptable and was still useful on less commercially demanding routes.
Civilian DC-3 production ended in 1943 at 607 aircraft. Military versions, including the C-47 Skytrain (the Dakota in British RAF service), and Soviet- and Japanese-built versions, brought total production to over 16,000.
Many continued to be used in a variety of niche roles; 2,000 DC-3s and military derivatives were estimated to be still flying in 2013; by 2017 more than 300 were still flying. As of 2023 it is estimated about 150 are still flying.
Design and development
"DC" stands for "Douglas Commercial". The DC-3 was the culmination of a development effort that began after an inquiry from Transcontinental and Western Airlines (TWA) to Donald Douglas. TWA's rival in transcontinental air service, United Airlines, was starting service with the Boeing 247, and Boeing refused to sell any 247s to other airlines until United's order for 60 aircraft had been filled. TWA asked Douglas to design and build an aircraft that would allow TWA to compete with United. Douglas' design, the 1933 DC-1, was promising, and led to the DC-2 in 1934. The DC-2 was a success, but with room for improvement.
The DC-3 resulted from a marathon telephone call from American Airlines CEO C. R. Smith to Donald Douglas, when Smith persuaded a reluctant Douglas to design a sleeper aircraft based on the DC-2 to replace American's Curtiss Condor II biplanes. The DC-2's cabin was wide, too narrow for side-by-side berths. Douglas agreed to go ahead with development only after Smith informed him of American's intention to purchase 20 aircraft. The new aircraft was engineered by a team led by chief engineer Arthur E. Raymond over the next two years, and the prototype DST (Douglas Sleeper Transport) first flew on December 17, 1935 (the 32nd anniversary of the Wright Brothers' flight at Kitty Hawk) with Douglas chief test pilot Carl Cover at the controls. Its cabin was wide, and a version with 21 seats instead of the 14–16 sleeping berths of the DST was given the designation DC-3. No prototype was built, and the first DC-3 built followed seven DSTs off the production line for delivery to American Airlines.
The DC-3 and DST popularized air travel in the United States. Eastbound transcontinental flights could cross the U.S. in about 15 hours with three refueling stops, while westbound trips against the wind took hours. A few years earlier, such a trip entailed short hops in slower and shorter-range aircraft during the day, coupled with train travel overnight.
Several radial engines were offered for the DC-3. Early-production civilian aircraft used either the 9-cylinder Wright R-1820 Cyclone 9 or the 14-cylinder Pratt & Whitney R-1830 Twin Wasp, but the Twin Wasp was chosen for most military versions and was also used by most DC-3s converted from military service. Five DC-3S Super DC-3s with Pratt & Whitney R-2000 Twin Wasps were built in the late 1940s, three of which entered airline service.
Production
Total production including all military variants was 16,079. More than 400 remained in commercial service in 1998. Production was:
607 civilian variants
10,048 military C-47 and C-53 derivatives built at Santa Monica, California, Long Beach, California, and Oklahoma City
4,937 built under license in the Soviet Union (1939–1950) as the Lisunov Li-2 (NATO reporting name: Cab)
487 Mitsubishi Kinsei-engined aircraft built by Showa and Nakajima in Japan (1939–1945), as the L2D Type 0 transport (Allied codename Tabby)
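As a quick arithmetic check, the variant counts above do sum to the stated total of 16,079. A minimal sketch (the figures are simply those quoted in this list, not independent data):

```python
# Sanity-check the DC-3 production breakdown against the stated total.
# All figures are the ones quoted above; nothing here is independent data.
variant_counts = {
    "civilian DC-3/DST": 607,
    "military C-47/C-53 derivatives": 10_048,
    "Lisunov Li-2 (USSR, licence-built)": 4_937,
    "Showa/Nakajima L2D (Japan, licence-built)": 487,
}

total = sum(variant_counts.values())
assert total == 16_079, f"unexpected total: {total}"
print(total)  # -> 16079
```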
Production of DSTs ended in mid-1941 and civilian DC-3 production ended in early 1943, although dozens of the DSTs and DC-3s ordered by airlines and produced between 1941 and 1943 were pressed into US military service while still on the production line. Military versions were produced until the end of the war in 1945. A larger, more powerful Super DC-3 was launched in 1949 to positive reviews, but by then the civilian market was flooded with second-hand C-47s, many of which had been converted to passenger and cargo versions. Only five Super DC-3s were built, and three of them were delivered for commercial use. The prototype Super DC-3 served the US Navy with the designation YC-129 alongside 100 R4Ds that had been upgraded to the Super DC-3 specification.
Turboprop conversions
From the early 1950s, some DC-3s were modified to use Rolls-Royce Dart engines, as in the Conroy Turbo Three. Other conversions featured Armstrong Siddeley Mamba or Pratt & Whitney PT6A turbines.
The Greenwich Aircraft Corp DC-3-TP is a conversion with an extended fuselage and with Pratt & Whitney Canada PT6A-65AR or PT6A-67R engines fitted.
The Basler BT-67 is a conversion of the DC-3/C-47. Basler refurbishes C-47s and DC-3s at Oshkosh, Wisconsin, fitting them with Pratt & Whitney Canada PT6A-67R turboprop engines, lengthening the fuselage by with a fuselage plug ahead of the wing, and locally strengthening the airframe.
South Africa-based Braddick Specialised Air Services International (commonly referred to as BSAS International) has also performed Pratt & Whitney PT6 turboprop conversions, having modified over 50 DC-3/C-47s to 65ARTP, 67RTP and 67FTP configurations.
Operational history
American Airlines inaugurated passenger service on June 26, 1936, with simultaneous flights from Newark, New Jersey and Chicago, Illinois. Early U.S. airlines like American, United, TWA, Eastern, and Delta ordered over 400 DC-3s. These fleets paved the way for the modern American air travel industry, which eventually replaced trains as the favored means of long-distance travel across the United States. A nonprofit group, Flagship Detroit Foundation, continues to operate the only original American Airlines Flagship DC-3 with air show and airport visits throughout the U.S.
In 1936, KLM Royal Dutch Airlines received its first DC-3, which replaced the DC-2 in service from Amsterdam via Batavia (now Jakarta) to Sydney, by far the world's longest scheduled route at the time. In total, KLM bought 23 DC-3s before the war broke out in Europe. In 1941, a China National Aviation Corporation (CNAC) DC-3 pressed into wartime transportation service was bombed on the ground at Suifu Airfield in China, destroying the outer right wing. The only spare available was that of a smaller Douglas DC-2 in CNAC's workshops. The DC-2's right wing was removed, flown to Suifu under the belly of another CNAC DC-3, and bolted up to the damaged aircraft. After a single test flight, in which it was discovered that it pulled to the right due to the difference in wing sizes, the so-called DC-2½ was flown to safety.
During World War II, many civilian DC-3s were drafted for the war effort and more than 10,000 U.S. military versions of the DC-3 were built, under the designations C-47, C-53, R4D, and Dakota. Peak production was reached in 1944, with 4,853 being delivered. The armed forces of many countries used the DC-3 and its military variants for the transport of troops, cargo, and wounded. Licensed copies of the DC-3 were built in Japan as the Showa L2D (487 aircraft); and in the Soviet Union as the Lisunov Li-2 (4,937 aircraft).
After the war, thousands of cheap ex-military DC-3s became available for civilian use. Cubana de Aviación became the first Latin American airline to offer a scheduled service to Miami when it started its first scheduled international service from Havana in 1945 with a DC-3. Cubana used DC-3s on some domestic routes well into the 1960s.
Douglas developed an improved version, the Super DC-3, with more power, greater cargo capacity, and an improved wing, but with surplus aircraft available cheaply, it failed to sell well in the civilian aviation market. Only five were delivered, three of them to Capital Airlines. The U.S. Navy had 100 of its early R4Ds converted to the Super DC-3 standard during the early 1950s as the Douglas R4D-8/C-117D. The last U.S. Navy C-117 was retired on July 12, 1976. The last U.S. Marine Corps C-117, serial 50835, was retired from active service in June 1982. Several remained in service with small airlines in North and South America in 2006.
The United States Forest Service used the DC-3 for smoke jumping and general transportation until the last example was retired in December 2015.
A number of aircraft companies attempted to design a "DC-3 replacement" over the next three decades (including the very successful Fokker F27 Friendship), but no single type could match the versatility, rugged reliability, and economy of the DC-3. While newer airliners soon replaced it on longer high-capacity routes, it remained a significant part of air transport systems well into the 1970s as a regional airliner before being replaced by early regional jets.
DC-3 in the 21st century
Perhaps unique among prewar aircraft, the DC-3 continues to fly in active commercial and military service as of 2021, eighty-six years after the type's first flight in 1935, although numbers are dwindling because of expensive maintenance and a lack of spare parts. Small operators still fly DC-3s in revenue service and as cargo aircraft. Applications of the DC-3 have included passenger service, aerial spraying, freight transport, military transport, missionary flying, skydiver shuttling and sightseeing. The civil and military operators of the DC-3/C-47 and related types have been so numerous that a comprehensive listing would be impracticable.
A common saying among aviation enthusiasts and pilots is "the only replacement for a DC-3 is another DC-3".
Its ability to use grass or dirt runways makes it popular in developing countries or remote areas, where runways may be unpaved.
The oldest surviving DST is N133D, the sixth Douglas Sleeper Transport built, manufactured in 1936. This aircraft was delivered to American Airlines on 12 July 1936 as NC16005. In 2011 it was at Shell Creek Airport, Punta Gorda, Florida. It has been repaired and has been flying again, with a recent flight on 25 April 2021. The oldest DC-3 still flying is the original American Airlines Flagship Detroit (c/n 1920, the 43rd aircraft off the Santa Monica production line, delivered on 2 March 1937), which appears at airshows around the United States and is owned and operated by the Flagship Detroit Foundation.
The base price of a new DC-3 in 1936 was around $60,000–$80,000, and by 1960 used aircraft were available for $75,000. In 2023, flying DC-3s could be bought for $400,000–$700,000.
As of 2024, Basler BT-67s with additions to handle cold weather and snow runways are used in Antarctica, including regularly landing at the South Pole during the austral summer.
DC-3 on display in museums
Douglas C-47-DL serial number 41-7723 is on display at Pima Air & Space Museum near Tucson, Arizona. The aircraft was previously displayed at the United States Air Force Museum.
Original operators
Variants
Civil
DST
Douglas Sleeper Transport; the initial variant with two Wright R-1820 Cyclone engines and standard sleeper accommodation for up to 16 with small upper windows, convertible to carry up to 24 day passengers.
DST-A
DST with Pratt & Whitney R-1830 Twin Wasp engines
DC-3
Initial non-sleeper variant; with 21 day-passenger seats, Wright R-1820 Cyclone engines, no upper windows.
DC-3A
DC-3 with Pratt & Whitney R-1830 Twin Wasp engines.
DC-3B
Version of DC-3 for TWA, with two Wright R-1820 Cyclone engines and smaller convertible sleeper cabin forward with fewer upper windows than DST.
DC-3C
Designation for ex-military C-47, C-53, and R4D aircraft rebuilt by Douglas Aircraft in 1946, given new manufacturer numbers, and sold on the civil market; Pratt & Whitney R-1830 engines.
DC-3D
Designation for 28 new aircraft completed by Douglas in 1946 with unused components from the cancelled USAAF C-117 production line; Pratt & Whitney R-1830 engines.
DC-3S
Also known as the Super DC-3, a substantially redesigned DC-3 with the fuselage lengthened by ; outer wings of a different shape with squared-off wingtips and shorter span; a distinctive taller rectangular tail; and more powerful Pratt & Whitney R-2000 or Wright R-1820 Cyclone engines. Five were completed by Douglas for civil use from existing surplus secondhand airframes. Three Super DC-3s were operated by Capital Airlines in 1950–1952. The designation was also used for examples of the 100 R4Ds that had been converted by Douglas to this standard for the U.S. Navy as R4D-8s (later designated C-117Ds), all fitted with more powerful Wright R-1820 Cyclone engines, some of which entered civil use after retirement from military service.
Military
C-41, C-41A
The C-41 was the first DC-3 to be ordered by the United States Army Air Corps (USAAC) and was powered by two Pratt & Whitney R-1830-21 engines. It was delivered in October 1938 for use by USAAC chief General Henry H. Arnold, with the passenger cabin fitted out in a 14-seat VIP configuration. The C-41A was a single VIP DC-3A supplied to the USAAC in September 1939, also powered by R-1830-21 engines, and used by the Secretary of War; its forward cabin could be converted to a sleeper configuration, with upper windows similar to those of the DC-3B.
C-48
Various DC-3A and DST models; 36 impressed as C-48, C-48A, C-48B, and C-48C.
C-48 - 1 impressed ex-United Airlines DC-3A.
C-48A - 3 impressed DC-3As with 18-seat interiors.
C-48B - 16 impressed ex-United Airlines DST-A air ambulances with 16-berth interiors.
C-48C - 16 impressed DC-3As with 21-seat interiors.
C-49
Various DC-3 and DST models; 138 impressed into service as C-49, C-49A, C-49B, C-49C, C-49D, C-49E, C-49F, C-49G, C-49H, C-49J, and C-49K.
C-50
Various DC-3 models, fourteen impressed as C-50, C-50A, C-50B, C-50C, and C-50D.
C-51
One impressed aircraft, originally ordered by Canadian Colonial Airlines, with a starboard-side door.
C-52
DC-3A aircraft with R-1830 engines, five impressed as C-52, C-52A, C-52B, C-52C, and C-52D.
C-68
Two DC-3As impressed with 21-seat interiors.
C-84
One impressed DC-3B aircraft.
Dakota II
British Royal Air Force designation for impressed DC-3s.
LXD1
A single DC-3 supplied for evaluation by the Imperial Japanese Navy Air Service (IJNAS).
R4D-2
Two Eastern Air Lines DC-3-388s impressed into United States Navy (USN) service as VIP transports, later designated R4D-2F and later R4D-2Z.
R4D-4
Ten DC-3As impressed for use by the USN.
R4D-4R
Seven DC-3s impressed as staff transports for the USN.
R4D-4Q
Radar countermeasures version of R4D-4 for the USN.
XCG-17
Experimental assault glider, one converted.
Conversions
Dart-Dakota for BEA test services, powered by two Rolls-Royce Dart turboprop engines.
Mamba-Dakota A single conversion for the Ministry of Supply, powered by two Armstrong-Siddeley Mamba turboprop engines.
Airtech DC-3/2000
DC-3/C-47 engine conversion by Airtech Canada, first offered in 1987. Powered by two PZL ASz-62IT radial engines.
Basler BT-67
DC-3/C-47 conversion with a stretched fuselage, strengthened structure, modern avionics, and powered by two Pratt & Whitney Canada PT-6A-67R turboprop engines.
BSAS C-47TP Turbo Dakota
A South African C-47 conversion for the South African Air Force by Braddick Specialised Air Services, with two Pratt & Whitney Canada PT6A-65R turboprop engines, revised systems, stretched fuselage, and modern avionics.
Conroy Turbo-Three
One DC-3/C-47 converted by Conroy Aircraft with two Rolls-Royce Dart Mk. 510 turboprop engines.
Conroy Super-Turbo-Three
Same as the Turbo Three but converted from a Super DC-3. One converted.
Conroy Tri-Turbo-Three
Conroy Turbo Three further modified by the removal of the two Rolls-Royce Dart engines and their replacement by three Pratt & Whitney Canada PT6s (one mounted on each wing and one in the nose).
Greenwich Aircraft Corp Turbo Dakota DC-3
DC-3/C-47 conversion with a stretched fuselage, strengthened wing center section, updated systems, and powered by two Pratt & Whitney Canada PT6A-65AR turboprop engines.
TS-62
Douglas-built C-47s fitted with Russian Shvetsov ASh-62 radial engines after World War II, owing to a shortage of American engines in the Soviet Union. Some TS-62s featured a small extra cockpit window on the left side.
TS-82
Similar to the TS-62, but with 1,650 hp Shvetsov ASh-82 radial engines.
USAC DC-3 Turbo Express
A turboprop conversion by the United States Aircraft Corporation, fitting Pratt & Whitney Canada PT6A-45R turboprop engines with an extended forward fuselage to maintain center of gravity. First flight of the prototype conversion (N300TX) was on July 29, 1982.
Military and foreign derivatives
Douglas C-47 Skytrain and C-53 Skytrooper
Production military DC-3A variants.
Showa and Nakajima L2D
Developments manufactured under license in Japan by Nakajima and Showa for the IJNAS; 487 built.
Lisunov Li-2 and PS-84
Developments manufactured under license in the USSR; 4,937 built.
Accidents and incidents
Specifications (DC-3A-S1C3G)
Notable appearances in media
Due to the large number produced, its significance in the Golden Age of Aviation and World War II, and nearly a century of service in passenger, cargo, and military roles throughout the world, the aircraft maintains significant popular interest and has appeared in numerous works of fiction.
A decommissioned DC-3 is part of the seating area at a McDonald's in Taupō, New Zealand.
| Technology | Specific aircraft_2 | null |
167883 | https://en.wikipedia.org/wiki/Liliaceae | Liliaceae | The lily family, Liliaceae, consists of about 15 genera and 610 species of flowering plants within the order Liliales. They are monocotyledonous, perennial, herbaceous, often bulbous geophytes. Plants in this family have evolved with a fair amount of morphological diversity despite genetic similarity. Common characteristics include large flowers with parts arranged in threes: with six colored or patterned petaloid tepals (undifferentiated petals and sepals) arranged in two whorls, six stamens and a superior ovary. The leaves are linear in shape, with their veins usually arranged parallel to the edges, single and arranged alternating on the stem, or in a rosette at the base. Most species are grown from bulbs, although some have rhizomes. First described in 1789, the lily family became a paraphyletic "catch-all" (wastebasket) group of lilioid monocots that did not fit into other families and included a great number of genera now included in other families and in some cases in other orders. Consequently, many sources and descriptions labelled "Liliaceae" deal with the broader sense of the family.
The family evolved approximately 68 million years ago during the Late Cretaceous to Early Paleogene epochs. Liliaceae are widely distributed, mainly in temperate regions of the Northern Hemisphere, and the flowers are insect pollinated. Many Liliaceae are important ornamental plants, widely grown for their attractive flowers, and form the basis of a major floricultural trade in cut flowers and dry bulbs. Some species are poisonous if eaten and can have adverse health effects in humans and household pets.
A number of Liliaceae genera are popular cultivated plants in private and public spaces. Lilies and tulips in particular have had considerable symbolic and decorative value, and appear frequently in paintings and the decorative arts. They are also an economically important product. Most of the genera, Lilium in particular, face considerable herbivory pressure in some areas from deer, both wild and domestic.
Description
The diversity of characteristics complicates any description of the Liliaceae morphology, and has confused taxonomic classification for centuries. The diversity is also of considerable evolutionary significance, as some members emerged from shaded areas and adapted to a more open environment (see Evolution).
General
The Liliaceae are characterised as monocotyledonous, perennial, herbaceous, bulbous (or rhizomatous in the case of Medeoleae) flowering plants with simple trichomes (root hairs) and contractile roots. The flowers may be arranged (inflorescence) along the stem, developing from the base, or as a single flower at the tip of the stem, or as a cluster of flowers. They contain both male (androecium) and female (gynoecium) characteristics and are symmetric radially, but sometimes as a mirror image. Most flowers are large and colourful, except for Medeoleae. Both the petals and sepals are usually similar and appear as two concentric groups (whorls) of 'petals', that are often striped or multi-coloured, and produce nectar at their bases. The stamens are usually in two groups of three (trimerous) and the pollen has a single groove (monosulcate). The ovary is placed above the attachment of the other parts (superior). There are three fused carpels (syncarpus) with one to three chambers (locules), a single style and a three-lobed stigma. The embryo sac is of the Fritillaria type. The fruit is generally a wind dispersed capsule, but occasionally a berry (Medeoleae) which is dispersed by animals. The leaves are generally simple and elongated with veins parallel to the edges, arranged singly and alternating on the stem, but may form a rosette at the base of the stem.
Specific
Inflorescence Usually indeterminate (lacking terminal flower) as a raceme (Lilium); sometimes reduced to a single terminal flower (Tulipa). When pluriflor (multiple blooms), the flowers are arranged in a cluster or rarely are subumbellate (Gagea) or a thyrse (spike).
Flowers Hermaphroditic, actinomorphic (radially symmetric) or slightly zygomorphic (bilaterally symmetric), pedicellate (on a short secondary stem), generally large and showy but may be inconspicuous (Medeoleae). Bracts may (bracteate) or may not (ebracteate) be present. The perianth is undifferentiated (perigonium) and biseriate (two whorled), formed from six tepals arranged into two separate whorls of three parts (trimerous) each, although Scoliopus has only three petals, free from the other parts, but overlapping. The tepals are usually petaloid (petal like) and apotepalous (free) with lines (striate) or marks in other colors or shades. The perianth is either homochlamydeous (all tepals equal, e.g. Fritillaria) or dichlamydeous (two separate and different whorls, e.g. Calochortus) and may be united into a tube. Nectar is produced in perigonal nectaries at the base of the tepals.
Androecium Six stamens in two trimerous whorls, with free filaments, usually epiphyllous (fused to tepals) and diplostemonous (outer whorl of stamens opposite outer tepals and the inner whorl opposite inner tepals), although Scoliopus has three stamens opposite the outer tepals. The attachment of the anthers to the filaments may be either peltate (to the surface) or pseudo-basifixed (surrounding the filament tip, but not adnate, that is, not fused); the anthers dehisce longitudinally and are extrorse (dehiscing away from the center). The pollen is usually monosulcate (single groove), but may be inaperturate (lacking aperture: Clintonia, some Tulipa spp.) or operculate (lidded: Fritillaria, some Tulipa spp.), and reticulate (net patterned: Erythronium, Fritillaria, Gagea, Lilium, Tulipa).
Gynoecium Superior ovary (hypogynous), syncarpous (with fused carpels), with three connate (fused) carpels and is trilocular (three locules, or chambers) or unilocular (single locule, as in Scoliopus and Medeola). There is a single style and a three lobed stigma or three stigmata more or less elongated along the style. There are numerous anatropous (curved) ovules which display axile placentation (parietal in Scoliopus and Medeola), usually with an integument and thinner megasporangium. The embryo sac (megagametophyte) varies by genera, but is mainly tetrasporic (e.g. Fritillaria). Embryo sacs in which three of the four megaspores fuse to form a triploid nucleus, are referred to as Fritillaria-type, a characteristic shared by all the core Liliales.
Fruit A capsule that is usually loculicidal (splitting along the locules) as in the Lilioideae, but occasionally septicidal (splitting between them, along the separating septa) in the Calochortoideae, and wind dispersed, although the Medeoleae form berries (baccate). The seeds may be flat, oblong, angular, discoid, ellipsoid or globose (spherical), or compressed with a well developed epidermis. The exterior may be smooth or roughened, with a wing or raphe (ridge), aril or one to two tails, rarely hairy, but may be dull or shiny; the lack of a black integument distinguishes them from related taxa such as Allioideae that were previously included in this family, and the seeds are striate (parallel longitudinally ridged) in the Streptopoideae. The hilum (scar) is generally inconspicuous. The bitegmic (separate testa and tegmen) seed coat itself may be thin, suberose (like cork), or crustaceous (hard or brittle). The endosperm is abundant, cartilaginous (fleshy) or horny and contains oils and aleurone but not starch (non-farinaceous). Its cells are polyploid (triploid or pentaploid, depending on the embryo sac type). The embryo is small (usually less than one quarter of seed volume), axile (radially sectioned), linear (longer than broad) or rarely rudimentary (tiny relative to endosperm) depending on placentation type, and straight, bent, curved or curled at the upper end.
Leaves Simple, entire (smooth and even), linear, oval to filiform (thread-like), mostly with parallel veins, but occasionally net-veined. They are alternate (single and alternating direction) and spiral, but may be whorled (three or more attached at one node, e.g. Lilium, Fritillaria), cauline (arranged along the aerial stem) or sheathed in a basal rosette. They are rarely petiolate (stem attached before apex), and lack stipules. The aerial stem is unbranched.
Genome The Liliaceae include a species with one of the largest genome sizes among the angiosperms, Fritillaria assyriaca (1C = 127.4 pg), while that of Tricyrtis macropoda is as small as 4.25 pg. Chromosome numbers vary by genus. Some genera, such as Calochortus (x=6–10), Prosartes (6, 8, 9, 11), Scoliopus (7, 8), Streptopus (8, 27) and Tricyrtis (12–13), have a small and variable number of chromosomes, while the subfamily Lilioideae has a larger and more stable chromosome number (12), as have the Medeoleae (7).
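To put these picogram values in more familiar units, a 1C value can be converted to base pairs with the commonly used approximation of roughly 978 megabase pairs per picogram; the conversion factor, not the cited C-values, is the assumption in this minimal sketch:

```python
# Convert a 1C genome mass in picograms (pg) to gigabase pairs (Gbp),
# using the widely cited approximation 1 pg ≈ 978 Mbp.
MBP_PER_PG = 978

def pg_to_gbp(pg: float) -> float:
    """Approximate genome size in gigabase pairs for a 1C value in pg."""
    return pg * MBP_PER_PG / 1000

print(f"Fritillaria assyriaca: {pg_to_gbp(127.4):.1f} Gbp")  # ~124.6 Gbp
print(f"Tricyrtis macropoda: {pg_to_gbp(4.25):.2f} Gbp")     # ~4.16 Gbp
```

On this scale, the Fritillaria genome is roughly 30 times the size of the Tricyrtis genome, which is the point of the comparison above.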
Phytochemistry The seeds contain saponins but no calcium oxalate raphide crystals, chelidonic acid (unlike Asparagales) or cysteine derived sulphur compounds (allyl sulphides), another distinguishing feature from the characteristic alliaceous odour of the Allioideae. Fritillaria in particular contains steroidal alkaloids of the cevanine and solanum type. Solanidine and solanthrene alkaloids have been isolated from some Fritillaria species. Tulipa contains tulipanin, an anthocyanin. (see also: Toxicology)
Characteristics often vary by habitat, between shade-dwelling genera (such as Prosartes, Tricyrtis, Cardiocrinum, Clintonia, Medeola, and Scoliopus) and sun-loving genera. Shade-dwelling genera usually have broader leaves with smooth edges and net venation, fleshy fruits (berries) with animal-dispersed seeds, rhizomes, and small, inconspicuous flowers, while genera native to sunny habitats usually have narrow, parallel-veined leaves, capsular fruits with wind-dispersed seeds, bulbs, and large, visually conspicuous flowers. | Biology and health sciences | Monocots | null
167906 | https://en.wikipedia.org/wiki/Cultivar | Cultivar | A cultivar is a kind of cultivated plant that people have selected for desired traits and which retains those traits when propagated. Methods used to propagate cultivars include division, root and stem cuttings, offsets, grafting, tissue culture, or carefully controlled seed production. Most cultivars arise from deliberate human manipulation, but some originate from wild plants that have distinctive characteristics. Cultivar names are chosen according to rules of the International Code of Nomenclature for Cultivated Plants (ICNCP), and not all cultivated plants qualify as cultivars. Horticulturists generally believe the word cultivar was coined as a term meaning "cultivated variety".
Popular ornamental plants like roses, camellias, daffodils, rhododendrons, and azaleas are commonly cultivars produced by breeding and selection or as sports, for floral colour or size, plant form, or other desirable characteristics. Similarly, the world's agricultural food crops are almost exclusively cultivars that have been selected for characters such as improved yield, flavour, and resistance to disease, and very few wild plants are now used as food sources. Trees used in forestry are also special selections grown for their enhanced quality and yield of timber.
Cultivars form a major part of Liberty Hyde Bailey's broader group, the cultigen, which is defined as a plant whose origin or selection is primarily due to intentional human activity. A cultivar is not the same as a botanical variety, which is a taxonomic rank below subspecies, and there are differences in the rules for creating and using the names of botanical varieties and cultivars. In recent times, the naming of cultivars has been complicated by the use of statutory patents for plants and recognition of plant breeders' rights.
The International Union for the Protection of New Varieties of Plants (UPOV) offers legal protection of plant cultivars to persons or organisations that introduce new cultivars to commerce. UPOV requires that a cultivar be "distinct", "uniform", and "stable". To be "distinct", it must have characters that easily distinguish it from any other known cultivar. To be "uniform" and "stable", the cultivar must retain these characters in repeated propagation.
The naming of cultivars is an important aspect of cultivated plant taxonomy, and the correct naming of a cultivar is prescribed by the Rules and Recommendations of the International Code of Nomenclature for Cultivated Plants (ICNCP, commonly denominated the Cultivated Plant Code). A cultivar is given a cultivar name, which consists of the scientific Latin botanical name followed by a cultivar epithet. The cultivar epithet is usually in a vernacular language.
Etymology
The word cultivar originated from the need to distinguish between wild plants and those with characteristics that arose in cultivation, presently denominated cultigens. This distinction dates to the Greek philosopher Theophrastus (370–285 BC), the "Father of Botany", who was keenly aware of this difference. Botanical historian Alan Morton noted that Theophrastus in his Historia Plantarum (Enquiry into Plants) "had an inkling of the limits of culturally induced (phenotypic) changes and of the importance of genetic constitution" (Historia Plantarum, Book 3, 2, 2 and Causa Plantarum, Book 1, 9, 3).
The International Code of Nomenclature for algae, fungi, and plants uses as its starting point for modern botanical nomenclature the Latin names in Linnaeus' (1707–1778) Species Plantarum (first edition) and Genera Plantarum (fifth edition). In Species Plantarum, Linnaeus enumerated all plants known to him, either directly or from his extensive reading. He recognised the rank of varietas (botanical "variety", a rank below that of species and subspecies) and he indicated these varieties with letters of the Greek alphabet, such as α, β, and λ, before the varietal name, rather than using the abbreviation "var." as is the present convention. Most of the varieties that Linnaeus enumerated were of "garden" origin rather than being wild plants.
In time the need to distinguish between wild plants and those with variations that had been cultivated increased. In the nineteenth century many "garden-derived" plants were given horticultural names, sometimes in Latin and sometimes in a vernacular language. From circa the 1900s, cultivated plants in Europe were recognised in the Scandinavian, Germanic, and Slavic literature as stamm or sorte, but these words could not be used internationally because, by international agreement, any new denominations had to be in Latin. In the twentieth century an improved international nomenclature was proposed for cultivated plants.
Liberty Hyde Bailey of Cornell University in New York, United States, created the word cultivar in an essay he published in 1923.
In that essay, Bailey used only the rank of species for the cultigen, but it was obvious to him that many domesticated plants were more like botanical varieties than species, and that realization appears to have motivated the suggestion of the new category of cultivar.
The word cultivar is generally assumed to be a blend of cultivated and variety, but Bailey never explicitly stated the etymology, and it has been suggested that it is actually a blend of cultigen and variety. The neologism cultivar was promoted as "euphonious" and "free from ambiguity". The first Cultivated Plant Code of 1953 subsequently commended its use, and by 1960 it had achieved common international acceptance.
Cultigens
The words cultigen and cultivar may be confused with each other. A cultigen is any plant that is deliberately selected for or altered in cultivation, as opposed to an indigen; the Cultivated Plant Code states that cultigens are "maintained as recognisable entities solely by continued propagation". Cultigens can have names at any of many taxonomic ranks, including those of grex, species, cultivar group, variety, form, and cultivar; and they may be plants that have been altered in cultivation, including by genetic modification, but have not been formally denominated. A cultigen or a component of a cultigen can be accepted as a cultivar if it is recognisable and has stable characters. Therefore, all cultivars are cultigens, because they are cultivated, but not all cultigens are cultivars, because some cultigens have not been formally distinguished and named as cultivars.
Formal definition
The Cultivated Plant Code notes that the word cultivar is used in two different senses. First, as a "classification category", the cultivar is defined in Article 2 of the International Code of Nomenclature for Cultivated Plants (2009, 8th edition) as follows: "The basic category of cultivated plants whose nomenclature is governed by this Code is the cultivar." There are two other classification categories for cultigens, the grex and the group. In the second sense, the Code defines a cultivar as a "taxonomic unit within the classification category of cultivar". This is the sense of cultivar that is most generally understood and which is used as a general definition.
Different kinds
Which plants are chosen to be named as cultivars is simply a matter of convenience as the category was created to serve the practical needs of horticulture, agriculture, and forestry.
Members of a particular cultivar are not necessarily genetically identical. The Cultivated Plant Code emphasizes that different cultivated plants may be accepted as different cultivars, even if they have the same genome, while cultivated plants with different genomes may be regarded as the same cultivar. The production of cultivars generally entails considerable human involvement although in a few cases it may be as little as simply selecting variation from plants growing in the wild (whether by collecting growing tissue to propagate from or by gathering seed).
Cultivars generally occur as ornamentals and food crops: Malus 'Granny Smith' and Malus 'Red Delicious' are cultivars of apples propagated by cuttings or grafting, Lactuca 'Red Sails' and Lactuca 'Great Lakes' are lettuce cultivars propagated by seeds. Named cultivars of Hosta and Hemerocallis plants are cultivars produced by micropropagation or division.
Clones
Cultivars that are produced asexually are genetically identical and known as clones; this includes plants propagated by division, layering, cuttings, grafts, and budding. The propagating material may be taken from a particular part of the plant, such as a lateral branch, or from a particular phase of the life cycle, such as a juvenile leaf, or from aberrant growth as occurs with witch's broom. Plants whose distinctive characters are derived from the presence of an intracellular organism may also form a cultivar provided the characters are reproduced reliably from generation to generation. Plants of the same chimera (which have mutant tissues close to normal tissue) or graft-chimeras (which have vegetative tissue from different kinds of plants and which originate by grafting) may also constitute a cultivar.
Seed-produced
Some cultivars "come true from seed", retaining their distinguishing characteristics when grown from seed. Such plants are termed a "variety", "selection", or "strain" but these are ambiguous and confusing words that are best avoided. In general, asexually propagated cultivars grown from seeds produce highly variable seedling plants, and should not be labelled with, or sold under, the parent cultivar's name.
Seed-raised cultivars may be produced by uncontrolled pollination when characteristics that are distinct, uniform and stable are passed from parents to progeny. Some are produced as "lines" that are produced by repeated self-fertilization or inbreeding or "multilines" that are made up of several closely related lines. Sometimes they are F1 hybrids which are the result of a deliberate repeatable single cross between two pure lines. A few F2 hybrid seed cultivars also exist, such as Achillea 'Summer Berries'.
Some cultivars are agamospermous plants, which retain their genetic composition and characteristics under reproduction. Occasionally cultivars are raised from seed of a specially selected provenance – for example the seed may be taken from plants that are resistant to a particular disease.
Genetically modified
Genetically modified plants with characteristics resulting from the deliberate implantation of genetic material from a different germplasm may form a cultivar. However, the International Code of Nomenclature for Cultivated Plants notes, "In practice such an assemblage is often marketed from one or more lines or multilines that have been genetically modified. These lines or multilines often remain in a constant state of development which makes the naming of such an assemblage as a cultivar a futile exercise." However, retired transgenic varieties such as the fish tomato, which are no longer being developed, do not run into this obstacle and can be given a cultivar name.
Cultivars may be selected because of a change in the ploidy level of a plant which may produce more desirable characteristics.
Cultivar names
Every unique cultivar has a unique name within its denomination class (which is almost always the genus). Names of cultivars are regulated by the International Code of Nomenclature for Cultivated Plants, and may be registered with an International Cultivar Registration Authority (ICRA). There are sometimes separate registration authorities for different plant types such as roses and camellias. In addition, cultivars may be associated with commercial marketing names referred to in the Cultivated Plant Code as "trade designations" (see below).
Presenting in text
A cultivar name consists of a botanical name (of a genus, species, infraspecific taxon, interspecific hybrid or intergeneric hybrid) followed by a cultivar epithet. The cultivar epithet is enclosed by single quotes; it should not be italicized if the botanical name is italicized; and each of the words within the epithet is capitalized (with some permitted exceptions such as conjunctions). It is permissible to place a cultivar epithet after a common name provided the common name is botanically unambiguous. Cultivar epithets published before 1 January 1959 were often given a Latin form and can be readily confused with the specific epithets in botanical names; after that date, newly coined cultivar epithets must be in a modern vernacular language to distinguish them from botanical epithets.
For example, the full cultivar name of the King Edward potato is Solanum tuberosum 'King Edward'. 'King Edward' is the cultivar epithet, which, according to the Rules of the Cultivated Plant Code, is bounded by single quotation marks. For patented or trademarked plant product lines developed from a given cultivar, the commercial product name is typically indicated by the symbols "TM" or "®", or is presented in capital letters with no quotation marks, following the cultivar name, as in the following example, where "Bloomerang" is the commercial name and 'Penda' is the cultivar epithet: Syringa 'Penda' BLOOMERANG.
Examples of correct text presentation:
Cryptomeria japonica 'Elegans'
Chamaecyparis lawsoniana 'Aureomarginata' (pre-1959 name, Latin in form)
Chamaecyparis lawsoniana 'Golden Wonder' (post-1959 name, English language)
Pinus densiflora 'Akebono' (post-1959 name, Japanese language)
Apple 'Sundown'
Some incorrect text presentation examples:
Cryptomeria japonica "Elegans" (double quotes are unacceptable)
Berberis thunbergii cv. 'Crimson Pygmy' (this once-common usage is now unacceptable, as it is no longer correct to use "cv." in this context; Berberis thunbergii 'Crimson Pygmy' is correct)
Rosa cv. 'Peace' (this is now incorrect for two reasons: firstly, the use of "cv."; secondly, "Peace" is a trade designation or "selling name" for the cultivar R. 'Madame A. Meilland' and should therefore be printed in a different typeface from the rest of the name, without quote marks, for example: Rosa Peace)
Although "cv." has not been permitted by the International Code of Nomenclature for Cultivated Plants since the 1995 edition, it is still widely used and recommended by other authorities.
Group names
Where several very similar cultivars exist, they can be associated into a Group (formerly Cultivar-group). Because Group names are used together with cultivar names, it is necessary to understand how they are presented. Group names are presented in normal type, with the first letter of each word capitalised as for cultivars, but they are not placed in single quotes. When used in a name, the first letter of the word "Group" is itself capitalized.
Presenting in text
Brassica oleracea Capitata Group (the group of cultivars including all typical cabbages)
Brassica oleracea Botrytis Group (the group of cultivars including all typical cauliflowers)
Hydrangea macrophylla Groupe Hortensia (in French) = Hydrangea macrophylla Hortensia Group (in English)
Where cited with a cultivar name the group should be enclosed in parentheses, as follows:
Hydrangea macrophylla (Hortensia Group) 'Ayesha'
Legal protection of cultivars and their names
Since the 1990s there has been an increasing use of legal protection for newly produced cultivars. Plant breeders expect legal protection for the cultivars they produce. According to proponents of such protections, if other growers can immediately propagate and sell these cultivars as soon as they come on the market, the breeder's benefit is largely lost.
Legal protection for cultivars is obtained through the use of plant breeders' rights and plant patents, but the specific legislation and procedures needed to take advantage of this protection vary from country to country.
Controversial use of legal protection for cultivars
The use of legal protection for cultivars can be controversial, particularly for food crops that are staples in developing countries, or for plants selected from the wild and propagated for sale without any additional breeding work; some people consider this practice unethical.
Trade designations and selling names
The formal scientific name of a cultivar, like Solanum tuberosum 'King Edward', is a way of uniquely designating a particular kind of plant. This scientific name is in the public domain and cannot be legally protected. Plant retailers wish to maximize their share of the market, and one way of doing this is to replace the Latin scientific names on plant labels in retail outlets with appealing marketing names that are easy to use, pronounce, and remember. Marketing names lie outside the scope of the Cultivated Plant Code, which refers to them as "trade designations". If a retailer or wholesaler has the sole legal rights to a marketing name, then that may offer a sales advantage. Plants protected by plant breeders' rights (PBR) may have a "true" cultivar name – the recognized scientific name in the public domain – and a "commercial synonym" – an additional marketing name that is legally protected. An example would be Rosa = 'Poulmax', in which Rosa is the genus, is the trade designation, and 'Poulmax' is the scientific cultivar name.
Because a name that is attractive in one language may have less appeal in another country, a plant may be given different selling names from country to country. Quoting the original cultivar name allows the correct identification of cultivars around the world.
The main body coordinating plant breeders' rights is the International Union for the Protection of New Varieties of Plants (UPOV), and this organization maintains a database of new cultivars protected by PBR in all countries.
International Cultivar Registration Authorities
An International Cultivar Registration Authority (ICRA) is a voluntary, non-statutory organization appointed by the Commission for Nomenclature and Cultivar Registration of the International Society for Horticultural Science. ICRAs are generally formed by societies and institutions specializing in particular plant genera, such as Dahlia or Rhododendron, and are currently located in Europe, North America, China, India, Singapore, Australia, New Zealand, South Africa and Puerto Rico.
Each ICRA produces an annual report and its reappointment is considered every four years. The main task is to maintain a register of the names within the group of interest and where possible this is published and placed in the public domain. One major aim is to prevent the duplication of cultivar and Group epithets within a genus, as well as ensuring that names are in accord with the latest edition of the Cultivated Plant Code. In this way, over the last 50 years or so, ICRAs have contributed to the stability of cultivated plant nomenclature. In recent times many ICRAs have also recorded trade designations and trademarks used in labelling plant material, to avoid confusion with established names.
New names and other relevant data are collected by and submitted to the ICRA and in most cases there is no cost. The ICRA then checks each new epithet to ensure that it has not been used before and that it conforms with the Cultivated Plant Code. Each ICRA also ensures that new names are formally established (i.e. published in hard copy, with a description in a dated publication). They record details about the plant, such as parentage, the names of those concerned with its development and introduction, and a basic description highlighting its distinctive characters. ICRAs are not responsible for assessing the distinctiveness of the plant in question. Most ICRAs can be contacted electronically and many maintain web sites for an up-to-date listing.
| Technology | Basics_2 | null |
168008 | https://en.wikipedia.org/wiki/Orchard | Orchard | An orchard is an intentional plantation of trees or shrubs that is maintained for food production. Orchards comprise fruit- or nut-producing trees that are generally grown for commercial production. Orchards are also sometimes a feature of large gardens, where they serve an aesthetic as well as a productive purpose. A fruit garden is generally synonymous with an orchard, although it is set on a smaller, non-commercial scale and may emphasize berry shrubs in preference to fruit trees. Most temperate-zone orchards are laid out in a regular grid, with a grazed or mown grass or bare soil base that makes maintenance and fruit gathering easy.
Most modern commercial orchards are planted for a single variety of fruit. While the importance of introducing biodiversity is recognized in forest plantations, introducing genetic diversity in orchard plantations by interspersing other trees might offer benefits. Genetic diversity in an orchard would provide resilience to pests and diseases, just as in forests.
Orchards are sometimes concentrated near bodies of water where climatic extremes are moderated and blossom time is retarded until frost danger is past.
Layout
An orchard's layout is the system by which the trees or shrubs are arranged when planted.
There are different methods of planting and thus different layouts. Some of these layout types are:
Square method
Rectangular method
Quincunx method
Triangular method
Hexagonal method
Contour or terrace method
For different varieties, these systems may vary to some extent.
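As a rough illustration of how these systems differ in practice, the sketch below compares approximate planting densities for the square and triangular methods at an assumed spacing, ignoring headlands and edge rows; the triangular grid packs about 15% more trees into the same area:

```python
import math

# Approximate trees per hectare for two of the layouts listed above.
# The 6 m spacing is an assumed example, and edge effects are ignored.
def trees_per_hectare_square(spacing_m: float) -> float:
    """Square method: trees on a square grid, spacing_m apart each way."""
    return 10_000 / spacing_m**2

def trees_per_hectare_triangular(spacing_m: float) -> float:
    """Triangular method: offset rows spaced spacing_m * sqrt(3)/2 apart."""
    return 10_000 / (spacing_m**2 * math.sqrt(3) / 2)

s = 6.0  # metres between adjacent trees
print(round(trees_per_hectare_square(s)))      # -> 278 trees/ha
print(round(trees_per_hectare_triangular(s)))  # -> 321 trees/ha
```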
Orchards by region
The most extensive orchards in the United States are apple and orange orchards, although citrus orchards are more commonly called groves. The most extensive apple orchard area is in eastern Washington state, with a lesser but significant apple orchard area in most of Upstate New York. Extensive orange orchards are found in Florida and southern California, where they are more widely known as "groves". In eastern North America, many orchards are along the shores of Lake Michigan (such as the Fruit Ridge Region), Lake Erie, and Lake Ontario.
In Canada, apple and other fruit orchards are widespread on the Niagara Peninsula, south of Lake Ontario. This region is known as Canada Fruitbelt and, in addition to large-scale commercial fruit marketing, it encourages "pick-your-own" activities in the harvest season.
In Spain, Murcia is one of Europe's major orchard areas (known as la huerta), with citrus crops. New Zealand, China, Argentina, and Chile also have extensive apple orchards.
Tenbury Wells in Worcestershire has been called "The Town in the Orchard" since the 19th century because it was surrounded by extensive orchards. Today, this heritage is celebrated through an annual Applefest.
Central Europe
Streuobstwiese (pl. Streuobstwiesen) is a German word that means a meadow with scattered fruit trees or fruit trees that are planted in a field. The Streuobstwiese, or meadow orchard, is a traditional landscape in the temperate, maritime climate of continental Western Europe. In the 19th and early 20th centuries, Streuobstwiesen were a kind of rural community orchard intended for the productive cultivation of stone fruit. In recent years, ecologists have successfully lobbied for state subsidies to valuable habitats, biodiversity and natural landscapes, which are also used to preserve old meadow orchards. Both conventional and meadow orchards provide a suitable habitat for many animal species that live in a cultured landscape. A notable example is the hoopoe, which nests in tree hollows of old fruit trees and, in the absence of alternative nesting sites, is threatened in many parts of Europe because of the destruction of old orchards.
Historical orchards
Orchard House in Concord, Massachusetts, was the residence of the celebrated American writer Louisa May Alcott.
Fruita, Utah, part of Capitol Reef National Park, has Mormon pioneer orchards maintained by the United States National Park Service.
Modern orchards
Historical orchards have large, mature trees spaced for heavy equipment. Modern commercial apple orchards, by contrast and as one example, are often "high-density" (tree density above ), and in extreme cases have up to . These plants are no longer trees in the traditional sense, but instead resemble vines on dwarf stock and require trellises to support them.
Now new "Smart Orchards" are being set up throughout the world. The first examples of such orchards are the Smart Orchard at Washington , United States of America by Innov8 and Washington State University and Samriti Bagh orchard created in Maraog, India by Tejasvi Dogra that incorporates the use of various sensors for orchard management.
Orchard conservation in the UK
Natural England, through its Countryside Stewardship Scheme, Environmental Stewardship and Environmentally Sensitive Areas Scheme, gives grant aid and advice for the maintenance, enhancement or re-creation of historical orchards.
The Orchard Link organisation provides advice on how to manage and restore the county of Devon's orchards, as well as enabling the local community to use the local orchard produce. An organisation called Orchards Live carries out similar work in North Devon.
People's Trust for Endangered Species (PTES) has mapped every traditional orchard within England and Wales and manages the national inventory for this habitat.
The UK Biodiversity Partnership lists traditional orchards as a priority UK Biodiversity Action Plan habitat.
The Wiltshire Traditional Orchards Project maps, conserves and restores traditional orchards within Wiltshire, England.
An interim report from the National Trust showed that orchards had declined from approximately 95,000 hectares in the period 1892–1914 to 41,000 hectares in 2022. The campaign #BlossomWatch is part of a wider programme of work by the Trust to plant 68 new orchards by 2025, and four million blossoming trees by 2030.
Notable people
Belle van Dorn Harbert (born 1860), fruit farmer, founder of the International Congress of Farm Women
| Technology | Forms | null |
168010 | https://en.wikipedia.org/wiki/Morus%20%28plant%29 | Morus (plant) | Morus, a genus of flowering plants in the family Moraceae, consists of 19 species of deciduous trees commonly known as mulberries, growing wild and under cultivation in many temperate world regions. Generally, the genus has 64 subordinate taxa, though the three most common are referred to as white, red, and black, originating from the color of their dormant buds and not necessarily the fruit color (Morus alba, M. rubra, and M. nigra, respectively), with numerous cultivars and some taxa currently unchecked and awaiting taxonomic scrutiny. M. alba is native to South Asia, but is widely distributed across Europe, Southern Africa, South America, and North America. M. alba is also the species most preferred by the silkworm, and is regarded as an invasive species in Brazil and the United States.
The closely related genus Broussonetia is also commonly known as mulberry, notably the paper mulberry (Broussonetia papyrifera).
Despite their similar appearance, mulberries are not closely related to raspberries or blackberries. All three belong to the order Rosales, but while the mulberry is a tree belonging to the family Moraceae (which also includes the fig, jackfruit, and other fruits), raspberries and blackberries are brambles belonging to the family Rosaceae.
Description
Mulberries are fast-growing when young, and can grow to tall. The leaves are alternately arranged, simple, and often lobed and serrated on the margin. Lobes are more common on juvenile shoots than on mature trees. The trees can be monoecious or dioecious.
The mulberry fruit is a multiple fruit, about long. Immature fruits are white, green, or pale yellow. The fruit turns from pink to red while ripening, then dark purple or black, and has a sweet flavor when fully ripe.
Taxonomy
The taxonomy of Morus is complex and disputed. Fossils of Morus appear in the Pliocene record of the Netherlands. Over 150 species names have been published, and although differing sources may cite different selections of accepted names, fewer than 20 are accepted by the vast majority of botanical authorities. Morus classification is even further complicated by widespread hybridisation, wherein the hybrids are fertile.
The following species are accepted:
Morus alba L. – white mulberry (China, Korea, Japan)
Morus australis Poir. – East and South-East Asia
Morus boninensis Koidz.
Morus cathayana Hemsl. – China, Japan, Korea
Morus celtidifolia Kunth – Texas mulberry (southwestern United States, Mexico, Central America, South America)
Morus indica L. – India, Southeast Asia
Morus koordersiana J.-F.Leroy
Morus liboensis S.S.Chang – Guizhou Province in China
Morus macroura Miq. – long mulberry (Tibet, Himalayas, Indochina)
Morus microphylla
Morus miyabeana Hotta
Morus mongolica (Bureau) C.K.Schneid.
Morus nigra L. – black mulberry (Iran, Caucasus, Levant)
Morus notabilis C.K.Schneid. – Yunnan and Sichuan Provinces in China
Morus rubra L. – red mulberry (eastern North America)
Morus serrata Roxb. – Tibet, Nepal, northwestern India
Morus trilobata (S.S.Chang) Z.Y.Cao – Guizhou Province in China
Morus wittiorum Hand.-Mazz. – southern China
Distribution
Black, red, and white mulberries are widespread in Southern Europe, the Middle East, Northern Africa, and the Indian subcontinent, where the tree and the fruit have names in the regional languages and dialects.
Black mulberry was imported to Britain in the 17th century in the hope that it would be useful in the cultivation of silkworms. It was much used in folk medicine, especially in the treatment of ringworm.
The United States has native red mulberries, as well as imported black and white mulberries.
Mulberries are also widespread in Greece, particularly in the Peloponnese, which in the Middle Ages was known as Morea, a name deriving from the Greek word for the tree.
Australia has two types of native mulberries, Hedycarya angustifolia and Pipturus argenteus, both of which belong to families other than Moraceae. However, imported black, red, and white Morus mulberries are also commonly grown in Australian backyards.
Cultivation
Mulberries can be grown from seed, and this is often advised, as seedling-grown trees are generally of better shape and health; however, trees grown from seed can take up to ten years to bear fruit, so mulberries are most often planted from large cuttings, which root readily. Mulberry plants allowed to grow tall have a crown height of from ground level and a stem girth of . They are raised from well-grown saplings, 8–10 months old, of varieties recommended for rain-fed areas, such as S-13 (for red loamy soil) or S-34 (for black cotton soil), which are tolerant of drought and soil-moisture stress. The plantation is usually raised in block formation, with spacings of or as plant-to-plant and row-to-row distances. The plants are usually pruned once a year during the monsoon season to a height of and allowed to grow with a maximum of 8–10 shoots at the crown. The leaves are harvested three or four times a year by a leaf-picking method under rain-fed or semi-arid conditions, depending on the monsoon. The branches pruned during the fall season (after the leaves have fallen) are used to make durable baskets that support agriculture and animal husbandry.
Some North American cities have banned the planting of mulberries because of the large amounts of pollen they produce, posing a potential health hazard for some pollen allergy sufferers. However, only male mulberry trees produce pollen; this lightweight pollen can be inhaled deeply into the lungs, sometimes triggering asthma. Conversely, female mulberry trees produce all-female flowers, which draw pollen and dust from the air. Because of this pollen-absorbing feature, all-female mulberry trees have an OPALS allergy scale rating of just 1 (the lowest level of allergy potential), and some consider them "allergy-free".
Mulberry tree scion wood can easily be grafted onto other mulberry trees during the winter, when the tree is dormant. One common scenario is converting a problematic male mulberry tree to an allergy-free female tree, by grafting all-female mulberry tree scions to a male mulberry that has been pruned back to the trunk. However, any new growth from below the graft(s) must be removed, as they would be from the original male mulberry tree.
Toxicity
All parts of the plant besides the ripe fruit contain a toxic milky sap. Eating too many berries may have a laxative effect. Additionally, unripe green fruit may cause nausea, cramps, and a hallucinogenic effect.
Uses
Nutrition
Raw mulberries are 88% water, 10% carbohydrates, 1% protein, and less than 1% fat. In a 100-gram reference amount, raw mulberries provide 43 calories, 44% of the Daily Value (DV) for vitamin C, and 14% of the DV for iron; other micronutrients are insignificant in quantity.
Culinary
As the fruit matures, mulberries change in texture and color, becoming succulent, plump, and juicy, resembling a blackberry. The color of the fruit does not distinguish the mulberry species, as mulberries may be white, lavender or black in color. White mulberry fruits are typically sweet, but not tart, while red mulberries are usually deep red, sweet, and juicy. Black mulberries are large and juicy, with balanced sweetness and tartness.
The fruit of the East Asian white mulberry – a species extensively naturalized in urban regions of eastern North America – has a different flavor, sometimes characterized as refreshing and a little tart, with a bit of gumminess to it and a hint of vanilla. In North America, the white mulberry is considered an invasive exotic and has taken over extensive tracts from native plant species, including the red mulberry.
Mulberries are used in pies, tarts, wines, cordials, and herbal teas. The fruit of the black mulberry (native to southwest Asia) and the red mulberry (native to eastern North America) have distinct flavors likened to 'fireworks in the mouth'. Jams and sherbets are often made from the fruit in the Old World.
The tender twigs are semisweet and can be eaten raw or cooked.
Supplement
The fruit and leaves are sold in various forms as dietary supplements.
Silk industry
Mulberry leaves, particularly those of the white mulberry, are ecologically important as the sole food source of the silkworm (Bombyx mori, named after the mulberry genus Morus), the cocoon of which is used to make silk. The wild silk moth also eats mulberry. Other Lepidoptera larvae—which include the common emerald, lime hawk-moth, sycamore moth, and fall webworm—also eat the plant.
The Ancient Greeks and Romans cultivated the mulberry for silkworms; at least as early as 220 AD, Emperor Elagabalus wore a silk robe. English clergy wore silk vestments from about 1500 onwards. Mulberry and the silk industry played a role in colonial Virginia.
Pigment
Mulberry fruit color derives from anthocyanins, which have unknown effects in humans. Anthocyanins are responsible for the attractive colors of fresh plant foods, including orange, red, purple, black, and blue. These pigments are water-soluble and easily extracted, yielding natural food colorants. Due to growing demand for natural food colorants, they have numerous applications in the food industry.
A cheap and industrially feasible method has been developed to extract anthocyanins from mulberry fruit that could be used as a fabric dye or food colorant of high color value. Scientists found that, of 31 Chinese mulberry cultivars tested, the total anthocyanin yield varied from 148 to 2725 mg/L of fruit juice. Sugars, acids, and vitamins of the fruit remained intact in the residual juice after removal of the anthocyanins, indicating that the juice may be used for other food products.
Mulberry germplasm resources may be used for:
exploration and collection of fruit yielding mulberry species
their characterization, cataloging, and evaluation for anthocyanin content by using traditional, as well as modern, means and biotechnology tools
developing an information system about these cultivars and varieties
training and global coordination of genetic stocks
evolving suitable breeding strategies to improve the anthocyanin content in potential breeds by collaboration with various research stations in the fields of sericulture, plant genetics and breeding, biotechnology, and pharmacology
Paper
During the Angkorian age of the Khmer Empire of Southeast Asia, monks at Buddhist temples made paper from the bark of mulberry trees. The paper was used to make books, known as kraing.
Tengujo is the thinnest paper in the world. It is produced in Japan and made with kozo (stems of mulberry trees). Traditional Japanese washi paper is often created from parts of the mulberry tree.
Wood
The wood of mulberry trees is used for barrel aging of Țuică, a traditional Romanian plum brandy.
Other
According to 1 Maccabees, the Seleucids used the "blood of grapes and mulberries" to provoke their war elephants in preparation for battle against Jewish rebels.
Culture
A Babylonian etiological myth, which Ovid incorporated in his Metamorphoses, attributes the reddish-purple color of the mulberry fruits to the tragic deaths of the lovers Pyramus and Thisbe. Meeting under a mulberry tree (probably the native Morus nigra), Thisbe commits suicide by sword after Pyramus does the same, he having believed, on finding her bloodstained cloak, that she was killed by a lion. Their splashed blood stained the previously white fruit, and the gods forever changed the mulberry's colour to honour their forbidden love.
The nursery rhyme "Here We Go Round the Mulberry Bush" uses the tree in the refrain, as do some contemporary American versions of the nursery rhyme "Pop Goes the Weasel".
Vincent van Gogh featured the mulberry tree in some of his paintings, notably Mulberry Tree (1889, now in Pasadena's Norton Simon Museum). He painted it after a stay at an asylum, and he considered it a technical success.
| Biology and health sciences | Rosales | null |
168033 | https://en.wikipedia.org/wiki/Sassafras | Sassafras | Sassafras is a genus of three extant and one extinct species of deciduous trees in the family Lauraceae, native to eastern North America and eastern Asia. The genus is distinguished by its aromatic properties, which have made the tree useful to humans.
Description
Sassafras trees grow from tall with many slender sympodial branches and smooth, orange-brown bark or yellow bark. All parts of the plants are fragrant. The species are unusual in having three distinct leaf patterns on the same plant: unlobed oval, bilobed (mitten-shaped), and trilobed (three-pronged); the leaves are hardly ever five-lobed. Three-lobed leaves are more common in Sassafras tzumu and S. randaiense than in their North American counterparts, although three-lobed leaves often occur on S. albidum. The young leaves and twigs are quite mucilaginous and produce a citrus-like scent when crushed. The tiny, yellow flowers are generally six-petaled; S. albidum and (the extinct) S. hesperia are dioecious, with male and female flowers on separate trees, while S. tzumu and S. randaiense have male and female flowers occurring on the same trees. The fruit is a drupe, blue-black when ripe.
The largest known sassafras tree in the world is in Owensboro, Kentucky, and is over high and in circumference.
Taxonomy
The genus Sassafras was first described by the Bohemian botanist Jan Presl in 1825. The name "sassafras", applied by the botanist Nicolas Monardes in 1569, comes from French. Some sources claim it originates from the Latin saxifraga or saxifragus ("stone-breaking"; saxum "rock" + frangere "to break"). Sassafras trees are not within the family Saxifragaceae.
Early European colonists reported that the plant was called winauk by Native Americans in Delaware and Virginia and pauane by the Timucua. Native Americans distinguished between white sassafras and red sassafras, terms which referred to different parts of the same plant but with distinct colors and uses. Sassafras was also known as fennel wood (German Fenchelholz) due to its distinctive aroma.
Species
The genus Sassafras includes four species, three extant and one extinct. Sassafras plants are endemic to North America and East Asia, with two species in each region that are distinguished by some important characteristics, including the frequency of three-lobed leaves (more frequent in East Asian species) and aspects of their sexual reproduction (North American species being dioecious).
The Taiwanese sassafras, found in Taiwan, is treated by some botanists as a distinct genus, Yushunia randaiensis (Hayata) Kamikoti, though this is not supported by recent genetic evidence, which shows Sassafras to be monophyletic.
North America
Sassafras albidum (Nuttall) Nees – sassafras, white sassafras, red sassafras, or silky sassafras; eastern North America, from southernmost Ontario, Canada, through the eastern United States, south to central Florida, and west to southern Iowa and East Texas; formerly also Wisconsin
†Sassafras hesperia (Berry) – western North America, from the Eocene Klondike Mountain Formation of Washington and British Columbia; extinct, known only from fossils
East Asia
Sassafras tzumu (Hemsl.) Hemsl. – Chinese sassafras or tzumu, central and southwestern China
Sassafras randaiense (Hayata) Rehd. – Taiwan
Distribution and habitat
Many Lauraceae are aromatic, evergreen trees or shrubs adapted to high rainfall and humidity, but the genus Sassafras is deciduous. Deciduous sassafras trees lose all of their leaves for part of the year, depending on variations in rainfall. In deciduous tropical Lauraceae, leaf loss coincides with the dry season in tropical, subtropical and arid regions.
Sassafras is commonly found in open woods, along fences, or in fields. It grows well in moist, well-drained, or sandy loam soils and tolerates a variety of soil types, attaining its maximum size in the southern, wetter areas of its distribution.
Sassafras albidum ranges from southern Maine and southern Ontario west to Iowa, and south to central Florida and eastern Texas, in North America. S. tzumu may be found in Anhui, Fujian, Guangdong, Guangxi, Guizhou, Hubei, Hunan, Jiangsu, Sichuan, Yunnan, and Zhejiang, China. S. randaiense is native to Taiwan.
Ecology
The leaves, bark, twigs, stems, and fruits are eaten by birds and mammals in small quantities. For most animals, sassafras is not consumed in large enough quantities to be important, although it is an important deer food in some areas. Carey and Gill rate its value to wildlife as fair, their lowest rating. Sassafras leaves and twigs are consumed by white-tailed deer and porcupines. Other sassafras leaf browsers include groundhogs, marsh rabbits, and American black bears. Rabbits eat sassafras bark in winter. American beavers will cut sassafras stems. Sassafras fruits are eaten by many species of birds, including bobwhite quail, eastern kingbirds, great crested flycatchers, phoebes, wild turkeys, gray catbirds, northern flickers, pileated woodpeckers, downy woodpeckers, thrushes, vireos, and northern mockingbirds. Some small mammals also consume sassafras fruits.
Toxicity
Sassafras oil contains safrole, which may have a carcinogenic effect.
Uses
All parts of sassafras plants, including the roots, stems, twigs, leaves, bark, flowers, and fruit, have been used for culinary, medicinal, and aromatic purposes, both in areas where they are endemic and in areas where they were imported, such as Europe. The wood of sassafras trees has been used as a material for building ships and furniture in China, Europe, and the United States, and sassafras played an important role in the history of the European colonization of the Americas in the 16th and 17th centuries. Sassafras twigs have been used as toothbrushes and fire starters.
Culinary
Sassafras albidum is an important ingredient in some distinct foods of the US. It has been the main ingredient in traditional root beers and sassafras root teas, and the ground leaves of sassafras are a distinctive additive in Louisiana's Cajun cuisine. Sassafras is used in filé powder, a common thickening and flavoring agent in Louisiana gumbo. Methods of cooking with sassafras combine this ingredient native to America with traditional North American and European culinary techniques; they contribute to the unique Creole cuisine, which is heavily influenced by the blend of cultures in Louisiana and other states along the Gulf coast.
Sassafras, once a key ingredient in commercial American root beers, is no longer used, as its oil was banned in 1960 by the US Food and Drug Administration (FDA) in all commercially mass-produced foods and medications. The FDA's directive was in response to health concerns about the carcinogenicity of safrole, a major constituent of sassafras oil, in animal studies.
Sassafras leaves and flowers have also been used in salads, and to flavor fats or cure meats. The young twigs can also be eaten fresh or dried. Additionally, the subterranean portion of the plant can be peeled, dried and boiled to make tea.
Traditional medicine
Numerous Native American tribes used the leaves of sassafras to treat wounds by rubbing the leaves directly into a wound and used different parts of the plant for many medicinal purposes such as treating acne, urinary disorders, and sicknesses that increased body temperature, such as high fevers. East Asian types of sassafras such as S. tzumu (chu mu) and S. randaiense (chu shu) are used in Chinese medicine to treat rheumatism and trauma. Some modern researchers conclude that the oil, roots, and bark of sassafras have analgesic and antiseptic properties. Different parts of the sassafras plant (including the leaves and stems, the bark, and the roots) have been used to treat scurvy, skin sores, kidney problems, toothaches, rheumatism, swelling, menstrual disorders, sexually transmitted diseases, bronchitis, hypertension, and dysentery. It is also used as a fungicide, dentifrice, rubefacient, diaphoretic, perfume, carminative, and sudorific. Before the twentieth century, sassafras enjoyed a great reputation in the medical literature, but it later became valued chiefly for its power to improve the flavor of other medicines.
Sassafras root was an early export from North America, as early as 1609.
Sassafras wood and oil were both used in dentistry. Early toothbrushes were crafted from sassafras twigs or wood because of its aromatic properties. Sassafras was also used as an early dental anesthetic and disinfectant.
Wood
Sassafras albidum is often grown as an ornamental tree for its unusual leaves and aromatic scent. Outside of its native area, it is occasionally cultivated in Europe and elsewhere. The durable and beautiful wood of sassafras plants has been used in shipbuilding and furniture-making in North America, in Asia, and in Europe (once Europeans were introduced to the plant). Sassafras wood was also used by Native Americans in the southeastern United States as a fire-starter because of the flammability of its natural oils found within the wood and the leaves.
Oil and aroma
Steam distillation of dried root bark produces an essential oil which has a high safrole content, as well as significant amounts of varying other chemicals such as camphor, eugenol (including 5-methoxyeugenol), asarone, and various sesquiterpenes. Many other trees contain similarly high percentages of safrole, and their extracted oils are sometimes referred to as sassafras oil, which was once used extensively as a fragrance in perfumes and soaps, in food, and in aromatherapy. Safrole is a precursor for the clandestine manufacture of the drugs MDA and MDMA, and as such, sales and import of sassafras oil (as a safrole-containing mixture of above-threshold concentration) are heavily restricted in the US.
Sassafras oil has also been used as a natural insect or pest deterrent, and in liqueurs (such as the opium-based Godfrey's Cordial), and in homemade liquor to mask strong or unpleasant smells. Sassafras oil has also been added to soap and other toiletries. It is banned in the United States for use in commercially mass-produced foods and drugs by the FDA as a potential carcinogen.
Commercial use
For a more detailed description of uses by indigenous peoples of North America, and a history of the commercial use of Sassafras albidum by Europeans in the United States in the 16th and 17th centuries, see the article on the extant North American species of sassafras, Sassafras albidum.
In modern times, the sassafras plant has been grown and harvested for the extraction of sassafras oil. It is used in a variety of commercial products or their syntheses, such as the insecticide synergistic compound piperonyl butoxide. These plants are primarily harvested for commercial purposes in Asia and Brazil.
| Biology and health sciences | Laurales | Plants |
168037 | https://en.wikipedia.org/wiki/Station%20wagon | Station wagon | A station wagon (US, also wagon) or estate car (UK, also estate) is an automotive body-style variant of a sedan with its roof extended rearward over a shared passenger/cargo volume with access at the back via a third or fifth door (the liftgate, or tailgate), instead of a trunk/boot lid. The body style transforms a standard three-box design into a two-box design—to include an A, B, and C-pillar, as well as a D-pillar. Station wagons can flexibly reconfigure their interior volume via fold-down rear seats to prioritize either passenger or cargo volume.
The American Heritage Dictionary defines a station wagon as "an automobile with one or more rows of folding or removable seats behind the driver and no luggage compartment but an area behind the seats into which suitcases, parcels, etc., can be loaded through a tailgate."
When a model range includes multiple body styles, such as sedan, hatchback, and station wagon, the models typically share their platform, drivetrain, and bodywork forward of the A-pillar, and usually the B-pillar. In 1969, Popular Mechanics said, "Station wagon-style ... follows that of the production sedan of which it is the counterpart. Most are on the same wheelbase, offer the same transmission and engine options, and the same comfort and convenience options."
Station wagons have evolved from their early use as specialized vehicles to carry people and luggage to and from a train station. Demand for the station wagon body style has faded since the 2010s in favor of crossover and SUV designs.
Name
Reflecting the original purpose of transporting people and luggage between country estates and train stations, the station wagon body style is called an "estate car" or "estate" in the United Kingdom or a "wagon" in Australia and New Zealand.
Whether horse-drawn or automotive, the earliest vehicles described as station wagons were utility vehicles or light trucks. The depot hackney or taxi, often built on a Model T chassis with an exposed wood body and most often found around railroad stations, was the predecessor of the station wagon body style in the United States. These early models with exposed wooden bodies became known as woodies. By the 1920s, station wagons had come to be regarded as passenger vehicles.
In Germany, the term "Kombi" is used, which is short for Kombinationskraftwagen ("combination motor vehicle"). "Kombi" is also the term used in Poland.
In Russia and some Post-Soviet countries, this type of car is called "universal".
Manufacturers may designate station wagons across various model lines with a proprietary nameplate for marketing and advertising differentiation. Examples include "Avant", "Break", "Caravan", "Kombi", "Sports Tourer", "Sports Wagon", "Tourer", "Touring", and "Variant".
Design characteristics
Comparison with hatchbacks
Station wagons and hatchbacks have in common a two-box design configuration, a shared interior volume for passengers and cargo as well as a hatch or rear door (often called a tailgate in the case of a station wagon) that is hinged at roof level. Folding rear seats designed to provide a larger space for cargo in place of passenger capacity, are also typical features for station wagons and hatchbacks.
Distinguishing features between hatchbacks and station wagons include:
D-pillar: Station wagons are more likely to have a D-pillar (hatchbacks and station wagons both have A-, B-, and C-pillars).
Cargo volume: Station wagons prioritize passenger and cargo volume—with windows beside the cargo volume. Of the two body styles, a station wagon roof (viewed in profile) more likely extends to the very rearmost of the vehicle, enclosing a full-height cargo volume—a hatchback design (especially a liftback version) is likely to have steeply sloping roofline behind the B- or C-Pillar, prioritizing style over interior volume or cargo capacity, sometimes having a shorter rear overhang and smaller side windows (or no windows at all).
Other differences are more variable and can potentially include:
Cargo floor contour: A station wagon often has a fold-flat floor (for increased cargo capacity), whereas a hatchback is more likely to have a cargo floor with a pronounced contour.
Seating: Some station wagons have three rows of seats, whereas a hatchback will have two at most. The rearmost row of seating in a station wagon is often located in the cargo area and can be front-facing, rear-facing, or side-facing.
Rear suspension: A station wagon may include a reconfigured rear suspension for additional load capacity and to minimize intrusion in the cargo volume.
Rear Door: Hatchbacks usually feature a top-hinged liftgate for cargo access, with variations ranging from a two-part liftgate to a complex tailgate that can function as a full tailgate or a trunk lid. Station wagons have also been equipped with numerous tailgate configurations. Hatchbacks may be called liftbacks when the opening area is very sloped and the door is lifted to open. A design director from General Motors has described the difference: "Where you break the roofline, at what angle, defines the spirit of the vehicle," he said. "You could have a 90-degree break in the back and have a station wagon."
It has become common for station wagons to use a platform shared with other body styles, resulting in many shared components (such as chassis, engine, transmission, bodywork forward of the A-pillar, interior features, and optional features) being used for the wagon, sedan, and hatchback variants of the model range.
Tailgate designs
Many modern station wagons have an upward-swinging, full-width, full-height rear door supported on gas springs, often with a rear window that can swing up independently. A variety of other designs have been employed in the past.
Split gate
The split gate features an upward-swinging window and a downward-swinging tailgate, both manually operated. This configuration was typical in the 1920s through the 1940s, and remained common on many models into the 1960s.
Retractable window
In the early 1950s, tailgates with hand-cranked roll-down rear windows began to appear. Later in the decade, electric power was applied to the tailgate window so it could be operated from the driver's seat and by a key-activated switch in the tailgate. By the early 1970s, this arrangement was available on full-size, intermediate, and compact wagons. The lowered bottom hinged tailgate extended the cargo area floor and could serve as a picnic table for "tailgating."
Side hinge: A side-hinged tailgate that opened like a door was offered on some three-seat station wagons to make it easier for the back-row passengers to enter and exit their rear-facing seats.
Retractable roof
This design featured a retractable rear roof section and a conventional rear tailgate whose window rolled down and whose gate opened downward. The sliding roof section allowed tall objects to be carried in the rear cargo area. This configuration appeared on the 1963–1966 Studebaker Wagonaire station wagon and the 2004–2005 GMC Envoy XUV SUV.
Dual and tri-operating gates
In the United States, Ford's full-size station wagons for 1966 introduced a system marketed as "Magic Doorgate"—a conventional tailgate with retracting rear glass, where the tailgate could either fold down or pivot open on a side hinge—with the rear window retracted in either case. Competitors marketed their versions as a Drop and Swing or Dual Action Tailgate. For 1969, Ford incorporated a design that allowed the rear glass to remain up or down when the door pivoted open on its side hinge, marketing the system, engineered by Donald N. Frey, as the "Three-Way Magic Doorgate".
Similar configurations became standard features on full-size and intermediate station wagons from General Motors, Ford, Chrysler, and American Motors Corporation (AMC). Some full-size GM wagons added a notch in the rear bumper that acted as a step plate; a small portion of the bumper was attached to the tailgate to fill the gap. When opened as a swinging door, this part of the bumper moved away, allowing the depression in the bumper to provide a "step" to ease entry; when the gate was opened by being lowered, or raised to a closed position, the chrome section remained in place, making the bumper "whole".
Clamshell
From 1971 through 1976, full-size General Motors station wagons (Chevrolet Kingswood, Townsman, Brookwood, Bel Air, Impala, and Caprice Estates; Pontiac Safari and Grand Safari; Oldsmobile Custom Cruiser; and the Buick Estate models) featured a 'clam shell' design marketed as the Glide-away tailgate, also called a "disappearing" tailgate because, when open, the tailgate was entirely out of view. On the clamshell design, the rear power-operated glass slid up into the roof and the lower tailgate (with either manual or optional power operation) lowered below the load floor. Manually operated types included a lower tailgate counterbalanced by a torque rod similar to the torque rods used in holding a trunk lid open. It required a push to lower the gate. Raising it required a pull on a handhold integral to the top edge of the retractable gate. Power-assisted operation of both the upper glass and lower tailgate became standard equipment in later model years. Station wagons with this design were available with an optional third row of forward-facing seats accessed by the rear side doors and a folding second-row seat. They could accommodate sheets of plywood or other panels with the rear seats folded. The clamshell design required no increased footprint or operational area to open the cargo area. This enabled access even if the station wagon's rear was parked against a wall.
The GM design, as used in the Pontiac Grand Safari, with a forward-facing third-row seat and the clamshell tailgate, was less popular with consumers and was described as the "least convenient of all wagon arrangements", with difficult passenger egress and problematic tailgate operation, in a comparison against the 1974 AMC Ambassador, Dodge Monaco, and Mercury Colony Park full-size station wagons conducted by Popular Science magazine.
Subsequent GM full-size wagons reverted to the conventional door/gate system.
Lift-gate
A simplified, one-piece lift-gate was used on smaller wagons. The AMC Hornet Sportabout, introduced for the 1971 model year, featured a "liftgate-style hatchback instead of swing-out or fold-down tailgate ... [that] would set a precedent for liftgates in modern SUVs." GM's 1978–1996 mid-size station wagons also returned to the upward-lifting rear window/gate as had been used in the 1940s.
Swing-up window: An upward-lifting, full-height, full-width rear door, where the window on the rear door can be opened independently from the rear door itself. The window is also opened upwards and is held on pneumatic struts. The Renault Laguna II station wagon and Ford Taurus wagon featured this arrangement.
Fold-up license plate: Wagons (including the Volvo Amazon wagon, early models of the Range Rover, and the Subaru Baja) had an upward folding hinged license plate attached to the lower tailgate of the split rear door. When the tailgate was folded down, the plate hung down and remained readable. The wagon versions of the Citroën DS, called the Break, Familiale, or Safari, had a different solution: two number plates were fitted to the tailgate at right angles to each other so one would be visible in either position.
Safety equipment
Cargo barriers may be used to prevent unsecured cargo from causing injuries in the event of sudden deceleration, collision, or a rollover.
Performance models
Performance models of station wagons have included the 1970 Ford Falcon (XY) 'Grand Sport' pack, the 1973 Chevrolet Chevelle Malibu SS-454 and the 1992 BMW M5 (E34).
The 1994 Audi RS2, developed with Porsche, has been described as the world's first performance station wagon. This was followed by the Audi RS4 and Audi RS6.
The 2006 through 2008 Dodge Magnum SRT-8 combined power and performance with station wagon practicality. The cars featured a 6.1 L Hemi V8 engine rated at . The Dodge Magnum SRT-8 shared its platform with the Chrysler 300C Touring SRT-8, which was only sold in Europe.
Other German manufacturers have produced station wagon versions of their performance models, such as the Mercedes-AMG C63, Mercedes-AMG E63, BMW M5 (E60/E61), Volkswagen Golf R and Volkswagen Passat R36 wagons.
The Cadillac CTS-V Wagon introduced for the 2011 model year was considered the most potent production station wagon offered with a manual transmission, and the Corvette-engined version continued until 2014.
History by country
United States
1910 to 1940: Origins and woodie wagons
The first station wagons were built in around 1910 by independent manufacturers producing wooden custom bodies for the Ford Model T chassis. They were initially called "depot hacks" because they worked around train depots as hacks (short for hackney carriage, as taxicabs were then known). They also came to be known as "carryalls" and "suburbans". Station wagons were initially considered commercial vehicles (rather than consumer automobiles) and the framing of the early station wagons was left unfinished, due to the commercial nature of the vehicles. Early station wagons were fixed-roof vehicles, but lacked the sides and glass that would generally enclose the passenger compartment, and included rudimentary benches for seating passengers. Instead of framed glass, side curtains of canvas could be unrolled. More rigid curtains could be snapped to protect passengers from outside elements. The roofs of "woodie" wagons were usually made of stretched canvas treated with a waterproofing dressing. The framing of the wooden bodies was sheathed in steel and coated with tinted lacquer for protection. These wooden bodies required constant maintenance: varnishes required re-coating, and expansion/contraction of the wood meant that bolts and screws needed periodic re-tightening.
Manufacture of the wooden bodies was initially outsourced to custom coachbuilders, because the production of the all-wood bodies was very time-consuming. Eventually, car manufacturers began producing their own station wagon designs. In 1922, the Essex Closed Coach became the first mass-produced car to use a steel body (in this case, a fully enclosed sedan body style). In 1923, Star (a division of Durant Motors) became the first car company to offer a station wagon assembled on its production line (using a wooden wagon body shipped in from an outside supplier). One of the first builders of wagon bodies was the Stoughton Wagon Company from Wisconsin, which began putting custom wagon bodies on the Ford Model T chassis in 1919, and by 1929 the Ford Motor Company was the biggest producer of chassis for station wagons. Since Ford owned its own hardwood forest and mills (at the Ford Iron Mountain Plant in what is today Kingsford, Michigan, in Michigan's Upper Peninsula), it began supplying the wood components for the Model A station wagon. Also in 1929, J.T. Cantrell began supplying woodie bodies for Chrysler vehicles, which continued until 1931.
By the 1930s, station wagons had become expensive and well-equipped vehicles. When it was introduced in 1941, the Chrysler Town & Country was the most expensive car in the company's model range. The first all-steel station wagon body style was the 1935 Chevrolet Suburban. As part of the overall trend in the automotive industry, wooden bodies were superseded by all-steel bodies due to their strength, cost, and durability. The commercial vehicle status was also reflected in those vehicles' registrations. For example, there were special "Suburban" license plates in Pennsylvania used well into the 1960s, long after station wagons became car-based.
1945 to 1970: Steel-bodied station wagons
The first all-steel station wagon was the 1935 Chevrolet Suburban, which was built on the chassis of a panel truck. However, most station wagons were produced with wooden bodies until after World War II.
When automobile production resumed after World War II, technological advances made all-steel station wagon bodies more practical, eliminating the cost, noise, and maintenance associated with wood bodies. The first mass-produced steel-bodied station wagon was the 1946 Willys Station Wagon, based on the chassis of the Jeep CJ-2A. In 1947, Crosley introduced a steel-bodied station wagon version of the Crosley CC Four.
The first postwar station wagon to be based on a passenger car chassis was the 1949 Plymouth Suburban, which used a two-door body style. Several manufacturers produced steel and wooden-bodied station wagons concurrently for several years. For example, Plymouth continued the production of wooden-bodied station wagons until 1950. The final wooden-bodied station wagon built in the United States was the 1953 Buick Super Estate.
By 1951, most station wagons were being produced with all-steel bodies. Station wagons experienced their highest production levels in the United States from the 1950s through the 1970s as a result of the mid-20th-century American baby boom.
The late 1950s through the mid-1960s was also the period of greatest variation in body styles, with models available without a B-pillar (called hardtop or pillarless models) or with a B-pillar, both in 2-door and 4-door variants.
The 1956 Rambler was an all-new design, and the 4-door "Cross Country" featured the industry's first station wagon hardtop. However, the pillarless models could be expensive to produce, added wind noise, and created structural issues with body torque. GM eliminated the pillarless wagon from its lineup in 1959, while AMC and Ford exited the field beginning with their 1960 and 1961 vehicles, leaving Chrysler and Dodge with the body style through the 1964 model year.
1970 to 1990: Competition from minivans
The popularity of the station wagon—particularly full-size station wagons—in the United States was blunted by increased fuel prices caused by the 1973 oil crisis. Then, in 1983, the market for station wagons was further eroded by the Chrysler minivans, based on the K platform. While the K platform was also used for station wagon models (such as the Plymouth Reliant and Dodge Aries), the minivan would soon eclipse them in popularity.
The CAFE standards provided an advantage to minivans (and later SUVs) over station wagons because minivans and SUVs were classified as trucks in the United States and were therefore subject to less stringent fuel economy and emissions regulations. Station wagons remained popular in Europe and in locations where emissions and efficiency regulations did not distinguish between cars and light trucks.
1990 to present: Competition from SUVs
The emergence and popularity of SUVs, which closely approximate the traditional station wagon body style, dealt a further blow to station wagon sales. After years of low sales, the Chevrolet Caprice and the Buick Roadmaster, the last American full-size wagons, were discontinued in 1996. Smaller station wagons were marketed as lower-priced alternatives to SUVs and minivans. Domestic wagons also remained in the Ford, Mercury, and Saturn lines. However, after 2004, these compact station wagons also began to be phased out in the United States. The Ford Taurus wagon was discontinued in 2005, and the Ford Focus station wagon was discontinued in 2008. An exception to this trend was the Subaru Legacy and Subaru Outback station wagon models, which continue to be produced at the Subaru of Indiana plant. With other brands, the niche previously occupied by station wagons is now primarily filled by a similar style of crossover SUV, which generally has car underpinnings and a wagon body.
Imported station wagons, despite remaining popular in other countries, struggled in the United States. European car manufacturers such as Audi, Volvo, BMW, and Mercedes-Benz continued to offer station wagons in their North American product ranges, marketed under labels such as "Avant" (Audi), "Touring" (BMW), and "Estate" (Mercedes-Benz). However, these vehicles had fewer trim and powertrain levels than their sedan counterparts. The Mercedes-Benz E63 AMG in Estate trim is a performance station wagon offered in the U.S. market. The station wagon variants of the smaller Mercedes-Benz C-Class line-up were dropped in 2007, and the BMW 5 Series Touring models were discontinued in 2010 due to slow sales in the United States, with only 400 wagons sold in 2009. In 2012, the Volvo V50 compact station wagon was withdrawn from the U.S. market due to poor sales.
The Cadillac CTS gave rise to a station wagon counterpart, the 2010 CTS Sportwagon, which defied the trend by offering almost as many trim levels as its sedan counterpart. The CTS wagon, particularly in the performance CTS-V trim, received positive reviews until it was discontinued in 2014.
In 2011, the Toyota Prius V introduced hybrid power to the compact wagon market, but was discontinued in 2017 to streamline the Toyota hybrid lineup and focus on the RAV4 Hybrid Crossover SUV.
The 2015 VW Golf Sportwagen was marketed as a sub-compact station wagon in the North American market. This model was withdrawn from the U.S. market after 2019.
In 2016, Volvo reintroduced a large wagon to the U.S. market with the Volvo V90, but only by special order.
Simulated wood paneling
As wooden bodies were replaced by steel from 1945 until 1953, manufacturers applied wooden decorative trim to the steel-bodied wagons as a visual link to the previous wooden style. By the late 1950s, the wooden trim was replaced by "simulated wood" in the form of stick-on vinyl coverings. The point of the woodgrain feature is not that the body is wood, or that it could ever be wood; rather, it is "totally honest in its artificiality."
The design element was also used on cars that were not station wagons, including sedans, pickup trucks, and convertibles.
Unique simulated wood designs included trim on the body pillars of the compact-size Nash Rambler station wagons that went up the roof's drip rail and around the split liftgate. The larger Cross Country station wagon was available with bodyside wood trim that ran unbroken up the C- and D-pillars to a thin strip on the roof above the side windows.
Ford marketing began using "Country Squire" with the 1950 model year for its station wagon body design. From 1950 through 1991, simulated wood trim differentiated the Ford Country Squire station wagon models from the lower trim versions. The "Squire" trim level was an available option in several Ford model ranges, including the Falcon Squire, the Fairlane Squire, and, in the 1970s, the Pinto Squire. The Squire was the highest trim level of any Ford wagon and included additional exterior and better interior trims.
Other woodie-style wagon models produced in significant numbers include the 1984 through 1993 Jeep Grand Wagoneer, which launched the luxury SUV market segment. Simulated wood-grain trim differentiated the top-level models of the 1957–1991 Mercury Colony Park, 1968–1988 Chrysler Town & Country, 1970–1990 Buick Estate, 1971–1992 Oldsmobile Custom Cruiser, and 1969–1972 Chevrolet Kingswood Estate.
Full-size wagons
From the 1950s until the 1990s, many full-size American station wagons could be optioned with a third row of seating in the cargo area (over the rear axle) for a total of nine seats. Before 1956, the third-row seats were forward-facing.
Chrysler's 1957 models had a roof too low to permit a forward-facing seat in the cargo area, so a rear-facing seat was used for the third row.
General Motors adopted the rear-facing third row for most models during 1959–1971 and 1977–1996. However, the 1964–1972 Oldsmobile Vista Cruiser and 1964–1969 Buick Sport Wagon featured raised roof lines beginning above the second-row seat and continuing to the rear tailgate, resulting in the third row of seats being forward-facing. General Motors also used forward-facing third-row seats in its 1971 through 1976 clamshell wagons.
The Ford and Mercury full-size wagons built after 1964 were available with four rows of seats, with the rear two rows in the cargo area facing each other. The third and fourth rows were designed for two people each (although these seats were relatively narrow in later models), giving a total seating capacity of ten people.
The trend since the 1980s for smaller station wagon bodies has limited the seating to two rows, resulting in a total capacity of five people, or six people, if a bench front seat is used. Since the 1990s, full-size station wagons have been largely replaced by SUVs with three-row seating, such as the Chevrolet Suburban, Ford Expedition, Dodge Durango, Land Rover Defender 130 and the Range Rover, Mercedes-Benz GLS-Class, and BMW X7.
Two-door wagons
The first two-door station wagon was the 1946 Willys Jeep Station Wagon. Other early two-door station wagons were the 1951 Nash Rambler and the 1954 Studebaker Conestoga. In 1956, Studebaker introduced three new two-door wagons in Pelham, Parkview, and Pinehurst trims.
General Motors began producing two-door station wagons in 1955 with the "Chevrolet Handyman" and the "Pontiac Chieftain". General Motors also introduced the sportier Chevrolet Nomad and Pontiac Safari to their lineup in 1955. Ford began production of steel-bodied two-door station wagons in 1952 with the Ford Ranch Wagon. In 1956, Ford responded to the Nomad and Safari with the two-door wagon, the Ford Parklane. This was a one-year-only model, succeeded by the Ford Del Rio in 1957.
After the merger of Nash and Hudson, the new company, American Motors (AMC), reintroduced the two-door wagon in the "new" Rambler American line in 1958. The design was recycled with only a few modifications from the original version and targeted buyers looking for "no-frills" economy. American Motors' strategy of reintroducing an old design made for two distinct model runs, one of the few examples where such a strategy has been successful for an automobile manufacturer.
The Chevrolet Vega Kammback, introduced in September 1970, was the first U.S.-made subcompact four-seat wagon and the first two-door wagon from GM in six years. It shared its wheelbase and length with the Vega coupe versions and was produced in the 1971 through 1977 model years.
American Motors offered a two-door wagon version of the AMC Pacer from 1977 through 1980. The wagon shared the features and handling of the coupe, including its wheelbase, while being only longer and increasing cargo capacity to with the rear seat down.
The last two-door wagon marketed in the United States was the 1991–1992 Geo Storm "Wagonback", featuring a long roof and a rear hatch in place of the sloping liftback versions.
United Kingdom
1930s to 1950s
Early estate cars were after-market conversions, with the new bodywork using a wooden frame and either steel or wooden panels. These wooden-bodied cars, produced until the 1960s, were among the most expensive vehicles. Since the 1930s, the term shooting-brake (originally a term for hunting vehicles) has been an alternative, now rarely used, term for estates in the UK.
Later, estates were produced by vehicle manufacturers and included the 1937 Commer (based on the Hillman Minx Magnificent) designed for "operators requiring reliable light transport units" and the chassis for the Supervan "multipurpose utility vehicle, primarily designed for estate transport ... seating accommodation for five persons and the driver ... being quickly convertible to carry anything from hunting equipment to farm produce." Others included the 1952 Morris Minor Traveller, 1952 Morris Oxford Traveller, 1954 Hillman Husky, 1954 Austin A30 Countryman and 1955 Ford Squire. Most of these models were two-door estates, and several models were built on the chassis of relatively small cars.
Manufacturers often chose a specific model name to apply to all their estate cars as a marketing exercise: for example, Austin used the Countryman name, while Morris used Traveller. Some estates were closely derived from existing commercial van models, such as the Austin A30/35 Countryman and the Hillman Husky. Others included the Austin Cambridge Countryman and the Standard Ten Companion.
Rover and Austin produced 4×4 canvas-topped utility vehicles in the 1950s that were available in estate body styles sold as "Station Wagons". They incorporated better seating and trim than standard editions with options such as heaters. Early advertising for the Land Rover version took the name literally, showing the vehicle collecting people and goods from a railway station.
Despite the popularity of station wagons in America, estate offerings in the U.K. from Ford and Vauxhall were limited to factory-approved aftermarket conversions of the Ford Consul and Vauxhall Cresta until the factory-built Vauxhall Victor wagon was introduced in 1958.
1960s to 1970s
One of the smallest estates ever produced was the Morris Mini Traveller / Austin Mini Countryman, introduced in 1960.
Ford's first factory-built estate was the 1963 Ford Cortina.
The 1967 Hillman Husky station wagon version of the Hillman Imp was unusual in being a rear-engined estate.
Ford and Vauxhall produced factory-built estate variants of all three of their respective core models (small-, family- and large-size cars) by the 1970s. The FD- and FE-Series Vauxhall Victors, built between 1966 and 1978, were large cars and featured estate models in the style of an American station wagon with front and rear bench seats and large-capacity petrol engines.
Other estates sold in the United Kingdom included the Morris 1100 (introduced in 1966), Vauxhall Viva (introduced in 1967), Ford Escort and Squire (introduced in 1968), and Vauxhall Chevette (introduced in 1976).
1980s to present
In the decades following, Vauxhall has produced the Astra family car in estate form from 1980 to the present, as well as estate versions of larger cars such as the Cavalier, replaced in 1995 by the Vectra, which was itself replaced in 2008 by the Insignia, staying in production until 2022. The second-generation Insignia was also made in Country Tourer form, a slightly raised crossover version of the standard Insignia Sports Tourer. Between 1978 and 2003, Vauxhall also sold estate versions of two executive cars, the Carlton and the Omega. The Signum of the mid-2000s was an executive take on a Vectra estate and was available only in that body style; the Insignia VXR, a high-performance variant of the first-generation Insignia, could also be had as an estate, with a V6 engine producing 321 bhp.
Ford made a variety of estates, such as the Focus estate from 1998 that replaced the Escort, as well as the estate version of the Mondeo family car (1992–2022), which itself replaced the Sierra's estate variant made by Ford of Britain.
Jaguar produced the X-Type as an estate during the early 2000s, while the larger XF was offered in an estate body style, the XF Sportbrake, across both its first generation (produced from 2012) and its second. The first generation had a 'floating roof' appearance, as its D-pillars were blended with the rear and side windows to make them look like glass. The XFR-S was available with a 5.0 L supercharged V8, while the latter generation's most powerful engine was a 3.0 L supercharged V6.
The Mini Clubman, made from 2007 until 2024 in Oxford, is an estate car made unique by having a split side-opening tailgate across both generations and a shooting brake body style in its first, with a small rearward-opening door on its right-hand side for rear seat access. The second generation was available in the high-performance John Cooper Works trim with up to .
MG marketed the MG5 EV (a rebadged Roewe Ei5, made in China) solely as an estate in the United Kingdom, the first estate since the brand's rebirth. Previously, MG sold the ZT, a badge-engineered Rover 75. This large family car also had an estate version, the ZT-T; a modified ZT-T with over 800 bhp gained the World's Fastest (non-production) Estate Car title in September 2003, with a top speed of . Before its discontinuation, Rover produced various estate cars: the aforementioned 75, also sold in V8 form, and the Rover 400 in the 1990s.
Germany
Germany is the largest market for station wagons in the world, with around 600,000 to 700,000 vehicles sold each year—amounting to 20% of all car sales. German-designed station wagons have been produced by Audi, BMW, Borgward, Mercedes-Benz, Opel, and Volkswagen. Some larger models are available with a third row of seats, such as the rear-facing jump seat for two passengers in the cargo area of the Mercedes-Benz E-Class wagon.
In 1961, Volkswagen introduced the two-door "Variant" body style of the Volkswagen Type 3 (also known as the Volkswagen 1500—later the Volkswagen 1600). The Type 3's rear-engine layout was retained for the station wagon models, but the engine profile was flattened, resulting in a small car with interior room and trunk space in the front. The model was offered through the 1973 model year.
Station wagons produced in East Germany include the 1956–1965 Wartburg 311/312/313, the 1963–1990 Trabant 601 Universal, and the 1966–1988 Wartburg 353 Tourist.
France
In France, almost all station wagon models are called "Break".
The first station wagon produced by a French manufacturer was the Citroën Traction Avant Familiale model introduced in 1935. The first Peugeot station wagon was the Peugeot 203, introduced in 1950.
In 1958, the Citroën ID Break (known as the Safari in English-speaking countries) was introduced, larger than other French station wagon models and of similar size to contemporary full-size station wagons from the United States. It seated eight people, with two front-facing bench seats and two folding inward-facing seats in the cargo area. The 'Familiale' version had a front bench seat, a forward-facing three-space bench seat in the middle, and a folding forward-facing three-seat bench in the rear, providing a versatile nine-seat car. The Citroën ID also had a two-part tailgate and a hydropneumatic suspension that allowed a self-leveling ride height and automatic brake biasing regardless of the load carried. The car could also 'kneel' to the ground to facilitate loading heavy or large items. The successors to the ID, the Citroën CX and Citroën XM, continued to be among the largest station wagon cars produced in Europe. Nevertheless, the model was discontinued in 2000, and a station wagon version was unavailable for its Citroën C6 successor.
The Peugeot 404, introduced in 1960, offered a conventional large station wagon alternative to the innovative Citroëns. Its replacement, the 505 was available in both five-seat and seven-seat 'Familiale' versions. As with the Citroëns, changing demands in the French car market led to the end of the large Peugeot station wagon models in the mid-1990s, with the smaller Peugeot 406 becoming the largest station wagon model in the range from 1995. Similarly to the United States, the decline of traditional Break and Familiale models in France was partly due to the introduction of the minivan in the form of the Renault Espace in 1984.
Sweden
The first station wagon produced in Sweden was the Volvo Duett, introduced in 1953. The Duett two-door wagon was conceived as a dual-function delivery van and people-carrier and was based on the chassis of the PV444 and PV544 sedans.
In 1962, the Volvo Duett was supplemented by the larger but lower Amazon, which had a four-door body and a horizontally split tailgate. Volvo continued production of station wagons with the Volvo 145 (introduced in 1967), then the Volvo 200 Series (introduced in 1974), and the Volvo 700 Series (introduced in 1985). In many markets, the station wagon models of the 700 Series significantly outsold the sedan models. In 1990, the 700 Series was replaced by the Volvo 900 Series, which was sold alongside the smaller Volvo 850 wagon introduced one year later. The 900 Series ended production in 1998, and its successor (the Volvo S80) did not include any wagon models. Volvo station wagons produced since the mid-1990s are the Volvo V40, Volvo V50, Volvo V60, Volvo V70, and Volvo V90, with the V60 and V90 models currently in production.
Saab began producing station wagons in 1959 with the Saab 95 two-door wagon, based on the Saab 93 sedan. After the Saab 95 ended production in 1978, there was a hiatus in station wagon production until 1997, when the company introduced the four-door Saab 9-5 station wagon, produced until 2010. In 2005, a 'Sportwagon' version of the Saab 9-3 was introduced, and it was produced until 2011.
In 2017, station wagons accounted for 31% of all cars sold in Sweden.
Switzerland
In 1983, station wagons represented 15% of the passenger car market, reflecting a trend throughout Europe of increasing popularity through the 1980s, with the vehicles becoming less cargo-oriented.
Japan
The first Japanese station wagon was the 1961 Isuzu Bellel four-door wagon, based on a compact sedan chassis. This was followed by the 1963 Mazda Familia, 1966 Toyota Corolla, 1967 Isuzu Florian, 1969 Mitsubishi Galant, 1973 Mitsubishi Lancer and 1974 Honda Civic wagons. However, Japanese manufacturers did not build station wagons in large volumes until the 1980s when the body style, along with SUVs and minivans, boomed in popularity as leisure vehicles.
Models marketed as passenger station wagons in export markets were often sold as utilitarian "van" models in the home market. Some were not updated during a model's life in Japan for consecutive generations: a sedan might have a model life of four years, but the wagon might not be updated for up to eight years (such as the Toyota Corolla wagon built from 1979 until 1987 and the 1987–1996 Mazda Capella wagon). Station wagons remain popular in Japan, although they have been in slow decline as SUVs and minivans have taken over a large portion of this market since the 2000s, with manufacturers replacing their station wagons with equivalent hatchbacks or crossover SUVs (e.g., Subaru replaced the wagon with the hatchback for its third-generation Impreza range). Several Japanese compact MPVs, such as the Subaru Exiga and Toyota Prius α, take elements from older station wagons while being more in line with their corresponding category.
Korea
South Korean manufacturers do not have a strong tradition of producing station wagons. The first station wagon by a South Korean manufacturer was released in 1995 as the Hyundai Avante Touring (Lantra Sportswagon), followed in early 1996 by the Kia Pride station wagon. Daewoo Motors followed a year later with the first-generation Nubira.
South Korean manufacturer Kia produces both the Cee'd and Optima station wagons designated as Sportswagons with sister company Hyundai offering station wagon versions of the i30 and i40.
Australia
The first Australian-designed car was built in 1948, but locally designed station wagons did not appear until nine years later when the 1957 Holden FE was introduced. Holden's main competitor, the Ford Falcon (XK) introduced wagon models in 1960.
Ford and Holden produced wagon models based on each generation of their large sedans until 2010. Other wagons produced in Australia include the smaller Toyota Camry and Mitsubishi Magna. The Ford and Holden wagons were usually built on a longer wheelbase than their sedan counterparts until the introduction of the Holden Commodore (VE), which switched to sharing the sedan's wheelbase.
Ford ceased production of wagons in Australia when the Ford Falcon (BF) ended production in 2010, primarily due to the declining station wagon and large car market, but also following the 2004 introduction and sales success of the Ford Territory SUV. Production of wagons in Australia ceased in 2017 when the Holden Commodore (VF) ended production.
| Technology | Motorized road transport | null |
168053 | https://en.wikipedia.org/wiki/Amazon%20river%20dolphin | Amazon river dolphin | The Amazon river dolphin (Inia geoffrensis), also known as the boto, bufeo or pink river dolphin, is a species of toothed whale endemic to South America and is classified in the family Iniidae. Three subspecies are currently recognized: I. g. geoffrensis (Amazon river dolphin), I. g. boliviensis (Bolivian river dolphin) and I. g. humboldtiana (Orinoco river dolphin). The position of the Araguaian river dolphin (I. araguaiaensis) within the clade is still unclear. The three subspecies are distributed in the Amazon basin, the upper Madeira River in Bolivia, and the Orinoco basin, respectively.
The Amazon river dolphin is the largest species of river dolphin, with many adult males reaching in weight, and in length. Adults acquire a pink color, more prominent in males, giving the species its nickname "pink river dolphin". Sexual dimorphism is very evident, with males measuring 16% longer and weighing 55% more than females. Like other toothed whales, they have a melon, an organ used for biosonar. The dorsal fin, although short in height, is regarded as long, and the pectoral fins are also large. The fin size, unfused vertebrae, and relative size allow for improved maneuverability when navigating flooded forests and capturing prey.
They have one of the widest-ranging diets among toothed whales, and feed on up to 53 different species of fish, such as croakers, catfish, tetras and piranhas. They also consume other animals such as river turtles, aquatic frogs, and freshwater crabs.
In 2018, this species was ranked by the International Union for Conservation of Nature (IUCN) as endangered, with a declining population. Threats include incidental catch in fishing lines, direct hunting for use as fishing bait or predator control, damming, and pollution; as with many species, habitat loss and continued human development is becoming a greater threat.
It is the only species of river dolphin kept in captivity, mainly in Venezuela and Europe. It is difficult to train and there is a high mortality rate among captive individuals.
Taxonomy
The species Inia geoffrensis was described by Henri Marie Ducrotay de Blainville in 1817. Originally, the Amazon river dolphin belonged to the superfamily Platanistoidea, which constituted all river dolphins, making them a paraphyletic group. Today, however, the Amazon river dolphin has been reclassified into the superfamily Inioidea. There is no consensus on when and how their ancestral species penetrated the Amazon basin; they may have done so during the Miocene from the Pacific Ocean, before the formation of the Andes, or from the Atlantic Ocean.
There is ongoing debate about the classification of species and subspecies. The IUCN and Committee on Taxonomy of the Society for Marine Mammalogy recognize two subspecies: I. g. geoffrensis (Amazon river dolphin) and I. g. boliviensis (Bolivian river dolphin). A third proposed subspecies, I. g. humboldtiana (Orinoco river dolphin), was first described in 1977. Molecular analysis suggests that the Orinoco dolphins are derived from the Amazon population and are not genetically distinct. Comparative morphological research also indicates that I. g. humboldtiana is not distinguishable from I. g. geoffrensis. However, Cañizales (2020) found that I. g. humboldtiana skulls were morphologically distinct and recommended that it be elevated to species status.
In 1994, it was proposed that I. g. boliviensis was a different species based on skull morphology. In 2002, following the analysis of mitochondrial DNA specimens from the Orinoco basin, the Putumayo River (tributary of the Amazon) and the Tijamuchy and Ipurupuru rivers, geneticists proposed that the genus Inia be divided into at least two evolutionary lineages: one restricted to the river basins of Bolivia and the other widely distributed in the Orinoco and Amazon. A recent study, with more comprehensive sampling of the Madeira system, including above and below the Teotonio Rapids (which were thought to obstruct gene flow), found that the Inia above the rapids did not possess unique mtDNA. As such, the species-level distinction once held is not supported by current results. Therefore, the Bolivian river dolphin is currently recognized as a subspecies. In addition, a 2014 study identified a third species in the Araguaia-Tocantins basin, but this designation is not recognized by any international organization, and the Committee on Taxonomy of the Society for Marine Mammalogy suggests this analysis is not persuasive.
Subspecies
Inia geoffrensis geoffrensis inhabits most of the Amazon River, including the Tocantins, Araguaia, lower Xingu and Tapajos rivers, the Madeira up to the rapids of Porto Velho, and the Purus, Yurua, Ica, Caqueta and Branco rivers, as well as the Rio Negro through the Casiquiare channel to San Fernando de Atabapo on the Orinoco River, including its tributary, the Guaviare.
Inia geoffrensis boliviensis has populations in the upper reaches of the Madeira River, upstream of the rapids of Teotonio, in Bolivia. It is confined to the Mamore River and its main tributary, the Iténez.
Inia geoffrensis humboldtiana is found in the Orinoco River basin, including the Apure and Meta rivers. This subspecies is restricted, at least during the dry season, to the waterfalls of the Rio Negro rapids in the Orinoco between and Puerto Ayacucho, and the Casiquiare canal. This subspecies is not recognized by the Committee on Taxonomy of the Society for Marine Mammalogy, or by the IUCN.
Description
The Amazon river dolphin is the largest river dolphin. Adult males reach a maximum length and weight of (average ) and (average ), while females reach a length and weight of (mean ) and (average ). It has very evident sexual dimorphism, with males measuring 16% longer and weighing 55% more than females, making it unique among river dolphins, where females are generally larger than males.
The texture of the body is robust and strong but flexible. Unlike in oceanic dolphins, the cervical vertebrae are not fused, allowing the head to turn 90 degrees. The flukes are broad and triangular, and the dorsal fin, which is keel-shaped, is short in height but very long, extending from the middle of the body to the caudal region. The pectoral fins are large and paddle-shaped. The length of its fins allows the animal to perform a circular movement, allowing for exceptional maneuverability to swim through the flooded forest but decreasing its speed.
The body color varies with age. Newborns and the young have a dark grey tint, which in adolescence transforms into light grey, and in adults turns pink as a result of repeated abrasion of the skin surface. Males tend to be pinker than females due to more frequent trauma from intra-species aggression. The color of adults varies between solid and mottled pink and in some adults the dorsal surface is darker. It is believed that the difference in color depends on the temperature, water transparency, and geographical location. There was one albino on record, kept in Duisburg Zoo's dolphinarium. However, as of 2020, this specimen and one other have died.
The skull of the species is slightly asymmetrical compared to those of other toothed whales. It has a long, thin snout, with 25 to 28 pairs of long, slender teeth on each side of both jaws. Dentition is heterodont, meaning that the teeth differ in shape and length, with differing functions for grabbing and crushing prey. The anterior teeth are conical, while the posterior teeth have ridges on the inside of the crown. Despite its small eyes, the species seems to have good eyesight in and out of the water. It has a melon on the head, the shape of which can be modified by muscular control when used for biosonar. Breathing takes place every 30 to 110 seconds.
Longevity
Life expectancy of the Amazon river dolphin in the wild is unknown, but in captivity, the longevity of healthy individuals has been recorded at between 10 and 30 years. However, a 1986 study found the average longevity of this species in captivity in the United States to be only 33 months. An individual named Baby at the Duisburg Zoo, Germany, lived at least 46 years, spending 45 years and 9 months at the zoo.
Biology and ecology
Amazon river dolphins are commonly seen singly or in twos, but may also occur in pods that rarely contain more than eight individuals. Pods as large as 37 individuals have been seen in the Amazon, but the average is three. In the Orinoco, the largest observed groups number 30, but the average is just above five. When hunting, as many as 35 pink dolphins may work together to obtain their prey. Typically, social bonds occur between mother and calf, but may also be seen in heterogeneous groups or bachelor groups. The largest congregations are seen in areas with abundant food, and at the mouths of rivers. There is significant segregation during the rainy season, with males occupying the river channels, while females and their offspring are located in flooded areas. However, in the dry season, there is no such separation. Due to the high level of prey fish, larger group sizes are seen in large sections that are directly influenced by whitewater (such as main rivers and lakes, especially during low water season) than in smaller sections influenced by blackwater (such as channels and smaller tributaries). In their freshwater habitat they are apex predators, and gatherings depend more on food sources and habitat availability than in oceanic dolphins, where protection from larger predators is necessary.
Captive studies have shown that the Amazon river dolphin is less shy than the bottlenose dolphin, but also less sociable. It is very curious and has a remarkable lack of fear of foreign objects. However, dolphins in captivity may not show the same behavior that they do in their natural environment, where they have been reported to hold the oars of fishermen, rub against boats, pluck underwater plants, and play with sticks, logs, clay, turtles, snakes, and fish.
They are slow swimmers; they commonly travel at speeds of but have been recorded to swim at speeds up to . When they surface, the tips of the snout, melon and dorsal fin appear simultaneously, the tail rarely showing before diving. They can also shake their fins, and pull their tail fin and head above the water to observe the environment. They occasionally jump out of the water, sometimes as high as a meter (3.3 ft). They are harder to train than most other species of dolphin.
Courtship
Adult males have been observed carrying objects in their mouths such as branches or other floating vegetation, or balls of hardened clay. The males appear to carry these objects as a socio-sexual display which is part of their mating system. The behavior is "triggered by an unusually large number of adult males and/or adult females in a group, or perhaps it attracts such into the group. A plausible explanation of the results is that object-carrying is aimed at females and is stimulated by the number of females in the group, while aggression is aimed at other adult males and is stimulated by object-carrying in the group."
Before it was determined that the species has evident sexual dimorphism, it was postulated that river dolphins were monogamous. It was later shown that males are larger than females, and they have been documented exhibiting aggressive sexual behavior in the wild and in captivity. Males often have a significant degree of damage to the dorsal, caudal, and pectoral fins, as well as the blowhole, due to bites and abrasions. They also commonly have numerous secondary tooth-rake scars. This suggests fierce competition for access to females, with a polygynous mating system, though polyandry and promiscuity cannot be excluded.
In captivity, courtship and mating foreplay have been documented. The male takes the initiative by nibbling the fins of the female, but reacts aggressively if the female is not receptive. A high frequency of copulations in a couple was observed; they used three different positions: contacting the womb at right angles, lying head to head, or head to tail.
Reproduction
Breeding is seasonal, and births occur between May and June. The period of birthing coincides with the flood season, and this may provide an advantage because the females and their offspring remain in flooded areas longer than males. As the water level begins to decrease, the density of food sources in flooded areas increases due to loss of space, providing enough energy for infants to meet the high demands required for growth. Gestation is estimated to be around eleven months and captive births take 4 to 5 hours. At birth, calves are long and in captivity have registered a growth of per year. Lactation takes about a year. The interval between births is estimated between 15 and 36 months, and the young dolphins are thought to become independent within two to three years.
The relatively long duration of breastfeeding and parenting suggests a high level of parental care. Most couples observed in their natural environment consist of a female and her calf. This suggests that long periods of parental care contribute to learning and development of the young.
Diet
The diet of the Amazon river dolphin is the most diverse of the toothed whales. It consists of at least 53 different species of fish, grouped in 19 families. The prey size is between , with an average of . The most frequently consumed fish belong to the families Sciaenidae (croakers), Cichlidae (cichlids), Characidae (characins and tetras), and Serrasalmidae (pacus, piranhas and silver dollars). The dolphin's dentition allows it to access shells of river turtles (such as Podocnemis sextuberculata) and freshwater crabs (such as Poppiana argentiniana). The diet is more diverse during the wet season, when fish are spread in flooded areas outside riverbeds, thus becoming more difficult to catch. The diet becomes more selective during the dry season when prey density is greater.
Usually, these dolphins are active and feeding throughout the day and night. However, they are predominantly crepuscular. They consume about 5.5% of their body weight per day. They sometimes take advantage of the disturbances made by boats to catch disoriented prey. Sometimes, they associate with the distantly related tucuxi (Sotalia fluviatilis), and giant otters (Pteronura brasiliensis) to hunt in a coordinated manner, by gathering and attacking fish stocks at the same time. Apparently, there is little competition for food between these species, as each prefers different prey. It has also been observed that captive dolphins share food.
Echolocation
Amazonian rivers are often very murky, and the Amazon river dolphin is therefore likely to depend much more on its sense of echolocation than on vision when navigating and finding prey. However, echolocation in shallow waters and flooded forests may produce many echoes to keep track of. For each click produced, a multitude of echoes are likely to return to the echolocating animal almost on top of each other, making object discrimination difficult. This may be why the Amazon river dolphin produces less powerful clicks than other similarly sized toothed whales. By sending out clicks of lower amplitude, only nearby objects will cast back detectable echoes, and hence fewer echoes need to be sorted out, but the cost is a reduced biosonar range. Toothed whales generally do not produce a new echolocation click until all relevant echoes from the previous click have been received, so if detectable echoes are only reflected back from nearby objects, the echoes will quickly return, and the Amazon river dolphin is then able to click at a high rate. This in turn allows these animals to maintain a high acoustic update rate on their surroundings, which may aid prey tracking when echolocating in shallow rivers and flooded forests with plenty of hiding places for the prey. While hunting in murky water, they emit series of clicks, 30 to 80 per second, and listen for the echoes that bounce off their prey.
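To make the timing argument concrete, the following is a rough back-of-the-envelope sketch (not taken from any study of this species): assuming a typical sound speed in water of about 1,500 m/s, the two-way travel time of an echo bounds how fast an animal can click if it waits out each click's echoes.

```python
# Back-of-the-envelope: if a click is only emitted after all echoes from the
# previous click have returned, the two-way travel time of sound limits the
# click rate. The 1500 m/s sound speed is an assumed typical value for water.
SOUND_SPEED = 1500.0  # m/s

def max_click_rate(range_m):
    """Maximum clicks per second while waiting out echoes from range_m away."""
    round_trip = 2 * range_m / SOUND_SPEED  # seconds for the echo to return
    return 1.0 / round_trip

for r in (5, 20, 100):  # assumed biosonar ranges in metres
    print(f"range {r:>3} m -> up to {max_click_rate(r):6.1f} clicks/s")
```

At assumed ranges of roughly 10–25 m, this simple bound lands in the same ballpark as the 30 to 80 clicks per second reported above, consistent with the short-range, high-rate strategy described.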
Communication
Like other dolphins, river dolphins use whistling tones to communicate. The issuance of these sounds is related to the time they return to the surface before diving, suggesting a link to food. Acoustic analysis revealed that the vocalisations are different in structure from the typical whistles of other species of dolphins.
Distribution and population
Amazon river dolphins are the most widespread river dolphins. They are present in six countries in South America: Bolivia, Brazil, Colombia, Ecuador, Peru, and Venezuela, in an area covering about . The boundaries are set by waterfalls, such as those on the Xingu and Tapajós rivers in Brazil, as well as by very shallow water. A series of rapids and waterfalls in the Madeira River have isolated one population, recognized as the subspecies I. g. boliviensis, in the southern part of the Amazon basin in Bolivia.
They are also distributed in the basin of the Orinoco River, except the Caroni River and the upper Caura River in Venezuela. The only connection between the Orinoco and the Amazon is through the Casiquiare canal. The distribution of dolphins in the rivers and surrounding areas depends on the time of year; in the dry season they are located in the river beds, but in the rainy season, when the rivers overflow, they disperse to the flooded areas, both the forests and the plains.
Population estimates are difficult to compare due to differences in the methodology used. In a study conducted on the stretch of the Amazon called the Solimões River, with a length of between the cities of Manaus and Tabatinga, a total of 332 ± 55 individuals was sighted per survey. Density was estimated at 0.08–0.33 animals per square kilometer in the main channels, and 0.49 to 0.93 animals per square kilometer in the branches. In another study, on a stretch of at the confluence of Colombia, Brazil and Peru, 345 individuals were observed, with densities of 4.8 animals per square kilometer in the tributaries, 2.7 around the islands, and 2.0 along the banks. Additionally, another study was conducted in the Amazon at the mouth of the Caqueta River for six days; it found that density was higher along the riverbanks, at 3.7 animals per kilometer, decreasing towards the center of the river. In studies conducted during the rainy season, the density observed in the flood plain was 18 animals per square kilometer, while on the banks of rivers and lakes it ranged from 1.8 to 5.8 individuals per square kilometer. These observations suggest that the Amazon river dolphin is found at higher densities than any other cetacean.
Habitat and migration
The Amazon river dolphin is found in most of the area's aquatic habitats, including river basins, major courses of rivers, canals, river tributaries, lakes, and the ends of rapids and waterfalls. Cyclical changes in the water levels of rivers take place throughout the year. During the dry season, dolphins occupy the main river channels, and during the rainy season, they can move easily to smaller tributaries, to the forest, and to floodplains.
Males and females appear to have selective habitat preferences, with the males returning to the main river channels when water levels are still high, while the females and their offspring remain in the flooded areas as long as possible; probably because it decreases the risk of aggression by males toward the young and predation by other species.
In the Pacaya-Samiria National Reserve, Peru, photo-identification is used to recognize individuals based on pigmentation patterns, scars and abnormalities in the beak. 72 individuals were recognized, of which 25 were observed again between 1991 and 2000. The intervals between sightings ranged from one day to 7.5 years. The maximum range of motion was , with an average of . The longest distance travelled in one day was , with an average of . In a previous study conducted in the central Amazon River, a dolphin was observed to move only a few dozen kilometers between the dry season and the wet season. However, three of the 160 reviewed animals were observed over from where they were first registered. Research in 2011 concluded that photo-identification by skilled operatives using high-quality digital equipment could be a useful tool in monitoring population size, movements and social patterns.
Interactions with humans
In captivity
The Amazon river dolphin has historically been kept in dolphinariums. Today, only one exists in captivity, at Zoologico de Quistochoca in Peru. Several hundred were captured between the 1950s and 1970s, and were distributed in dolphinariums throughout the US, Europe, and Japan. Around 100 went to US dolphinariums, and of that, only 20 survived; the last died at the Pittsburgh Zoo in 2002.
Threats
The region of the Amazon in Brazil has an extension of containing diverse fundamental ecosystems. One of these ecosystems is the floodplain, or várzea forest, which is home to a large number of fish species that are an essential resource for human consumption. The várzea is also a major source of income through excessive local commercialized fishing. Várzea consists of muddy river waters containing a vast number and diversity of nutrient-rich species. The abundance of distinct fish species lures the Amazon river dolphin into the várzea areas of high water during the seasonal flooding.
In addition to attracting predators such as the Amazon river dolphin, these high-water occurrences are an ideal location to draw in the local fisheries. Human fishing activities directly compete with the dolphins for the same fish species, the tambaqui (Colossoma macropomum) and the pirapitinga (Piaractus brachypomus), resulting in deliberate or unintentional catches of the Amazon river dolphin. The local fishermen overfish, and when Amazon river dolphins remove the commercial catch from the nets and lines, they damage the equipment and the catch, generating ill will among the local fishermen. The negative reactions of the local fishermen are also attributed to the Brazilian Institute of Environment and Renewable Natural Resources' prohibition on killing the Amazon river dolphin while not compensating the fishermen for the damage done to their equipment and catch.
During the process of catching the commercialized fish, the Amazon river dolphins get caught in the nets and exhaust themselves until they die, or the local fishermen deliberately kill the entangled dolphins. The carcasses are discarded, consumed, or used as bait to attract a scavenger catfish, the piracatinga (Calophysus macropterus). The use of the Amazon river dolphin carcass as bait for the piracatinga dates back to 2000. Increasing demand for the piracatinga has created a market for distribution of the Amazon river dolphin carcasses to be used as bait throughout these regions.
Of the 15 dolphin carcasses found in the Japurá River in 2010–2011 surveys, 73% of the dolphins were killed for bait, disposed of, or abandoned in entangled gillnets. The data do not fully represent the actual overall number of deaths of the Amazon river dolphins, whether accidental or intentional, because a variety of factors make it extremely complicated to record and medically examine all the carcasses. Scavenger species feed upon the carcasses, and the complexity of the river currents make it nearly impossible to locate all of the dead animals. More importantly, the local fishermen do not report these deaths out of fear that a legal course of action will be taken against them, as the Amazon river dolphin and other cetaceans are protected under a Brazilian federal law prohibiting any takes, harassments, and kills of the species.
The Amazon river is also threatened by the dumping of mercury into its waters from industrial mining, along with other harsh chemicals. Like deforestation and burning, mercury in the water of the Amazon river is very dangerous to the fauna of the river. In 2019, F. Mosquera-Guerra et al. published a study that showed the presence of mercury in Amazon river dolphins. They analyzed muscle tissue from dolphins of different taxa across the Amazon basin and found a high concentration of mercury in an adult male of I. g. geoffrensis (pink dolphin) from the Tapajos River (Brazil).
In September 2023, 154 Amazon river dolphins died in Brazil's Lake Tefé following record-high water temperatures of and reduced water levels during a drought. While there are ongoing studies by the Mamirauá Institute for Sustainable Development to determine the cause of the deaths, the leading hypothesis is that the elevated temperature caused algae in the lake to release a toxin that attacks the central nervous system.
Conservation
In 2008, the International Whaling Commission (IWC) expressed concern about botos captured for use as bait in the Central Amazon, an emerging problem that has spread on a large scale. The species is listed in Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), and Appendix II of the Convention on the Conservation of Migratory Species of Wild Animals, because it has an unfavorable conservation status or would benefit significantly from international co-operation organized by tailored agreements.
According to a previous assessment by the Scientific Committee of the International Whaling Commission in 2000, the population of botos appeared large, with little or no evidence of decline in numbers or range. However, increasing human intervention in their habitat is expected to be the most likely cause of future declines in range and population. A series of recommendations were issued to ensure proper monitoring of the species, among them the implementation and publication of studies on population structure, a record of the distribution of the species, and information about potential threats, such as the magnitude of fishing operations and the location of pipelines.
In September 2012, Bolivian President Evo Morales enacted a law to protect the dolphin and declared it a national treasure.
In 2018, the species was listed as endangered on the IUCN Red List.
Increasing pollution and gradual destruction of the Amazon rainforest add to the vulnerability of the species. The biggest threats are deforestation and other human activities that contribute to disrupt and alter their environment. Another source of concern is the difficulty in keeping these animals alive in captivity, due to intra-species aggression and low longevity. Captive breeding is not considered a conservation option for this species.
The Global Declaration for River Dolphins seeks to reverse the decline of river dolphin populations throughout the world. As of early 2024, 11 of the 14 countries that have river dolphins have signed the declaration.
In mythology
In traditional Amazon River folklore, at night, an Amazon river dolphin becomes a handsome young man who seduces girls, impregnates them, and then returns to the river in the morning to become a dolphin again. Similarly, the female becomes a beautiful, well-dressed, wealthy-looking young woman. She goes to the house of a married man, places him under a spell to keep him quiet, takes him to a thatched hut, and visits him every year on the same night she seduced him. On the seventh night of visiting, she changes the man into a baby, female or male, and soon transfers it into his own wife's womb. The mythology is said to describe the cycle of a baby. This dolphin shapeshifter is called an encantado. The myth has been suggested to have arisen partly because dolphin genitalia bear a resemblance to those of humans. Others believe the myth served (and still serves) as a way of hiding the incestuous relations which are quite common in some small, isolated communities along the river. In the area, tales relate that it is bad luck to kill a dolphin. Legend also states that if a person makes eye contact with an Amazon river dolphin, they will have lifelong nightmares. According to the pink Amazon river dolphin myth, this creature takes the form of a human and seduces men and women down to the Underworld of Encante. This underworld is said to be an 'Atlantis-like paradise', yet no one has come back from it alive. Myths say that whoever kills an Amazon dolphin will have bad luck, and worse luck awaits whoever eats one. Local legends also state that the dolphin is the guardian of the Amazonian manatee, and that if one should wish to find a manatee, one must first make peace with the dolphin.
Associated with these legends is the use of various fetishes, such as dried eyeballs and genitalia. These may or may not be accompanied by the intervention of a shaman. A recent study has shown, despite the claim of the seller and the belief of the buyers, none of these fetishes is derived from the boto. They are derived from Sotalia guianensis, are most likely harvested along the coast and the Amazon River delta, and then are traded up the Amazon River. In inland cities far from the coast, many, if not most, of the fetishes are derived from domestic animals such as sheep and pigs.
Gallery
| Biology and health sciences | Toothed whale | Animals |
168072 | https://en.wikipedia.org/wiki/Handcuffs | Handcuffs | Handcuffs are restraint devices designed to secure an individual's wrists in proximity to each other. They comprise two parts, linked together by a chain, a hinge, or rigid bar. Each cuff has a rotating arm which engages with a ratchet that prevents it from being opened once closed around a person's wrist. Without a key, handcuffs cannot be removed without specialist knowledge, and a handcuffed person cannot move their wrists more than a few centimetres or inches apart, making many tasks difficult or impossible.
Handcuffs are frequently used by law enforcement agencies worldwide to prevent suspected criminals from escaping from police custody.
Styles
Metal handcuffs
There are three main types of contemporary metal handcuffs: chain (cuffs are held together by a short chain), hinged (since hinged handcuffs permit less movement than a chain cuff, they are generally considered to be more secure), and rigid solid bar handcuffs. While bulkier to carry, rigid handcuffs permit several variations in cuffing. Hiatts Speedcuffs are rigid handcuffs used by most police forces in the United Kingdom. Both rigid and hinged cuffs can be used one-handed to apply pain-compliance/control techniques that are not workable with the chain type of cuff. Various accessories are available to improve the security or increase the rigidity of handcuffs, including boxes that fit over the chain or hinge and can themselves be locked with a padlock.
In 1933, the Royal Canadian Mounted Police used a type called "Mitten Handcuffs" to prevent criminals from being able to grab an object such as the officer's gun. While used by some in law enforcement, it was never popular.
Handcuffs may be manufactured from various metals, including carbon steel, stainless steel and aluminium, or from synthetic polymers.
Sometimes two pairs of handcuffs are needed to restrain a person with an exceptionally large waistline because the hands cannot be brought close enough together; in this case, one cuff on one pair of handcuffs is handcuffed to one of the cuffs on the other pair, and then the remaining open handcuff on each pair is applied to the person's wrists. Oversized handcuffs are available from a number of manufacturers.
The National Museum of Australia has a number of handcuffs in its collection dating from the late 19th and early 20th centuries. These include 'T'-type 'Come Along', 'D'-type and 'Figure-8' handcuffs.
Plastic handcuffs
Plastic restraints, known as wrist ties, riot cuffs, plasticuffs, flexicuffs, flex-cuffs, tri-fold cuffs, zapstraps, zipcuffs, or zip-strips, are lightweight, disposable plastic strips resembling electrical cable ties. They can be carried in large quantities by soldiers and police and are therefore well-suited for situations where many may be needed, such as during large-scale protests and riots. In recent years, airlines have begun to carry plastic handcuffs as a way to restrain disruptive passengers. Disposable restraints could be considered to be cost-inefficient; they cannot be loosened, and must be cut off to permit a restrained subject to be fingerprinted, or to attend to bodily functions. It is not unheard of for a single subject to receive five or more sets of disposable restraints in their first few hours in custody.
However, the uses described above mean that cheap restraints are available for situations where steel handcuffs would normally lie unused for long periods. Products have been introduced that address the concerns of disposable restraints, including plastic restraints that can be opened or loosened with a key; they are more expensive than conventional plastic restraints, can be used only a very limited number of times, and are not as strong as conventional disposable restraints, let alone modern metal handcuffs. In addition, plastic restraints are believed by many to be more likely than metal handcuffs to inflict nerve or soft-tissue damage on the wearer.
Legcuffs
Legcuffs are similar to handcuffs, but have a larger inner perimeter so that they fit around a person's ankles. Some models consist of elliptically contoured cuffs so that they widely adapt to the anatomy of the ankle, minimizing pressure on the Achilles' tendon. Standard-type leg irons have a longer chain connecting the two cuffs compared to handcuffs.
On occasions when a suspect exhibits extremely aggressive behavior, leg irons may be used in addition to handcuffs; sometimes the chain connecting the leg irons to one another is looped around the chain of the handcuffs, and then the leg irons are applied, resulting in the person being "hog-tied". In a few rare cases, hog-tied persons lying on their stomachs have died from positional asphyxia, making the practice highly controversial, and leading to its being severely restricted, or even completely banned, in many localities.
Legcuffs are also used when transporting prisoners outside of a secure area to prevent escape attempts. When placed in standard legcuffs, a prisoner can still take normal steps and therefore walk independently, but is prevented from running. When the connecting chain between the legcuffs is shortened, the prisoner has difficulty even walking, so the flight risk is further minimized; in this case, the prisoner has to be carried by the transporting officers or moved in a wheelchair.
In some countries, prisoners are kept permanently shackled in legcuffs even when held in their cells. Such long-term use of leg shackles can quickly result in pressure marks on the prisoner's ankles and cause serious harm. Such treatment of prisoners is therefore commonly considered a cruel and unusual punishment.
Combinations
Some prisoners being transported from custody to outside locations, for appearances at court, to medical facilities, etc., will wear handcuffs augmented with a belly chain. In this type of arrangement a metal, leather, or canvas belt is attached to the waist, sometimes with a locking mechanism. The handcuffs are secured to the belly chain and the prisoner's hands are kept at waist level. This allows a relative degree of comfort for the prisoner during prolonged internment in the securing device, while providing a greater degree of restriction to movement than simply placing the handcuffs on the wrists in the front. When the handcuffs are concealed by a handcuff cover and secured at the prisoner's waist by a belly chain, this combination will result in a rather more severe restraint and the restrained person may feel discomfort or even pain.
For added security, some transport restraints have a pair of leg irons connected to a pair of handcuffs or a belly chain by a longer connector chain. These combinations further restrict the detainee's freedom of movement and prevent them from escaping.
Security
Double locks
Handcuffs with double locks have a detent which, when engaged, stops the cuff from ratcheting tighter, preventing the wearer from tightening them. Tightening could be intentional or the result of struggling; if tightened, the handcuffs may cause nerve damage or loss of circulation. Some wearers might also deliberately tighten the cuffs to attempt an escape, by having the officer loosen them and trying to slip free while they are loose. Double locks also make picking the locks more difficult.
There exist three kinds of double locks as described in a Smith & Wesson brochure:
Lever lock Movement of a lever on the cuff causes the detent to move into a position that locks the bolt. No tool is required to double lock this type of cuff.
Push pin lock A small peg on the key is inserted endwise into a hole to engage the detent.
Slot lock These also are actuated with a peg, but in this case it is inserted into a slot and moved sideways to engage the detent.
Double locks are generally disengaged by inserting the key and rotating it in the opposite direction from that used to unlock the cuff.
Keys
Most modern handcuffs in the United States, the United Kingdom and Latin America can be opened with the same standard universal handcuff key. This allows for easier transport of prisoners. However, there are handcuff makers who use keys based on different standards. Maximum security handcuffs require special keys. Handcuff keys usually do not work with thumbcuffs. The Cuff Lock handcuff key padlock uses this same standard key.
To prevent the restrained person from eventually opening the handcuffs with a handcuff key, a handcuff cover may be used to conceal the keyholes of the handcuffs.
Hinged and rigid handcuffs
When applied behind the back with the keyholes facing away from the hands, handcuffs connected by a hinge or rigid bar are much more secure than handcuffs connected by a chain: even with a key in hand, it is difficult or impossible for the wearer to reach the keyholes.
Hand positioning
In the past, police officers typically handcuffed an arrested person with their hands in front, but since approximately the mid-1960s behind-the-back handcuffing has been the standard. The vast majority of police academies in the United States today also teach their recruits to apply handcuffs so that the palms of the suspect's hands face outward after the handcuffs are applied. The Jacksonville, Florida Police Department, the Los Angeles County Sheriff's Department and others are notable exceptions, as they favor palms-together handcuffing. This helps prevent radial neuropathy or handcuff neuropathy during extended periods of restraint. Suspects are handcuffed with the keyholes facing up (away from the hands) to make it difficult to open them even with a key or improvised lock-pick.
Because a person's hands are used in breaking falls, being handcuffed introduces a significant risk of injury if the prisoner trips or stumbles, in addition to injuries sustained from overly tight handcuffs causing handcuff neuropathy. Police officers having custody of the person need to be ready to catch a stumbling prisoner.
As soon as restraints go on, the officer has full liability. The risk of the prisoner losing balance is higher if the hands are handcuffed behind the back than if they are handcuffed in front; however, the risk of using fisted hands together as a weapon increases with hands in front.
Escaping
Since handcuffs are only intended as temporary restraints, they are not the most complicated of locks.
There are several ways of escaping from handcuffs:
slipping hands out when the hands are smaller than the wrist
lock picking
releasing the pawl with a shim
opening the handcuffs with a duplicate key, often hidden on the body of the performer before the performance.
The above methods are often used in escapology. As most people's hands are larger than their wrists, the first method was much easier before the invention of modern ratchet cuffs, which can be adjusted to a variety of sizes. Modern handcuffs are generally ratcheted until they are too tight to be slipped off the hands. However, slipping out of ratchet cuffs is still possible. During his shows, Harry Houdini was frequently secured with multiple pairs of handcuffs. Any pair that was too difficult to be picked was placed on his upper arms. Being very muscular, his upper arms were far larger than his hands. Once he had picked the locks on the lower pairs of handcuffs, the upper pair could simply be slipped off.
It is also technically possible to break free from handcuffs by applying massive amounts of force from one's arms to cause the device to split apart or loosen enough to squeeze one's hands through; however, this takes exceptional strength (especially with handcuffs made of steel). This also puts an immense amount of pressure on the biceps and triceps muscles, and when tried by suspects (even unsuccessfully) can lead to injury, including bruising around the wrists, or tearing the muscles used (including pulling them off their attachments to the bones).
Another common method of escaping (or attempting to escape) from being handcuffed behind the back, is that one would, from a sitting or lying position, bring one's legs up as high upon one's torso as possible, then push one's arms down to bring the handcuffs below one's feet, finally pulling the handcuffs up using one's arms to the front of one's body. This can lead to awkward or painful positions depending on how the handcuffs were applied, and typically requires a good amount of flexibility. It can also be done from a standing position, where, with some degree of effort, the handcuffed hands are slid around the hips and down the buttocks to the feet; then sliding each foot up and over the cuffs. These maneuvers, and the reverse (otherwise impossible) maneuver of bringing the handcuffed hands up behind the back and forwards over the head and then down in front, can be done fairly easily by some people who were born without collarbones because of the inherited deformity called cleidocranial dysostosis.
From this position, one has a better chance of attempting to use a tool (such as a shim or lockpick) to work one's way out of the handcuffs.
National regulations regarding depiction of handcuffed suspects
In Japan, if an arrested suspect of crime was photographed or filmed while handcuffed, their hands have to be pixelated if it is used on TV or in the newspapers. This is because Kazuyoshi Miura, who had been arrested on suspicion of the murder of his wife, brought a successful case to court arguing that being pictured in handcuffs implied guilt, and had prejudiced the trial.
Similarly, in France, a law prohibits media from airing images of people in handcuffs, or otherwise restrained, before they have been convicted by a court. Also in Italy the Code of criminal procedure prohibits the publication of images of people deprived of personal liberty while they are handcuffed or subjected to other means of physical coercion. According to the Italian independent authority on data protection, the same prohibition applies when the image of the handcuffs is pixelated.
In Hong Kong, people being arrested and led away in handcuffs are usually given the chance by the policemen to have their heads covered by a black cloth bag.
In Sri Lanka, women are generally not handcuffed by the police.
In mid-July 2020, the High Court in Windhoek, Namibia, prohibited the use of handcuffs under any circumstances, ruling that the practice violates the constitution.
Use in BDSM
Police handcuffs are sometimes used in sexual bondage and BDSM activities. This is potentially unsafe, because they were not designed for this purpose, and their use can result in nerve injury (handcuff neuropathy) or other tissue damage. Bondage cuffs were designed specifically for this application. They were designed using the same model of soft restraints used on psychiatric patients because they can be worn for long periods of time. Many such models can be fastened shut with padlocks.
Metaphorical uses
Handcuffs are familiar enough for the word to be used in metaphors, e.g.:
Golden handcuffs – an incentive given to an employee by a firm, most or all of which must be repaid to the company if the employee leaves the firm within a specified period of time.
As a verb, meaning to be kept from doing something by another's action or inaction – "He said that his computer work is handcuffed by his internet provider's refusal to accept .zip files."
In fantasy football, one strategy is to have both a star player and his backup, or "handcuff", on a team's roster of players. If the star is injured, the handcuff will be his likely replacement.
Handcuffs gesture
In the 'handcuffs gesture' the arms are crossed at the wrists in front of the chest, to represent being handcuffed. Uses are:
By police, to mean "Allow yourself to be handcuffed".
It has been known to be used to mean "We support our comrade who has been arrested".
By José Mourinho at a football match at F.C. Internazionale Milano.
| Technology | Law enforcement equipment | null |
168369 | https://en.wikipedia.org/wiki/Membrane%20protein | Membrane protein | Membrane proteins are common proteins that are part of, or interact with, biological membranes. Membrane proteins fall into several broad categories depending on their location. Integral membrane proteins are a permanent part of a cell membrane and can either penetrate the membrane (transmembrane) or associate with one or the other side of a membrane (integral monotopic). Peripheral membrane proteins are transiently associated with the cell membrane.
Membrane proteins are common, and medically important—about a third of all human proteins are membrane proteins, and these are targets for more than half of all drugs. Nonetheless, compared to other classes of proteins, determining membrane protein structures remains a challenge in large part due to the difficulty in establishing experimental conditions that can preserve the correct (native) conformation of the protein in isolation from its native environment.
Function
Membrane proteins perform a variety of functions vital to the survival of organisms:
Membrane receptor proteins relay signals between the cell's internal and external environments.
Transport proteins move molecules and ions across the membrane. They can be categorized according to the Transporter Classification database.
Membrane enzymes may have many activities, such as oxidoreductase, transferase or hydrolase.
Cell adhesion molecules allow cells to identify each other and interact; examples include proteins involved in the immune response.
The localization of proteins in membranes can be predicted reliably using hydrophobicity analyses of protein sequences, i.e. the localization of hydrophobic amino acid sequences.
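As a minimal sketch of how such a hydrophobicity analysis can work (the Kyte–Doolittle scale values are standard, but the 19-residue window, the 1.6 threshold, and the toy sequence below are illustrative assumptions, not values taken from this article):

```python
# Sliding-window hydropathy analysis: stretches of ~19 residues whose average
# Kyte-Doolittle hydropathy is high are candidate transmembrane helices.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
      'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
      'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
      'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def hydropathy_windows(seq, window=19, threshold=1.6):
    """Yield (start, mean hydropathy) for each window at or above threshold."""
    for i in range(len(seq) - window + 1):
        mean = sum(KD[aa] for aa in seq[i:i + window]) / window
        if mean >= threshold:
            yield i, mean

# Toy sequence: hydrophilic ends flanking a strongly hydrophobic stretch.
seq = "MKKDDEE" + "AILLVVFFAILLVVFFAIL" + "RRDDEEKK"
for start, mean in hydropathy_windows(seq):
    print(f"possible transmembrane segment at residue {start}, mean {mean:.2f}")
```

Dedicated prediction tools refine this idea with better scales and statistics, but the underlying signal is the same: a run of hydrophobic residues long enough to span the lipid bilayer.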
Integral membrane proteins
Integral membrane proteins are permanently attached to the membrane. Such proteins can be separated from the biological membranes only using detergents, nonpolar solvents, or sometimes denaturing agents. They can be classified according to their relationship with the bilayer:
Integral polytopic proteins are transmembrane proteins that span across the membrane more than once. These proteins may have different transmembrane topology. These proteins have one of two structural architectures:
Helix bundle proteins, which are present in all types of biological membranes;
Beta barrel proteins, which are found only in outer membranes of Gram-negative bacteria, and outer membranes of mitochondria and chloroplasts.
Bitopic proteins are transmembrane proteins that span across the membrane only once. Transmembrane helices from these proteins have significantly different amino acid distributions to transmembrane helices from polytopic proteins.
Integral monotopic proteins are integral membrane proteins that are attached to only one side of the membrane and do not span the whole way across.
Peripheral membrane proteins
Peripheral membrane proteins are temporarily attached either to the lipid bilayer or to integral proteins by a combination of hydrophobic, electrostatic, and other non-covalent interactions. Peripheral proteins dissociate following treatment with a polar reagent, such as a solution with an elevated pH or high salt concentrations.
Integral and peripheral proteins may be post-translationally modified, with added fatty acid, diacylglycerol or prenyl chains, or GPI (glycosylphosphatidylinositol), which may be anchored in the lipid bilayer.
Polypeptide toxins
Polypeptide toxins and many antibacterial peptides, such as colicins or hemolysins, and certain proteins involved in apoptosis, are sometimes considered a separate category. These proteins are water-soluble but can undergo significant conformational changes, form oligomeric complexes and associate irreversibly or reversibly with the lipid bilayer.
In genomes
Membrane proteins, like soluble globular proteins, fibrous proteins, and disordered proteins, are common. It is estimated that 20–30% of all genes in most genomes encode membrane proteins. For instance, about 1000 of the ~4200 proteins of E. coli are thought to be membrane proteins, 600 of which have been experimentally verified to be membrane resident. In humans, current thinking suggests that fully 30% of the genome encodes membrane proteins.
In disease
Membrane proteins are the targets of over 50% of all modern medicinal drugs. Among the human diseases in which membrane proteins have been implicated are heart disease, Alzheimer's and cystic fibrosis.
Purification of membrane proteins
Although membrane proteins play an important role in all organisms, their purification has historically been, and continues to be, a huge challenge for protein scientists. In 2008, 150 unique structures of membrane proteins were available, and by 2019 only 50 human membrane proteins had had their structures elucidated. In contrast, approximately 25% of all proteins are membrane proteins. Their hydrophobic surfaces make structural and especially functional characterization difficult. Detergents can be used to render membrane proteins water-soluble, but these can also alter protein structure and function. Membrane proteins can also be made water-soluble by engineering the protein sequence, replacing selected hydrophobic amino acids with hydrophilic ones, taking great care to maintain secondary structure while revising overall charge.
Affinity chromatography is one of the best solutions for purification of membrane proteins. The polyhistidine-tag is a commonly used tag for membrane protein purification, and the alternative rho1D4 tag has also been successfully used.
| Biology and health sciences | Cell parts | Biology |
168389 | https://en.wikipedia.org/wiki/Arithmetic%20progression | Arithmetic progression | An arithmetic progression or arithmetic sequence is a sequence of numbers such that the difference between consecutive terms remains constant throughout the sequence. The constant difference is called the common difference of the arithmetic progression. For instance, the sequence 5, 7, 9, 11, 13, 15, . . . is an arithmetic progression with a common difference of 2.
If the initial term of an arithmetic progression is $a_1$ and the common difference of successive members is $d$, then the $n$-th term of the sequence ($a_n$) is given by $a_n = a_1 + (n - 1)d$.
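As a quick illustration (the function below is my own sketch, not part of the article), the $n$-th term formula can be evaluated directly in Python:

def nth_term(a1: float, d: float, n: int) -> float:
    """Return a_n = a1 + (n - 1) * d for a 1-indexed term number n."""
    return a1 + (n - 1) * d

# The progression 5, 7, 9, 11, ... from the opening example:
print(nth_term(5, 2, 4))  # 11 -> the 4th term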
A finite portion of an arithmetic progression is called a finite arithmetic progression and sometimes just called an arithmetic progression. The sum of a finite arithmetic progression is called an arithmetic series.
History
According to an anecdote of uncertain reliability, in primary school Carl Friedrich Gauss reinvented the formula $\frac{n(n+1)}{2}$ for summing the integers from 1 through $n$, for the case $n = 100$, by grouping the numbers from both ends of the sequence into pairs summing to 101 and multiplying by the number of pairs. Regardless of the truth of this story, Gauss was not the first to discover this formula. Similar rules were known in antiquity to Archimedes, Hypsicles and Diophantus; in China to Zhang Qiujian; in India to Aryabhata, Brahmagupta and Bhaskara II; and in medieval Europe to Alcuin, Dicuil, Fibonacci, Sacrobosco, and anonymous commentators of Talmud known as Tosafists. Some find it likely that its origin goes back to the Pythagoreans in the 5th century BC.
Sum
Computation of the sum 2 + 5 + 8 + 11 + 14. When the sequence is reversed and added to itself term by term, the resulting sequence has a single repeated value in it, equal to the sum of the first and last numbers (2 + 14 = 16). Thus 16 × 5 = 80 is twice the sum.
The sum of the members of a finite arithmetic progression is called an arithmetic series. For example, consider the sum:
$2 + 5 + 8 + 11 + 14.$
This sum can be found quickly by taking the number $n$ of terms being added (here 5), multiplying by the sum of the first and last number in the progression (here 2 + 14 = 16), and dividing by 2:
$\frac{n(a_1 + a_n)}{2}.$
In the case above, this gives the equation:
$2 + 5 + 8 + 11 + 14 = \frac{5(2 + 14)}{2} = \frac{5 \times 16}{2} = 40.$
This formula works for any arithmetic progression of real numbers beginning with $a_1$ and ending with $a_n$. For example, $\frac{3}{2} + \frac{1}{2} + \left(-\frac{1}{2}\right) = \frac{3\left(\frac{3}{2} + \left(-\frac{1}{2}\right)\right)}{2} = \frac{3}{2}$.
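A minimal check in Python (illustrative only; the helper name is mine) that the closed form matches direct addition:

def ap_sum(a1: float, d: float, n: int) -> float:
    """Sum of the first n terms via Gauss's pairing formula."""
    an = a1 + (n - 1) * d          # last term
    return n * (a1 + an) / 2

terms = [2, 5, 8, 11, 14]
assert ap_sum(2, 3, 5) == sum(terms) == 40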
Derivation
To derive the above formula, begin by expressing the arithmetic series in two different ways:
$S_n = a_1 + (a_1 + d) + (a_1 + 2d) + \cdots + (a_1 + (n-1)d).$
Rewriting the terms in reverse order:
$S_n = (a_1 + (n-1)d) + (a_1 + (n-2)d) + \cdots + (a_1 + d) + a_1.$
Adding the corresponding terms of both sides of the two equations and halving both sides:
$S_n = \frac{n}{2}\left(2a_1 + (n-1)d\right).$
This formula can be simplified as:
$S_n = \frac{n}{2}\left(a_1 + a_n\right).$
Furthermore, the mean value of the series can be calculated via:
$\overline{a} = \frac{S_n}{n} = \frac{a_1 + a_n}{2}.$
The formula is essentially the same as the formula for the mean of a discrete uniform distribution, interpreting the arithmetic progression as a set of equally probable outcomes.
Product
The product of the members of a finite arithmetic progression with an initial element $a_1$, common difference $d$, and $n$ elements in total is determined in a closed expression:
$P_n = a_1 a_2 \cdots a_n = d^n \frac{\Gamma\left(\frac{a_1}{d} + n\right)}{\Gamma\left(\frac{a_1}{d}\right)},$
where $\Gamma$ denotes the Gamma function. The formula is not valid when $\frac{a_1}{d}$ is negative or zero.
This is a generalization of the facts that the product of the progression $1 \times 2 \times \cdots \times n$ is given by the factorial $n!$ and that the product $m \times (m+1) \times (m+2) \times \cdots \times (n-1) \times n$ for positive integers $m$ and $n$ is given by $\frac{n!}{(m-1)!}$.
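The closed form can be checked numerically with Python's math.gamma; this sketch (the function name is my own) assumes $a_1/d > 0$:

import math

def ap_product(a1: float, d: float, n: int) -> float:
    """Closed-form product d**n * Gamma(a1/d + n) / Gamma(a1/d)."""
    x = a1 / d
    return d ** n * math.gamma(x + n) / math.gamma(x)

direct = math.prod(3 + 5 * k for k in range(4))   # 3 * 8 * 13 * 18
assert round(ap_product(3, 5, 4)) == direct == 5616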
Derivation
The product can be rewritten in terms of the rising factorial:
$P_n = a_1 (a_1 + d)(a_1 + 2d) \cdots (a_1 + (n-1)d) = \prod_{k=0}^{n-1} (a_1 + kd) = d^n \prod_{k=0}^{n-1} \left(\frac{a_1}{d} + k\right) = d^n \left(\frac{a_1}{d}\right)^{\overline{n}},$
where $x^{\overline{n}}$ denotes the rising factorial.
By the recurrence formula $\Gamma(z + 1) = z\,\Gamma(z)$, valid for a complex number $z$ with positive real part,
$\Gamma(z + 2) = (z + 1)\,\Gamma(z + 1) = (z + 1)\,z\,\Gamma(z),$
$\Gamma(z + 3) = (z + 2)\,\Gamma(z + 2) = (z + 2)(z + 1)\,z\,\Gamma(z),$
so that
$\frac{\Gamma(z + m)}{\Gamma(z)} = \prod_{k=0}^{m-1} (z + k)$
for a positive integer $m$ and a complex number $z$ with positive real part.
Thus, if $\frac{a_1}{d} > 0$,
$\prod_{k=0}^{n-1} \left(\frac{a_1}{d} + k\right) = \frac{\Gamma\left(\frac{a_1}{d} + n\right)}{\Gamma\left(\frac{a_1}{d}\right)},$
and, finally,
$P_n = d^n \, \frac{\Gamma\left(\frac{a_1}{d} + n\right)}{\Gamma\left(\frac{a_1}{d}\right)}.$
Examples
Example 1
Taking the example 3, 8, 13, 18, 23, 28, …, the product of the terms of the arithmetic progression given by $a_n = 3 + 5(n-1)$ up to the 50th term is
$P_{50} = 5^{50} \cdot \frac{\Gamma\left(\frac{3}{5} + 50\right)}{\Gamma\left(\frac{3}{5}\right)} \approx 3.78 \times 10^{98}.$
Example 2
The product of the first 10 odd numbers $(1, 3, 5, 7, 9, 11, 13, 15, 17, 19)$ is given by
$1 \cdot 3 \cdot 5 \cdots 19 = \frac{20!}{2^{10} \cdot 10!} = 654{,}729{,}075.$
Standard deviation
The standard deviation of any arithmetic progression is
$\sigma = |d| \sqrt{\frac{(n-1)(n+1)}{12}},$
where $n$ is the number of terms in the progression and $d$ is the common difference between terms. The formula is essentially the same as the formula for the standard deviation of a discrete uniform distribution, interpreting the arithmetic progression as a set of equally probable outcomes.
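An illustrative check in Python (the helper name is mine) against the population standard deviation of an explicit progression:

import statistics

def ap_stdev(d: float, n: int) -> float:
    """|d| * sqrt((n**2 - 1) / 12), independent of the initial term."""
    return abs(d) * ((n * n - 1) / 12) ** 0.5

terms = [2 + 3 * k for k in range(5)]             # 2, 5, 8, 11, 14
assert abs(ap_stdev(3, 5) - statistics.pstdev(terms)) < 1e-12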
Intersections
The intersection of any two doubly infinite arithmetic progressions is either empty or another arithmetic progression, which can be found using the Chinese remainder theorem. If each pair of progressions in a family of doubly infinite arithmetic progressions have a non-empty intersection, then there exists a number common to all of them; that is, infinite arithmetic progressions form a Helly family. However, the intersection of infinitely many infinite arithmetic progressions might be a single number rather than itself being an infinite progression.
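A sketch of the idea in Python (the function and its interface are my own, not from the article): intersecting the doubly infinite progressions $\{a + im\}$ and $\{b + jn\}$ using the compatibility condition from the Chinese remainder theorem:

from math import gcd

def intersect(a: int, m: int, b: int, n: int):
    """Return (c, L) so the intersection is {c + k*L}, or None if empty."""
    g = gcd(m, n)
    if (b - a) % g:
        return None                    # residues incompatible: empty
    lcm = m // g * n
    x = a
    while (x - b) % n:                 # step along the first progression
        x += m
    return x % lcm, lcm

print(intersect(1, 4, 3, 6))           # (9, 12): 9, 21, 33, ... lie in both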
Number of arithmetic subsets of length k of the set {1,...,n}
Let $c_{n,k}$ denote the number of arithmetic subsets of length $k$ one can make from the set $\{1, \ldots, n\}$ and let $m$ be defined as:
$m = \left\lfloor \frac{n-1}{k-1} \right\rfloor.$
Then:
$c_{n,k} = mn - \frac{(k-1)\,m(m+1)}{2}.$
As an example, if $n = 7$ and $k = 3$, one expects $c_{7,3} = 3 \cdot 7 - \frac{2 \cdot 3 \cdot 4}{2} = 9$ arithmetic subsets and, counting directly, one sees that there are 9; these are $\{1,2,3\}, \{2,3,4\}, \{3,4,5\}, \{4,5,6\}, \{5,6,7\}, \{1,3,5\}, \{2,4,6\}, \{3,5,7\}, \{1,4,7\}$.
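The closed form can be verified by brute force for small cases; a Python sketch (the function names are mine):

from itertools import combinations

def c_formula(n: int, k: int) -> int:
    """Closed form m*n - (k-1)*m*(m+1)/2 with m = (n-1) // (k-1)."""
    m = (n - 1) // (k - 1)
    return m * n - (k - 1) * m * (m + 1) // 2

def c_brute(n: int, k: int) -> int:
    """Count k-subsets of {1,...,n} whose consecutive differences agree."""
    is_ap = lambda s: len({b - a for a, b in zip(s, s[1:])}) == 1
    return sum(is_ap(s) for s in combinations(range(1, n + 1), k))

assert c_formula(7, 3) == c_brute(7, 3) == 9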
| Mathematics | Sequences | null |
168393 | https://en.wikipedia.org/wiki/Polystyrene | Polystyrene | Polystyrene (PS) is a synthetic polymer made from monomers of the aromatic hydrocarbon styrene. Polystyrene can be solid or foamed. General-purpose polystyrene is clear, hard, and brittle. It is an inexpensive resin per unit weight. It is a poor barrier to air and water vapor and has a relatively low melting point. Polystyrene is one of the most widely used plastics, with the scale of its production being several million tonnes per year. Polystyrene is naturally transparent, but can be colored with colorants. Uses include protective packaging (such as packing peanuts and optical disc jewel cases), containers, lids, bottles, trays, tumblers, disposable cutlery, in the making of models, and as an alternative material for phonograph records.
As a thermoplastic polymer, polystyrene is in a solid (glassy) state at room temperature but flows if heated above about 100 °C, its glass transition temperature. It becomes rigid again when cooled. This temperature behaviour is exploited for extrusion (as in Styrofoam) and also for molding and vacuum forming, since it can be cast into molds with fine detail. The temperature behaviour can be controlled by photocrosslinking.
Under ASTM standards, polystyrene is regarded as not biodegradable. It is accumulating as a form of litter in the outside environment, particularly along shores and waterways, especially in its foam form, and in the Pacific Ocean.
History
Polystyrene was discovered in 1839 by Eduard Simon, an apothecary from Berlin. From storax, the resin of the Oriental sweetgum tree Liquidambar orientalis, he distilled an oily substance that he named styrol, now called styrene. Several days later, Simon found that it had thickened into a jelly, now known to have been a polymer, that he dubbed styrol oxide ("Styroloxyd") because he presumed that it had resulted from oxidation (styrene oxide is a distinct compound). By 1845, the Jamaican-born chemist John Buddle Blyth and the German chemist August Wilhelm von Hofmann had shown that the same transformation of styrol took place in the absence of oxygen. They called the product "meta styrol"; analysis showed that it was chemically identical to Simon's Styroloxyd. In 1866 Marcellin Berthelot correctly identified the formation of meta styrol/Styroloxyd from styrol as a polymerisation process. About 80 years later it was realized that heating of styrol starts a chain reaction that produces macromolecules, following the thesis of German organic chemist Hermann Staudinger (1881–1965). This eventually led to the substance receiving its present name, polystyrene.
The company I. G. Farben began manufacturing polystyrene in Ludwigshafen, about 1931, hoping it would be a suitable replacement for die-cast zinc in many applications. Success was achieved when they developed a reactor vessel that extruded polystyrene through a heated tube and cutter, producing polystyrene in pellet form.
Ray McIntire (1918–1996), a chemical engineer of Dow Chemical, rediscovered a process first patented in early 1930s by Swedish inventor Carl Munters. According to the Science History Institute, "Dow bought the rights to Munters's method and began producing a lightweight, water-resistant, and buoyant material that seemed perfectly suited for building docks and watercraft and for insulating homes, offices, and chicken sheds." In 1944, Styrofoam was patented.
Before 1949, chemical engineer Fritz Stastny (1908–1985) developed pre-expanded PS beads by incorporating aliphatic hydrocarbons, such as pentane. These beads are the raw material for molding parts or extruding sheets. BASF and Stastny applied for a patent that was issued in 1949. The molding process was demonstrated at the Kunststoff Messe 1952 in Düsseldorf. Products were named Styropor.
The crystal structure of isotactic polystyrene was reported by Giulio Natta.
In 1954, the Koppers Company in Pittsburgh, Pennsylvania, developed expanded polystyrene (EPS) foam under the trade name Dylite. In 1960, Dart Container, the largest manufacturer of foam cups, shipped their first order.
Structure and production
In chemical terms, polystyrene is a long chain hydrocarbon wherein alternating carbon centers are attached to phenyl groups (a derivative of benzene). Polystyrene's chemical formula is (C8H8)n; it contains the chemical elements carbon and hydrogen.
The material's properties are determined by short-range van der Waals attractions between polymer chains. Since the molecules consist of thousands of atoms, the cumulative attractive force between the molecules is large. When heated (or deformed at a rapid rate, due to a combination of viscoelastic and thermal insulation properties), the chains can take on a higher degree of conformational freedom and slide past each other. This intermolecular weakness (versus the high intramolecular strength due to the hydrocarbon backbone) confers flexibility and elasticity. The ability of the system to be readily deformed above its glass transition temperature allows polystyrene (and thermoplastic polymers in general) to be readily softened and molded upon heating. Extruded polystyrene is about as strong as unalloyed aluminium but much more flexible and much less dense (1.05 g/cm3 for polystyrene vs. 2.70 g/cm3 for aluminium).
Production
Polystyrene is an addition polymer that results when styrene monomers polymerize (interconnect). In the polymerization, the carbon-carbon π bond of the vinyl group is broken and a new carbon-carbon σ bond is formed, attaching the carbon of another styrene monomer to the chain. Since only one kind of monomer is used in its preparation, it is a homopolymer. The newly formed σ bond is stronger than the π bond that was broken, which is why it is difficult to depolymerize polystyrene. A few thousand monomers typically make up a chain of polystyrene, giving a molar mass of 100,000–400,000 g/mol.
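A back-of-envelope check of that range (the atomic masses below are standard values, not taken from this article): a styrene repeat unit C8H8 weighs about 104 g/mol, so chains of roughly 1,000 to 4,000 monomers land in the quoted molar-mass window. A Python sketch:

C, H = 12.011, 1.008                    # standard atomic masses, g/mol
repeat_unit = 8 * C + 8 * H             # C8H8 repeat unit, ~104.2 g/mol
for n_monomers in (1_000, 4_000):
    print(n_monomers, round(n_monomers * repeat_unit))
# 1000 -> ~104152 g/mol, 4000 -> ~416608 g/mol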
Each carbon of the backbone has tetrahedral geometry, and those carbons that have a phenyl group (benzene ring) attached are stereogenic. If the backbone were to be laid as a flat elongated zig-zag chain, each phenyl group would be tilted forward or backward compared to the plane of the chain.
The relative stereochemical relationship of consecutive phenyl groups determines the tacticity, which affects various physical properties of the material.
Tacticity
In polystyrene, tacticity describes the extent to which the phenyl group is uniformly aligned (arranged at one side) in the polymer chain. Tacticity has a strong effect on the properties of the plastic. Standard polystyrene is atactic. The diastereomer where all of the phenyl groups are on the same side is called isotactic polystyrene, which is not produced commercially.
Atactic polystyrene
The only commercially important form of polystyrene is atactic, in which the phenyl groups are randomly distributed on both sides of the polymer chain. This random positioning prevents the chains from aligning with sufficient regularity to achieve any crystallinity. The plastic has a glass transition temperature Tg of ≈90 °C. Polymerization is initiated with free radicals.
Syndiotactic polystyrene
Ziegler–Natta polymerization can produce an ordered syndiotactic polystyrene with the phenyl groups positioned on alternating sides of the hydrocarbon backbone. This form is highly crystalline with a Tm (melting point) of about 270 °C. Syndiotactic polystyrene resin is currently produced under the trade name XAREC by the Idemitsu corporation, which uses a metallocene catalyst for the polymerisation reaction.
Degradation
Polystyrene is relatively chemically inert. While it is waterproof and resistant to breakdown by many acids and bases, it is easily attacked by many organic solvents (e.g. it dissolves quickly when exposed to acetone), chlorinated solvents, and aromatic hydrocarbon solvents. Because of its resilience and inertness, it is used for fabricating many objects of commerce. Like other organic compounds, polystyrene burns to give carbon dioxide and water vapor, in addition to other thermal degradation by-products. Polystyrene, being an aromatic hydrocarbon, typically combusts incompletely as indicated by the sooty flame.
The process of depolymerizing polystyrene into its monomer, styrene, is called pyrolysis. This involves using high heat and pressure to break down the chemical bonds between each styrene unit. Pyrolysis is usually carried out at temperatures of up to 430 °C. The high energy cost of doing this has made commercial recycling of polystyrene back into styrene monomer difficult.
Organisms
Polystyrene is generally considered to be non-biodegradable. However, certain organisms are able to degrade it, albeit very slowly.
In 2015, researchers discovered that mealworms, the larvae form of the darkling beetle Tenebrio molitor, could digest and subsist healthily on a diet of EPS. About 100 mealworms could consume between 34 and 39 milligrams of this white foam in a day. The droppings of mealworm were found to be safe for use as soil for crops.
In 2016, it was also reported that superworms (Zophobas morio) may eat expanded polystyrene (EPS). A group of high school students at Ateneo de Manila University found that, compared to Tenebrio molitor larvae, Zophobas morio larvae may consume greater amounts of EPS over longer periods of time.
In 2022, scientists identified several bacterial genera, including Pseudomonas, Rhodococcus and Corynebacterium, in the gut of superworms that encode enzymes associated with the degradation of polystyrene and the breakdown product styrene.
The bacterium Pseudomonas putida is capable of converting styrene oil into the biodegradable plastic PHA. This may someday be of use in the effective disposal of polystyrene foam. Notably, the polystyrene must first undergo pyrolysis to be turned into styrene oil.
Forms produced
Polystyrene is commonly injection molded, vacuum formed, or extruded, while expanded polystyrene is either extruded or molded in a special process.
Polystyrene copolymers are also produced; these contain one or more other monomers in addition to styrene. In recent years the expanded polystyrene composites with cellulose and starch have also been produced. Polystyrene is used in some polymer-bonded explosives (PBX).
Sheet or molded polystyrene
Polystyrene (PS) is used for producing disposable plastic cutlery and dinnerware, CD "jewel" cases, smoke detector housings, license plate frames, plastic model assembly kits, and many other objects where a rigid, economical plastic is desired. Production methods include thermoforming (vacuum forming) and injection molding.
Polystyrene Petri dishes and other laboratory containers such as test tubes and microplates play an important role in biomedical research and science. For these uses, articles are almost always made by injection molding, and often sterilized post-molding, either by irradiation or by treatment with ethylene oxide. Post-mold surface modification, usually with oxygen-rich plasmas, is often done to introduce polar groups. Much of modern biomedical research relies on the use of such products; they, therefore, play a critical role in pharmaceutical research.
Thin sheets of polystyrene are used in polystyrene film capacitors as it forms a very stable dielectric, but has largely fallen out of use in favor of polyester.
Foams
Polystyrene foams are 95–98% air. Polystyrene foams are good thermal insulators and are therefore often used as building insulation materials, such as in insulating concrete forms and structural insulated panel building systems. Grey polystyrene foam, incorporating graphite, has superior insulation properties.
Carl Munters and John Gudbrand Tandberg of Sweden received a US patent for polystyrene foam as an insulation product in 1935 (USA patent number 2,023,204).
PS foams also exhibit good damping properties, and are therefore used widely in packaging. The trademark Styrofoam by the Dow Chemical Company is informally used (mainly in the US and Canada) for all foamed polystyrene products, although strictly it should only be used for "extruded closed-cell" polystyrene foams made by Dow.
Foams are also used for non-weight-bearing architectural structures (such as ornamental pillars).
Expanded polystyrene (EPS)
Expanded polystyrene (EPS) is a rigid and tough, closed-cell foam with a normal density range of 11 to 32 kg/m3. It is usually white and made of pre-expanded polystyrene beads. The manufacturing process for EPS conventionally begins with the creation of small polystyrene beads. Styrene monomers (and potentially other additives) are suspended in water, where they undergo free-radical addition polymerization. The polystyrene beads formed by this mechanism may have an average diameter of around 200 μm. The beads are then permeated with a "blowing agent", a material that enables the beads to be expanded. Pentane is commonly used as the blowing agent. The beads are added to a continuously agitated reactor with the blowing agent, among other additives, and the blowing agent seeps into pores within each bead. The beads are then expanded using steam.
EPS is used for food containers, molded sheets for building insulation, and packing material either as solid blocks formed to accommodate the item being protected or as loose-fill "peanuts" cushioning fragile items inside boxes. EPS also has been widely used in automotive and road safety applications such as motorcycle helmets and road barriers on automobile race tracks.
A significant portion of all EPS products are manufactured through injection molding. Mold tools tend to be manufactured from steels (which can be hardened and plated) and aluminum alloys. The molds are filled through a split line via a channel system of gates and runners. EPS is colloquially called "styrofoam" in the Anglosphere, a genericization of Dow Chemical's brand of extruded polystyrene.
EPS in building construction
Sheets of EPS are commonly packaged as rigid panels. A common European size is 100 cm x 50 cm (depending on the intended type of connection and gluing technique, actual panels are often 99.5 cm x 49.5 cm or 98 cm x 48 cm); 120 cm x 60 cm panels are less common. Common thicknesses range from 10 mm to 500 mm. Many customizations, additives, and thin additional external layers on one or both sides are often added to help with various properties. An example of this is lamination with cement board to form a structural insulated panel.
Thermal conductivity is measured according to EN 12667. Typical values range from 0.032 to 0.038 W/(m⋅K) depending on the density of the EPS board. The value of 0.038 W/(m⋅K) was obtained at 15 kg/m3 while the value of 0.032 W/(m⋅K) was obtained at 40 kg/m3, according to the datasheet of K-710 from StyroChem Finland. Adding fillers (graphites, aluminum, or carbons) has recently allowed the thermal conductivity of EPS to reach around 0.030–0.034 W/(m⋅K) (as low as 0.029 W/(m⋅K)); such filled EPS has a grey/black color that distinguishes it from standard EPS. Several EPS producers have developed varieties of this increased-thermal-resistance EPS for use in the UK and EU.
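For orientation (the board thickness here is a made-up example, not a figure from the article), conductivity converts to thermal resistance as R = thickness / λ; a Python sketch:

thickness_m = 0.10                      # a hypothetical 100 mm EPS board
for lam in (0.032, 0.038):              # W/(m*K), the range quoted above
    print(f"lambda={lam}: R = {thickness_m / lam:.2f} m^2*K/W")
# lambda=0.032: R = 3.12 ; lambda=0.038: R = 2.63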
Water vapor diffusion resistance (μ) of EPS is around 30–70.
ICC-ES (International Code Council Evaluation Service) requires that EPS boards used in building construction meet ASTM C578 requirements. One of these requirements is that the limiting oxygen index of EPS as measured by ASTM D2863 be greater than 24 volume %. Typical EPS has an oxygen index of around 18 volume %; thus, a flame retardant is added to styrene or polystyrene during the formation of EPS.
The boards containing a flame retardant when tested in a tunnel using test method UL 723 or ASTM E84 will have a flame spread index of less than 25 and a smoke-developed index of less than 450. ICC-ES requires the use of a 15-minute thermal barrier when EPS boards are used inside of a building.
According to the EPS-IA ICF organization, the EPS typically used for insulated concrete forms (expanded polystyrene concrete) is either Type II or Type IX EPS according to ASTM C578. EPS blocks or boards used in building construction are commonly cut using hot wires.
Extruded polystyrene (XPS)
Extruded polystyrene foam (XPS) consists of closed cells. Compared with EPS, it offers a smoother surface, higher stiffness and reduced thermal conductivity. The density range is about 28–34 kg/m3.
Extruded polystyrene material is also used in crafts and model building, in particular architectural models. Because of the extrusion manufacturing process, XPS does not require facers to maintain its thermal or physical property performance. Thus, it makes a more uniform substitute for corrugated cardboard. Thermal conductivity varies between 0.029 and 0.039 W/(m·K) depending on bearing strength/density and the average value is ≈0.035 W/(m·K).
Water vapor diffusion resistance (μ) of XPS is around 80–250.
Commonly extruded polystyrene foam materials include:
Styrofoam, also known as Blue Board, produced by DuPont
Depron, a thin insulation sheet also used for model building
Water absorption of polystyrene foams
Although they are closed-cell foams, neither expanded nor extruded polystyrene is entirely waterproof or vapor-proof. In expanded polystyrene, interstitial gaps between the expanded closed-cell pellets form an open network of channels between the bonded pellets, and this network of gaps can become filled with liquid water. If the water freezes into ice, it expands and can cause polystyrene pellets to break off from the foam. Extruded polystyrene is also permeable to water molecules and cannot be considered a vapor barrier.
Water-logging commonly occurs over a long period in polystyrene foams that are constantly exposed to high humidity or are continuously immersed in water, such as in hot tub covers, in floating docks, as supplemental flotation under boat seats, and for below-grade exterior building insulation constantly exposed to groundwater. Typically an exterior vapor barrier such as impermeable plastic sheeting or a sprayed-on coating is necessary to prevent saturation.
Oriented polystyrene
Oriented polystyrene (OPS) is produced by stretching extruded PS film, improving visibility through the material by reducing haziness and increasing stiffness. This is often used in packaging where the manufacturer would like the consumer to see the enclosed product. Some benefits of OPS are that it is less expensive to produce than other clear plastics such as polypropylene (PP), polyethylene terephthalate (PET), and high-impact polystyrene (HIPS), and it is less hazy than HIPS or PP. The main disadvantage of OPS is that it is brittle and will crack or tear easily.
Co-polymers
Ordinary (homopolymeric) polystyrene has an excellent property profile with regard to transparency, surface quality and stiffness. Its range of applications is further extended by copolymerization and other modifications (e.g. blends with PC and syndiotactic polystyrene). Several copolymers based on styrene are used: the brittleness of homopolymeric polystyrene is overcome by elastomer-modified styrene-butadiene copolymers. Copolymers of styrene and acrylonitrile (SAN) are more resistant to thermal stress, heat and chemicals than homopolymers and are also transparent. Copolymers called ABS have similar properties and can be used at low temperatures, but they are opaque.
Styrene-butadiene co-polymers
Styrene-butadiene co-polymers can be produced with a low butadiene content. They include PS-I and SBC (see below); both co-polymers are impact resistant. PS-I is prepared by graft co-polymerization, SBC by anionic block co-polymerization, which makes it transparent in the case of appropriate block size.
If a styrene-butadiene co-polymer has a high butadiene content, styrene-butadiene rubber (SBR) is formed.
The impact strength of styrene-butadiene co-polymers is based on phase separation: polystyrene and polybutadiene are not soluble in each other (see Flory–Huggins solution theory). Co-polymerization creates a boundary layer without complete mixing. The butadiene fractions (the "rubber phase") assemble to form particles embedded in a polystyrene matrix. A decisive factor for the improved impact strength of styrene-butadiene copolymers is their higher absorption capacity for deformation work. Without applied force, the rubber phase initially behaves like a filler. Under tensile stress, crazes (microcracks) are formed, which spread to the rubber particles. The energy of the propagating crack is then transferred to the rubber particles along its path. A large number of cracks give the originally rigid material a laminated structure. The formation of each lamella contributes to the consumption of energy and thus to an increase in elongation at break. Polystyrene homo-polymers deform under an applied force until they break. Styrene-butadiene co-polymers do not break at this point, but begin to flow, strain-harden, and only break at much higher elongation.
With a high proportion of polybutadiene, the effect of the two phases is reversed. Styrene-butadiene rubber behaves like an elastomer but can be processed like a thermoplastic.
Impact-resistant polystyrene (PS-I)
PS-I (impact-resistant polystyrene) consists of a continuous polystyrene matrix and a rubber phase dispersed therein. It is produced by polymerization of styrene in the presence of polybutadiene dissolved in styrene. Polymerization takes place simultaneously in two ways:
Graft copolymerization: The growing polystyrene chain reacts with a double bond of the polybutadiene. As a result, several polystyrene chains are attached to one polybutadiene.
In the figure, S represents the styrene repeat unit and B the butadiene repeat unit. However, the middle block often does not consist of a pure butadiene homo-polymer as depicted, but of a styrene-butadiene co-polymer:
SSSSSSSSSSSSSSSSSSSBBSBBSBSBBBBSBSSBBBSBSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
By using a statistical copolymer at this position, the polymer becomes less susceptible to cross-linking and flows better in the melt. For the production of SBS, styrene is first homopolymerized via anionic polymerization. Typically, an organometallic compound such as butyllithium is used as a catalyst. Butadiene is then added, followed by styrene once again. The catalyst remains active during the whole process (for which the chemicals used must be of high purity). The molecular weight distribution of the polymers is very narrow (with a polydispersity in the range of 1.05, the individual chains have very similar lengths). The length of the individual blocks can be adjusted by the ratio of catalyst to monomer. The size of the rubber sections, in turn, depends on the block length. The production of small structures (smaller than the wavelength of light) ensures transparency. In contrast to PS-I, however, the block copolymer does not form particles but has a lamellar structure.
Styrene-butadiene rubber
Styrene-butadiene rubber (SBR) is produced like PS-I by graft copolymerization, but with a lower styrene content. Styrene-butadiene rubber thus consists of a rubber matrix with a polystyrene phase dispersed therein. Unlike PS-I and SBC, it is not a thermoplastic, but an elastomer. Within the rubber phase, the polystyrene phase is assembled into domains. This causes physical cross-linking on a microscopic level. When the material is heated above the glass transition point, the domains disintegrate, the cross-linking is temporarily suspended and the material can be processed like a thermoplastic.
Acrylonitrile butadiene styrene
Acrylonitrile butadiene styrene (ABS) is a material that is stronger than pure polystyrene.
Others
SMA is a copolymer with maleic anhydride. Styrene can be copolymerized with other monomers; for example, divinylbenzene can be used for cross-linking the polystyrene chains to give the polymer used in solid-phase peptide synthesis. Styrene-acrylonitrile resin (SAN) has a greater thermal resistance than pure polystyrene.
Environmental issues
Production
Polystyrene foams are produced using blowing agents that form bubbles and expand the foam. In expanded polystyrene, these are usually hydrocarbons such as pentane, which may pose a flammability hazard in manufacturing or storage of newly manufactured material, but have relatively mild environmental impact. Extruded polystyrene is usually made with hydrofluorocarbons (HFC-134a), which have global warming potentials of approximately 1000–1300 times that of carbon dioxide. Packaging, particularly expanded polystyrene, is a contributor of microplastics from both land and maritime activities.
Environmental degradation
Polystyrene is not biodegradable, but it is susceptible to photo-oxidation. For this reason, commercial products contain light stabilizers.
Litter
Animals do not recognize polystyrene foam as an artificial material and may even mistake it for food.
Polystyrene foam blows in the wind and floats on water due to its low specific gravity. It can have serious effects on the health of birds and marine animals that swallow significant quantities. Juvenile rainbow trout exposed to polystyrene fragments show toxic effects in the form of substantial histomorphometrical changes.
Reducing
Restricting the use of foamed polystyrene takeout food packaging is a priority of many solid-waste environmental organisations. Efforts have been made to find alternatives to polystyrene, especially foam, in restaurant settings. The original impetus was to eliminate chlorofluorocarbons (CFCs), formerly a component of foam.
United States
In 1987, Berkeley, California, banned CFC food containers. The following year, Suffolk County, New York, became the first U.S. jurisdiction to ban polystyrene in general. However, legal challenges by the Society of the Plastics Industry kept the ban from going into effect, and it was ultimately shelved when the Republican and Conservative parties gained the majority of the county legislature. In the meantime, in 1988, Berkeley became the first U.S. city to ban all polystyrene foam food containers. As of 2006, about one hundred localities in the United States, including Portland, Oregon, and San Francisco, had some sort of ban on polystyrene foam in restaurants. For instance, in 2007 Oakland, California, required restaurants to switch to disposable food containers that would biodegrade if added to food compost. In 2013, San Jose reportedly became the largest city in the country to ban polystyrene foam food containers. Some communities have implemented wider polystyrene bans, such as Freeport, Maine, which did so in 1990.
On 1 July 2015, New York City became the largest city in the United States to attempt to prohibit the sale, possession, and distribution of single-use polystyrene foam (the initial decision was overturned on appeal). In San Francisco, supervisors approved the toughest ban on "Styrofoam" (EPS) in the US which went into effect 1 January 2017. The city's Department of the Environment can make exceptions for certain uses like shipping medicines at prescribed temperatures.
The U.S. Green Restaurant Association does not allow polystyrene foam to be used as part of its certification standard. Several green leaders, including the Dutch Ministry of the Environment, advise people to reduce their environmental harm by using reusable coffee cups.
In March 2019, Maryland banned polystyrene foam food containers, becoming the first state in the country to pass a food-container foam ban through the state legislature. Maine was the first state to officially get a foam food-container ban onto the books. In May 2019, Maryland Governor Hogan allowed the foam ban (House Bill 109) to become law without a signature, making Maryland the second state to have a food-container foam ban on the books but the first to have one take effect, on 1 July 2020.
In September 2020, the New Jersey state legislature voted to ban disposable foam food containers and cups made of polystyrene foam.
Outside the United States
China banned expanded polystyrene takeout/takeaway containers and tableware around 1999. However, compliance has been a problem and, in 2013, the Chinese plastics industry was lobbying for the ban's repeal.
India and Taiwan also banned polystyrene-foam food-service ware before 2007.
The government of Zimbabwe, through its Environmental Management Agency (EMA), banned polystyrene containers (popularly called 'kaylite' in the country), under Statutory Instrument 84 of 2012 (Plastic Packaging and Plastic Bottles) (Amendment) Regulations, 2012 (No 1.)
The city of Vancouver, Canada, has announced its Zero Waste 2040 plan in 2018. The city will introduce bylaw amendments to prohibit business license holders from serving prepared food in polystyrene foam cups and take-out containers, beginning 1 June 2019.
In 2019, the European Union voted to ban expanded polystyrene food packaging and cups, with the law officially going into effect in 2021.
Fiji passed the Environmental Management Bill in December 2020. Imports of polystyrene products were banned in January 2021.
Recycling
In general, polystyrene is not accepted in curbside collection recycling programs, and even where it is accepted it is often not actually separated and recycled. In Germany, polystyrene is collected as a consequence of the packaging law (Verpackungsverordnung) that requires manufacturers to take responsibility for recycling or disposing of any packaging material they sell.
Most polystyrene products are currently not recycled due to the lack of incentive to invest in the compactors and logistical systems required. Due to the low density of polystyrene foam, it is not economical to collect. However, if the waste material goes through an initial compaction process, the material changes density from typically 30 kg/m3 to 330 kg/m3 and becomes a recyclable commodity of high value for producers of recycled plastic pellets. Expanded polystyrene scrap can be easily added to products such as EPS insulation sheets and other EPS materials for construction applications; many manufacturers cannot obtain sufficient scrap because of collection issues. When it is not used to make more EPS, foam scrap can be turned into products such as clothes hangers, park benches, flower pots, toys, rulers, stapler bodies, seedling containers, picture frames, and architectural molding from recycled PS. As of 2016, around 100 tonnes of EPS are recycled every month in the UK.
Recycled EPS is also used in many metal casting operations. Rastra is made from EPS that is combined with cement to be used as an insulating amendment in the making of concrete foundations and walls. American manufacturers have produced insulating concrete forms made with approximately 80% recycled EPS since 1993.
Upcycling
A March 2022 joint study by scientists Sewon Oh and Erin Stache at Cornell University in Ithaca, New York found a new processing method of upcycling polystyrene to benzoic acid. The process involved irradiation of polystyrene with iron chloride and acetone under white light and oxygen for 20 hours. The scientists also demonstrated a similar scalable commercial process of upcycling polystyrene into valuable small-molecules (like benzoic acid) taking just a few hours.
Incineration
If polystyrene is properly incinerated at high temperatures (up to 1000 °C) and with plenty of air (14 m3/kg), the chemicals generated are water, carbon dioxide, and possibly small amounts of residual halogen compounds from flame retardants. If incineration is incomplete, there will also be leftover carbon soot and a complex mixture of volatile compounds. According to the American Chemistry Council, when polystyrene is incinerated in modern facilities, the final volume is 1% of the starting volume; most of the polystyrene is converted into carbon dioxide, water vapor, and heat. Because of the amount of heat released, it is sometimes used as a power source for steam or electricity generation.
When polystyrene was burned at temperatures of 800–900 °C (the typical range of a modern incinerator), the products of combustion consisted of "a complex mixture of polycyclic aromatic hydrocarbons (PAHs) from alkyl benzenes to benzoperylene. Over 90 different compounds were identified in combustion effluents from polystyrene." The American National Bureau of Standards Center for Fire Research found 57 chemical by-products released during the combustion of expanded polystyrene (EPS) foam.
Safety
Health
The American Chemistry Council, formerly known as the Chemical Manufacturers' Association, writes:
From 1999 to 2002, a comprehensive review of the potential health risks associated with exposure to styrene was conducted by a 12-member international expert panel selected by the Harvard Center for Risk Assessment. The scientists had expertise in toxicology, epidemiology, medicine, risk analysis, pharmacokinetics, and exposure assessment. The Harvard study reported that styrene is naturally present in trace quantities in foods such as strawberries, beef, and spices, and is naturally produced in the processing of foods such as wine and cheese. The study also reviewed all the published data on the quantity of styrene contributing to the diet due to migration of food packaging and disposable food contact articles, and concluded that risk to the general public from exposure to styrene from foods or food-contact applications (such as polystyrene packaging and foodservice containers) was at levels too low to produce adverse effects.
Polystyrene is commonly used in containers for food and drinks. The styrene monomer (from which polystyrene is made) is a suspected carcinogen. Styrene is "generally found in such low levels in consumer products that risks aren't substantial". Polystyrene that is used for food contact may not contain more than 1% (0.5% for fatty foods) of styrene by weight. Styrene oligomers in polystyrene containers used for food packaging have been found to migrate into the food. A Japanese study conducted on wild-type and AhR-null mice found that the styrene trimer, which the authors detected in cooked polystyrene container-packed instant foods, may increase thyroid hormone levels.
Whether polystyrene can be microwaved with food is controversial. Some containers may be safely used in a microwave, but only if labeled as such. Some sources suggest that foods containing carotene (vitamin A) or cooking oils must be avoided.
Because of the pervasive use of polystyrene, these serious health-related issues remain topical.
Fire hazards
Like other organic compounds, polystyrene is flammable. Polystyrene is classified according to DIN4102 as a "B3" product, meaning highly flammable or "Easily Ignited". As a consequence, although it is an efficient insulator at low temperatures, its use is prohibited in any exposed installations in building construction if the material is not flame-retardant. It must be concealed behind drywall, sheet metal, or concrete. Foamed polystyrene plastic materials have been accidentally ignited and caused huge fires and losses of life, for example at the Düsseldorf International Airport and in the Channel Tunnel (where polystyrene was inside a railway carriage that caught fire).
| Physical sciences | Polymers | Chemistry |
168503 | https://en.wikipedia.org/wiki/King%20cobra | King cobra | The king cobra (Ophiophagus hannah) is a species complex of snakes endemic to Asia. With an average length of 3.18 to 4 m (10.4 to 13.1 ft) and a record length of 5.85 m (19.2 ft), it is the world's longest venomous snake and among the heaviest. It belongs to the genus Ophiophagus and, despite its common name and some resemblance, is not phylogenetically a true cobra. Ranging from the Indian subcontinent through Southeast Asia to southern China, the king cobra is widely distributed, albeit not commonly seen.
Individuals show diversified colouration across the species' habitats, from black with white stripes to unbroken brownish grey. Following taxonomic re-evaluation, it is no longer considered the sole member of its genus but rather a species complex; these differences in pattern and other characters, spread across its large geographic range, may cause the genus to be split into at least four species.
It chiefly hunts other snakes, including those of its own kind, although lizards and rodents are occasional prey items. It is the only ophidian that constructs an above-ground nest for its eggs, which are purposefully and meticulously gathered and protected by the female throughout the incubation period. The typical threat display of this elapid includes spreading the neck flap, raising the head, hissing and sometimes charging. Capable of striking at considerable range and height and with an immense venom yield, this species can cause envenomation with rapid onset of neurotoxic and cytotoxic symptoms, requiring prompt antivenom administration. Despite its fearsome reputation, aggression toward humans usually arises only when an individual is inadvertently exposed or cornered; encounters, including negative interactions, happen by chance.
Threatened by habitat destruction, it has been listed as Vulnerable on the IUCN Red List since 2010. Regarded as the national reptile of India, it has an eminent position in the mythology and folk traditions of India, Bangladesh, Sri Lanka and Myanmar.
Etymology
The king cobra is also referred to by the common name "hamadryad", especially in older literature. Hamadryas hannah was the scientific name used by Danish naturalist Theodore Edward Cantor in 1836 who described four king cobra specimens, three captured in the Sundarbans and one in the vicinity of Kolkata. The origin of the species name hannah was not specified during description and has long been uncertain, but may potentially refer to Hannah Sarah Wallich, the eldest daughter of Cantor's uncle, botanist Nathaniel Wallich, who hosted Cantor during his studies in India.
Taxonomy
The genus Ophiophagus was proposed by Günther in 1864 in place of Hamadryas, as the genus Hamadryas was already used for the cracker butterflies. The name is derived from its propensity to eat snakes. Ophiophagus hannah was accepted as the valid name for the king cobra by Charles Mitchill Bogert in 1945 who argued that it differs significantly from Naja species.
It has been suggested that three more king cobra species exist in addition to O. hannah, namely the Sunda king cobra (O. bungarus), the Western Ghats king cobra (O. kaalinga) and the Luzon king cobra (O. salvatana). These distinct genetic lineages are geographically isolated and adapted to specific ecological regions.
Synonyms
In 1838, Cantor proposed the name Hamadryas ophiophagus for the king cobra and explained that it has dental features intermediate between the genera Naja and Bungarus.
Naia vittata proposed by Walter Elliot in 1840 was a king cobra caught offshore near Chennai that was floating in a basket. This provenance is disputed, as wild king cobras have never occurred near Chennai, and an analysis of this specimen has found it to be more similar to the northern king cobra.
Hamadryas elaps proposed by Albert Günther in 1858 were king cobra specimens from the Philippines and Borneo. Günther considered both N. bungarus and N. vittata a variety of H. elaps. Naja ingens proposed by Alexander Willem Michiel van Hasselt in 1882 was a king cobra captured near Tebing Tinggi in northern Sumatra.
The earliest scientific name for the king cobra was Naja bungaroides, given by Friedrich Boie in 1828 based on a juvenile specimen from Java. This description was improperly done, leaving it a nomen nudum at the time. However, Johann Georg Wagler validated the name in 1830 with a sufficient diagnosis, and also proposed a new genus for it, Hoplocephalus. In 1837, Hermann Schlegel used the name Naja bungaroides for his description of the Australian broad-headed snake, which was later reclassified into Wagler's Hoplocephalus, and used the species name Naja bungarus for the king cobra. Since then, the species name Naja/Hoplocephalus bungaroides, originally coined for the king cobra and improperly assigned to the broad-headed snake, became conflated with the broad-headed snake and used as the type species of Hoplocephalus, while the species name Naja bungarus was treated as a junior synonym of the king cobra (until its revival as the species name for the Sunda king cobra in 2024). This longstanding discrepancy, which breaks the principle of priority, was overlooked for nearly two centuries and only discovered in 2024. Due to the long presence of the names Ophiophagus hannah and Hoplocephalus bungaroides in the literature, which would be upended if these two species were reclassified based on this issue, it was decided to maintain the longstanding scientific names for both taxa and designate a new, accurate type specimen for the broad-headed snake.
Evolution
A genetic analysis using cytochrome b, and a multigene analysis showed that the king cobra was an early offshoot of a genetic lineage giving rise to the mambas, rather than the Naja cobras.
A phylogenetic analysis of mitochondrial DNA showed that specimens from Surat Thani and Nakhon Si Thammarat Provinces in southern Thailand form a clade that is deeply genetically divergent from those in northern Thailand, which grouped with specimens from Myanmar and Guangdong in southern China.
Description
The king cobra's skin is olive green with black and white bands on the trunk that converge to the head. The head is covered by 15 drab-coloured and black-edged shields (large scales consistently present between individuals). The muzzle is rounded, and the tongue black. It has two fangs and 3–5 maxillary teeth in the upper jaw, and two rows of teeth in the lower jaw. The nostrils are between two shields. The large eyes have a golden iris and round pupils. Its hood is oval shaped and covered with olive green smooth scales and two black spots between the two lowest scales. Its cylindrical tail is yellowish green above and marked with black.
It has a pair of large occipital scales on top of the head, 17 to 19 rows of smooth oblique scales on the neck, and 15 rows on the body. Juveniles are black with chevron shaped white, yellow or buff bars that point towards the head.
Adult king cobras are 3.18 to 4 m (10.4 to 13.1 ft) long. The longest known individual measured 5.85 m (19.2 ft). Ventral scales are uniformly oval shaped. Dorsal scales are placed in an oblique arrangement.
The king cobra is sexually dimorphic: males are larger and paler, in particular during the breeding season. Males captured in Kerala were longer and heavier than the females captured.
The largest known king cobra was captured in Thailand.
It differs from other cobra species in size and hood shape: it is larger and has a narrower and longer stripe on the neck.
Distribution and habitat
The king cobra has a wide distribution throughout tropical Asia. It occurs from the Terai in India and southern Nepal to the Brahmaputra River basin in Bhutan and northeast India, down to Bangladesh, Myanmar, southern China, Cambodia, Thailand, Laos and Vietnam, and in the maritime Southeast Asian countries of Malaysia, Singapore, Indonesia and the Philippines.
In northern India, it has been recorded in Garhwal and Kumaon, and in the Sivalik hills and terai regions of Uttarakhand and Uttar Pradesh. In northeast India, the king cobra has been recorded in northern West Bengal, Sikkim, Assam, Meghalaya, Arunachal Pradesh, Nagaland, Manipur and Mizoram.
In the Eastern Ghats, it occurs from Tamil Nadu and Andhra Pradesh to coastal Odisha, and also in Bihar and southern West Bengal, especially the Sundarbans. In the Western Ghats, it was recorded in Kerala, Karnataka and Maharashtra, and also in Gujarat. It also occurs on Baratang Island in the Great Andaman chain. It may have reached the furthest west of its distributional range in extreme western India and eastern Pakistan, in the vicinity of Lahore and Palanpur. These populations have sometimes been thought to be the result of introduction by snake charmers or transport along rivers, but are now more likely considered natural populations. However, it remains uncertain whether any populations persist there.
Behaviour and ecology
Like other snakes, a king cobra receives chemical information via its forked tongue, which picks up scent particles and transfers them to a sensory receptor (Jacobson's organ) located in the roof of its mouth.
Following envenomation, it swallows its prey whole. Because of its flexible jaws, it can swallow prey much larger than its head. It is considered diurnal because it hunts during the day, though it has also, rarely, been seen at night.
Diet
The king cobra is an apex predator and dominant over all other snakes except large pythons. Its diet consists primarily of other snakes and lizards, including Indian cobra, banded krait, rat snake, pythons, green whip snake, keelback, banded wolf snake and Blyth's reticulated snake.
It also hunts Malabar pit viper and hump-nosed pit viper by following their odour trails. In Singapore, one was observed swallowing a clouded monitor.
When food is scarce, it also feeds on other small vertebrates, such as birds and lizards. In some cases, the cobra constricts its prey using its muscular body, though this is uncommon. After a large meal, it may go for many months without another one because of its slow metabolic rate.
Antipredator behavior
The king cobra is not considered aggressive. It usually avoids humans and slinks off when disturbed, but is known to aggressively defend incubating eggs and attack intruders rapidly. When alarmed, it raises the front part of its body, extends the hood, shows the fangs and hisses loudly.
Wild king cobras encountered in Singapore appeared to be placid, but reared up and struck in self defense when cornered.
The king cobra can be easily irritated by closely approaching objects or sudden movements. When it raises its body, the king cobra can still move forward to strike at a long distance, and people may misjudge the safe zone. It can deliver multiple bites in a single attack.
The hiss of the king cobra is a much lower pitch than many other snakes and many people thus liken its call to a "growl" rather than a hiss. While the hisses of most snakes are of a broad-frequency span ranging from roughly 3,000 to 13,000 Hz with a dominant frequency near 7,500 Hz, king cobra growls consist solely of frequencies below 2,500 Hz, with a dominant frequency near 600 Hz, a much lower-pitched frequency closer to that of a human voice. Comparative anatomical morphometric analysis has led to a discovery of tracheal diverticula that function as low-frequency resonating chambers in king cobra and its prey, the rat snake, both of which can make similar growls.
Reproduction
The female is gravid for 50 to 59 days.
The king cobra is the only snake that builds a nest, using dry leaf litter, from late March to late May. Most nests are located at the base of trees. They consist of several layers, raised in the centre and widest at the base, and mostly have a single chamber, into which the female lays her eggs.
Clutch size ranges from 7 to 43 eggs, with 6 to 38 eggs hatching after incubation periods of 66 to 105 days. Temperature inside nests is not steady but varies with elevation. Females stay by their nests for between two and 77 days.
The king cobra was shown to be capable of facultative parthenogenesis. The parthenogenetic mechanism appears to be a variation of meiosis referred to as terminal fusion automixis in which there is fusion of the meiotic products formed at the anaphase II stage of meiosis.
The venom of hatchlings is as potent as that of the adults. They may be brightly marked, but these colours often fade as they mature. They are alert and nervous, being highly aggressive if disturbed.
The average lifespan of a wild king cobra is about 20 years.
Venom
Composition
Venom of the king cobra, produced by the postorbital venom glands, consists primarily of three-finger toxins (3FTx) and snake venom metalloproteinases (SVMPs).
Of all the 3FTx, alpha-neurotoxins are the predominant and most lethal components, while cytotoxins and beta-cardiotoxins also exhibit toxicological activities. The cytotoxicity of its venom reportedly varies significantly depending on the age and locality of an individual. Clinical cardiotoxicity is not widely observed, nor is nephrotoxicity present among patients bitten by this species, presumably due to the low abundance of these toxins.
SVMPs are the second most abundant protein family in the king cobra's venom, accounting for 11.9% to 24.4% of total venom proteins. This abundance is much higher than in most cobras, where it is usually less than 1%. This protein family includes principal toxins responsible for vasculature damage and interference with haemostasis, contributing to the bleeding and coagulopathy caused by envenomation of vipers. While such haemorrhagins have been isolated from the king cobra's venom, they induce species-sensitive haemorrhagic and lethal activities in rabbits and hares, with minimal effects on mice. The clinical pathophysiology of the king cobra's SVMPs has yet to be well studied, although their substantial quantity suggests involvement in tissue damage and necrosis through inflammatory and proteolytic activities, which are instrumental for foraging and digestive purposes.
Ohanin, a minor vespryn protein component specific to this species, causes hypolocomotion and hyperalgesia in experimental mice. It is believed that it contributes to neurotoxicity on the central nervous system of the victim.
Clinical management
A king cobra's bite, and subsequent envenomation, is an immediate medical emergency in humans or domesticated animals, as, if not treated as soon as possible, death can occur in as little as 30 minutes. Local symptoms include dusky discolouration of skin, edema and pain; in severe cases, swelling extends proximally, with necrosis and tissue sloughing that may require amputation. Onset of general symptoms follows while the venom is targeting the victim's central nervous system, resulting in blurred vision, vertigo, drowsiness, and eventual paralysis. If not treated promptly, it may progress to cardiovascular collapse and, subsequently, coma. Death soon follows due to respiratory failure, among other simultaneous and varied system and organ failures.
Polyvalent antivenom of equine origin is produced by Haffkine Institute and King Institute of Preventive Medicine and Research in India.
A polyvalent antivenom produced by the Thai Red Cross Society can effectively neutralise venom of the king cobra. In India and Thailand, a concoction of turmeric (Curcuma longa) and other medically relevant herbs is reported to create strong resilience against the venom of the king cobra when ingested. Proper and immediate treatment is critical to avoid death. Successful precedents include a patient who recovered and was discharged in 10 days after being treated with the appropriate antivenom and inpatient care.
It can deliver up to 420 mg of venom in dry weight (400–600 mg overall) per bite, with a toxicity in mice of 1.28 mg/kg through intravenous injection, 1.5 to 1.7 mg/kg through subcutaneous injection, and 1.644 mg/kg through intraperitoneal injection. For research purposes, up to 1 g of venom has been obtained through milking.
Relationship with humans
Conservation
In Southeast Asia, the king cobra is threatened foremost by habitat destruction owing to deforestation and expansion of agricultural land. It is also threatened by wildlife smuggling and by poaching; poached snakes are sold as bushmeat, turned into snake leather, or used in traditional Chinese medicine.
The king cobra is listed in CITES Appendix II. It is protected in China and Vietnam.
In India, it is placed under Schedule II of Wildlife Protection Act, 1972. Killing a king cobra is punished with imprisonment of up to six years. In the Philippines, king cobras (locally known as banakon) are included under the list of threatened species in the country. It is protected under the Wildlife Resources Conservation and Protection Act (Republic Act No. 9147), which criminalises the killing, trade, and consumption of threatened species with certain exceptions (like indigenous subsistence hunting or immediate threats to human life), with a maximum penalty of two years imprisonment and a fine of ₱20,000.
Cultural significance
The king cobra has an eminent position in the mythology and folklore of India, Bangladesh, Sri Lanka and Myanmar.
A ritual in Myanmar involves a king cobra and a female snake charmer. The charmer is a priestess who is usually tattooed with three pictograms and kisses the snake on the top of its head at the end of the ritual.
Members of the Pakokku clan tattoo their upper bodies with ink mixed with cobra venom in a weekly inoculation that they believe protects them from the snake, though no scientific evidence supports this. The king cobra is regarded as the national reptile of India. In India, the king cobra is believed to possess exceptional memory; according to a myth, an image of the killer of a king cobra stays in the eyes of the snake and is later picked up by the snake's partner, who uses it to hunt down the killer for revenge. Because of this myth, whenever a cobra is killed, especially in India, the head, if not the entire body, is either crushed or burned to destroy the eyes completely.
| Biology and health sciences | Snakes | Animals |
168506 | https://en.wikipedia.org/wiki/Esophagus | Esophagus | The esophagus (American English) or oesophagus (British English; archaic spelling œsophagus; plural: esophagi or esophaguses; see spelling differences), colloquially known also as the food pipe, food tube, or gullet, is an organ in vertebrates through which food passes, aided by peristaltic contractions, from the pharynx to the stomach. The esophagus is a fibromuscular tube, about 25 cm (10 in) long in adults, that travels behind the trachea and heart, passes through the diaphragm, and empties into the uppermost region of the stomach. During swallowing, the epiglottis tilts backwards to prevent food from entering the larynx and lungs. The word oesophagus is from Ancient Greek οἰσοφάγος (oisophágos), from οἴσω (oísō), future form of φέρω (phérō, "I carry") + ἔφαγον (éphagon, "I ate").
The wall of the esophagus from the lumen outwards consists of mucosa, submucosa (connective tissue), layers of muscle fibers between layers of fibrous tissue, and an outer layer of connective tissue. The mucosa is a stratified squamous epithelium of around three layers of squamous cells, which contrasts to the single layer of columnar cells of the stomach. The transition between these two types of epithelium is visible as a zig-zag line. Most of the muscle is smooth muscle although striated muscle predominates in its upper third. It has two muscular rings or sphincters in its wall, one at the top and one at the bottom. The lower sphincter helps to prevent reflux of acidic stomach content. The esophagus has a rich blood supply and venous drainage. Its smooth muscle is innervated by involuntary nerves (sympathetic nerves via the sympathetic trunk and parasympathetic nerves via the vagus nerve) and in addition voluntary nerves (lower motor neurons) which are carried in the vagus nerve to innervate its striated muscle.
The esophagus passes through the thoracic cavity and the diaphragm into the stomach.
The esophagus may be affected by gastric reflux, cancer, prominent dilated blood vessels called varices that can bleed heavily, tears, constrictions, and disorders of motility. Diseases may cause difficulty swallowing (dysphagia), painful swallowing (odynophagia), chest pain, or no symptoms at all. Clinical investigations include X-rays taken while swallowing barium sulfate, endoscopy, and CT scans. Surgically, the esophagus is difficult to access, in part due to its position between critical organs and directly between the sternum and spinal column.
Structure
The esophagus is one of the upper parts of the digestive system. There are taste buds on its upper part. It begins at the back of the mouth, passing downward through the rear part of the mediastinum, through the diaphragm, and into the stomach. In humans, the esophagus generally starts around the level of the sixth cervical vertebra, behind the cricoid cartilage, enters the diaphragm at about the level of the tenth thoracic vertebra, and ends at the cardia of the stomach, at the level of the eleventh thoracic vertebra. The esophagus is usually about 25 cm (10 in) in length.
Many blood vessels serve the esophagus, with blood supply varying along its course. The upper parts of the esophagus and the upper esophageal sphincter receive blood from the inferior thyroid artery, the parts of the esophagus in the thorax from the bronchial arteries and branches directly from the thoracic aorta, and the lower parts of the esophagus and the lower esophageal sphincter receive blood from the left gastric artery and the left inferior phrenic artery. The venous drainage also differs along the course of the esophagus. The upper and middle parts of the esophagus drain into the azygos and hemiazygos veins, and blood from the lower part drains into the left gastric vein. All these veins drain into the superior vena cava, with the exception of the left gastric vein, which is a branch of the portal vein. Lymphatically, the upper third of the esophagus drains into the deep cervical lymph nodes, the middle into the superior and posterior mediastinal lymph nodes, and the lower esophagus into the gastric and celiac lymph nodes. This is similar to the lymphatic drainage of the abdominal structures that arise from the foregut, which all drain into the celiac nodes.
Position
The upper esophagus lies at the back of the mediastinum behind the trachea, adjoining along the tracheoesophageal stripe, and in front of the erector spinae muscles and the vertebral column. The lower esophagus lies behind the heart and curves in front of the thoracic aorta. From the bifurcation of the trachea downwards, the esophagus passes behind the right pulmonary artery, left main bronchus, and left atrium. At this point, it passes through the diaphragm.
The thoracic duct, which drains the majority of the body's lymph, passes behind the esophagus, curving from lying behind the esophagus on the right in the lower part of the esophagus, to lying behind the esophagus on the left in the upper esophagus. The esophagus also lies in front of parts of the hemiazygos veins and the intercostal veins on the right side. The vagus nerve divides and covers the esophagus in a plexus.
Constrictions
The esophagus has four points of constriction. When a corrosive substance or a solid object is swallowed, it is most likely to lodge at and damage one of these four points. These constrictions arise from particular structures that compress the esophagus. The constrictions are:
At the start of the esophagus, where the laryngopharynx joins the esophagus, behind the cricoid cartilage
Where it is crossed on the front by the aortic arch in the superior mediastinum
Where the esophagus is compressed by the left main bronchus in the posterior mediastinum
The esophageal hiatus, where it passes through the diaphragm in the posterior mediastinum
Sphincters
The esophagus is surrounded at the top and bottom by two muscular rings, known respectively as the upper esophageal sphincter and the lower esophageal sphincter. These sphincters act to close the esophagus when food is not being swallowed. The upper esophageal sphincter is an anatomical sphincter, which is formed by the lower portion of the inferior pharyngeal constrictor, also known as the cricopharyngeal sphincter due to its relation with cricoid cartilage of the larynx anteriorly. However, the lower esophageal sphincter is not an anatomical but rather a functional sphincter, meaning that it acts as a sphincter but does not have a distinct thickening like other sphincters.
The upper esophageal sphincter surrounds the upper part of the esophagus. It consists of skeletal muscle but is not under voluntary control. Opening of the upper esophageal sphincter is triggered by the swallowing reflex. The primary muscle of the upper esophageal sphincter is the cricopharyngeal part of the inferior pharyngeal constrictor.
The lower esophageal sphincter, or gastroesophageal sphincter, surrounds the lower part of the esophagus at the junction between the esophagus and the stomach. It is also called the cardiac sphincter or cardioesophageal sphincter, named from the adjacent part of the stomach, the cardia. Dysfunction of the gastroesophageal sphincter causes gastroesophageal reflux, which causes heartburn, and, if it happens often enough, can lead to gastroesophageal reflux disease, with damage of the esophageal mucosa.
Nerve supply
The esophagus is innervated by the vagus nerve and the cervical and thoracic sympathetic trunk. The vagus nerve has a parasympathetic function, supplying the muscles of the esophagus and stimulating glandular contraction. Two sets of nerve fibers travel in the vagus nerve to supply the muscles. The upper striated muscle, and upper esophageal sphincter, are supplied by neurons with bodies in the nucleus ambiguus, whereas fibers that supply the smooth muscle and lower esophageal sphincter have bodies situated in the dorsal motor nucleus. The vagus nerve plays the primary role in initiating peristalsis. The sympathetic trunk has a sympathetic function. It may enhance the function of the vagus nerve, increasing peristalsis and glandular activity, and causing sphincter contraction. In addition, sympathetic activation may relax the muscle wall and cause blood vessel constriction. Sensation along the esophagus is supplied by both nerves, with gross sensation being passed in the vagus nerve and pain passed up the sympathetic trunk.
Gastroesophageal junction
The gastroesophageal junction (also known as the esophagogastric junction) is the junction between the esophagus and the stomach, at the lower end of the esophagus. The pink color of the esophageal mucosa contrasts with the deeper red of the gastric mucosa, and the mucosal transition can be seen as an irregular zig-zag line, often called the z-line. Histological examination reveals an abrupt transition between the stratified squamous epithelium of the esophagus and the simple columnar epithelium of the stomach. Normally, the cardia of the stomach is immediately distal to the z-line, and the z-line coincides with the upper limit of the gastric folds of the cardia; however, when the anatomy of the mucosa is distorted in Barrett's esophagus, the true gastroesophageal junction can be identified by the upper limit of the gastric folds rather than the mucosal transition. The functional lower oesophageal sphincter is generally situated below the z-line.
Microanatomy
The human esophagus has a mucous membrane consisting of a tough stratified squamous epithelium without keratin, a smooth lamina propria, and a muscularis mucosae. The epithelium of the esophagus has a relatively rapid turnover and serves a protective function against the abrasive effects of food. In many animals, the epithelium contains a layer of keratin, representing a coarser diet. There are two types of glands, with mucus-secreting esophageal glands being found in the submucosa and esophageal cardiac glands, similar to cardiac glands of the stomach, located in the lamina propria and most frequent in the terminal part of the organ. The mucus from the glands gives a good protection to the lining. The submucosa also contains the submucosal plexus, a network of nerve cells that is part of the enteric nervous system.
The muscular layer of the esophagus has two types of muscle. The upper third of the esophagus contains striated muscle, the lower third contains smooth muscle, and the middle third contains a mixture of both. Muscle is arranged in two layers: one in which the muscle fibers run longitudinal to the esophagus, and the other in which the fibers encircle the esophagus. These are separated by the myenteric plexus, a tangled network of nerve fibers involved in the secretion of mucus and in peristalsis of the smooth muscle of the esophagus. The outermost layer of the esophagus is the adventitia in most of its length, with the abdominal part being covered in serosa. This makes it distinct from many other structures in the gastrointestinal tract that only have a serosa.
Development
In early embryogenesis, the esophagus develops from the endodermal primitive gut tube. The ventral part of the embryo abuts the yolk sac. During the second week of embryological development, as the embryo grows, it begins to surround parts of the sac. The enveloped portions form the basis for the adult gastrointestinal tract. The sac is surrounded by a network of vitelline arteries. Over time, these arteries consolidate into the three main arteries that supply the developing gastrointestinal tract: the celiac artery, superior mesenteric artery, and inferior mesenteric artery. The areas supplied by these arteries are used to define the midgut, hindgut and foregut.
The surrounded sac becomes the primitive gut. Sections of this gut begin to differentiate into the organs of the gastrointestinal tract, such as the esophagus, stomach, and intestines. The esophagus develops as part of the foregut tube. The innervation of the esophagus develops from the pharyngeal arches.
Function
Swallowing
Food is ingested through the mouth and when swallowed passes first into the pharynx and then into the esophagus. The esophagus is thus one of the first components of the digestive system and the gastrointestinal tract. After food passes through the esophagus, it enters the stomach. When food is being swallowed, the epiglottis moves backward to cover the larynx, preventing food from entering the trachea. At the same time, the upper esophageal sphincter relaxes, allowing a bolus of food to enter. Peristaltic contractions of the esophageal muscle push the food down the esophagus. These rhythmic contractions occur both as a reflex response to food that is in the mouth, and also as a response to the sensation of food within the esophagus itself. Along with peristalsis, the lower esophageal sphincter relaxes.
Reducing gastric reflux
The stomach produces gastric acid, a strongly acidic mixture consisting of hydrochloric acid (HCl) and potassium and sodium salts to enable food digestion. Constriction of the upper and lower esophageal sphincters helps to prevent reflux (backflow) of gastric contents and acid into the esophagus, protecting the esophageal mucosa. The acute angle of His and the lower crura of the diaphragm also help this sphincteric action.
Gene and protein expression
About 20,000 protein-coding genes are expressed in human cells and nearly 70% of these genes are expressed in the normal esophagus. Some 250 of these genes are more specifically expressed in the esophagus with less than 50 genes being highly specific. The corresponding esophagus-specific proteins are mainly involved in squamous differentiation such as keratins KRT13, KRT4 and KRT6C. Other specific proteins that help lubricate the inner surface of esophagus are mucins such as MUC21 and MUC22. Many genes with elevated expression are also shared with skin and other organs that are composed of squamous epithelia.
Clinical significance
The main conditions affecting the esophagus are described here. For a more complete list, see esophageal disease.
Inflammation
Inflammation of the esophagus is known as esophagitis. Reflux of gastric acids from the stomach, infection, substances ingested (for example, corrosives), some medications (such as bisphosphonates), and food allergies can all lead to esophagitis. Esophageal candidiasis is an infection of the yeast Candida albicans that may occur when a person is immunocompromised. The causes of some forms of esophagitis, such as eosinophilic esophagitis, are not well characterized, but may include Th2-mediated atopies or genetic factors. There appear to be correlations between eosinophilic esophagitis, asthma (itself with an eosinophilic component), eczema, and allergic rhinitis, though it is not clear whether these conditions contribute to eosinophilic esophagitis or vice versa, or if they are symptoms of mutual underlying factors. Esophagitis can cause painful swallowing and is usually treated by managing its cause, such as managing reflux or treating infection.
Barrett's esophagus
Prolonged esophagitis, particularly from gastric reflux, is one factor thought to play a role in the development of Barrett's esophagus. In this condition, there is metaplasia of the lining of the lower esophagus, which changes from stratified squamous epithelia to simple columnar epithelia. Barrett's esophagus is thought to be one of the main contributors to the development of esophageal cancer.
Cancer
There are two main types of cancer of the esophagus. Squamous cell carcinoma is a carcinoma that can occur in the squamous cells lining the esophagus. This type is much more common in China and Iran. The other main type is an adenocarcinoma that occurs in the glands or columnar tissue of the esophagus. This is most common in developed countries in those with Barrett's esophagus, and occurs in the cuboidal cells.
In its early stages, esophageal cancer may not have any symptoms at all. When severe, esophageal cancer may eventually cause obstruction of the esophagus, making swallowing of any solid foods very difficult and causing weight loss. The progress of the cancer is staged using a system that measures how far into the esophageal wall the cancer has invaded, how many lymph nodes are affected, and whether there are any metastases in different parts of the body. Esophageal cancer is often managed with radiotherapy and chemotherapy, and may also be managed by partial surgical removal of the esophagus. Inserting a stent into the esophagus, or inserting a nasogastric tube, may also be used to ensure that a person is able to digest enough food and water. The prognosis for esophageal cancer remains poor, so palliative therapy may also be a focus of treatment.
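For illustration only, the staging logic described above (depth of wall invasion, nodal involvement, and distant metastasis) resembles TNM-style staging and can be sketched as a small record type. This is a minimal sketch under assumptions: the thresholds and groupings below are hypothetical placeholders, not the actual clinical staging tables.

```python
# Hypothetical sketch of TNM-style staging as described above;
# the bucketing rules are illustrative placeholders, not clinical rules.
from dataclasses import dataclass

@dataclass
class EsophagealCancerStage:
    invasion_depth: int    # T: 1 (innermost layers) .. 4 (adjacent structures)
    positive_nodes: int    # N: count of affected regional lymph nodes
    metastasis: bool       # M: distant metastases present?

    def tnm_summary(self) -> str:
        # Collapse the node count into a 0/1/2-style bucket
        # (the exact cutoffs here are assumptions for illustration).
        n = 0 if self.positive_nodes == 0 else 1 if self.positive_nodes <= 2 else 2
        m = 1 if self.metastasis else 0
        return f"T{self.invasion_depth} N{n} M{m}"

# Example: a tumour invading the muscle layer, one positive node, no metastasis
print(EsophagealCancerStage(invasion_depth=2, positive_nodes=1, metastasis=False).tnm_summary())
# -> "T2 N1 M0"
```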
Varices
Esophageal varices are swollen, twisted branches of the azygos vein in the lower third of the esophagus. These blood vessels anastomose (join up) with those of the portal vein when portal hypertension develops. These blood vessels are engorged more than normal, and in the worst cases may partially obstruct the esophagus. They develop as part of a collateral circulation that drains blood from the abdomen as a result of portal hypertension, usually caused by liver diseases such as cirrhosis. This collateral circulation arises because the lower part of the esophagus drains into the left gastric vein, which is a branch of the portal vein. Because of the extensive venous plexus that exists between this vein and other veins, if portal hypertension occurs, the direction of blood drainage in this vein may reverse, with blood draining from the portal venous system through the plexus. Veins in the plexus may engorge and lead to varices.
Esophageal varices often do not have symptoms until they rupture. A ruptured varix is considered a medical emergency because varices can bleed a lot. A bleeding varix may cause a person to vomit blood, or suffer shock. To deal with a ruptured varix, a band may be placed around the bleeding blood vessel, or a small amount of a clotting agent may be injected near the bleed. A surgeon may also try to use a small inflatable balloon to apply pressure to stop the wound. IV fluids and blood products may be given in order to prevent hypovolemia from excess blood loss.
Motility disorders
Several disorders affect the motility of food as it travels down the esophagus. This can cause difficult swallowing, called dysphagia, or painful swallowing, called odynophagia. Achalasia refers to a failure of the lower esophageal sphincter to relax properly, and generally develops later in life. This leads to progressive enlargement of the esophagus, and possibly eventual megaesophagus. Nutcracker esophagus refers to abnormally forceful peristaltic contractions that can make swallowing extremely painful. Diffuse esophageal spasm is a spasm of the esophagus that can be one cause of chest pain. Such referred pain to the wall of the upper chest is quite common in esophageal conditions. Sclerosis of the esophagus, such as with systemic sclerosis or in CREST syndrome, may cause hardening of the walls of the esophagus and interfere with peristalsis.
Malformations
Esophageal strictures are usually benign and typically develop after a person has had reflux for many years. Other strictures may include esophageal webs (which can also be congenital) and damage to the esophagus by radiotherapy, corrosive ingestion, or eosinophilic esophagitis. A Schatzki ring is fibrosis at the gastroesophageal junction. Strictures may also develop in chronic anemia and Plummer–Vinson syndrome.
Two of the most common congenital malformations affecting the esophagus are an esophageal atresia where the esophagus ends in a blind sac instead of connecting to the stomach; and an esophageal fistula – an abnormal connection between the esophagus and the trachea. Both of these conditions usually occur together. These are found in about 1 in 3500 births. Half of these cases may be part of a syndrome where other abnormalities are also present, particularly of the heart or limbs. The other cases occur singly.
Imaging
An X-ray of swallowed barium may be used to reveal the size and shape of the esophagus, and the presence of any masses. The esophagus may also be imaged using a flexible camera inserted into the esophagus, in a procedure called an endoscopy. If an endoscopy is used on the stomach, the camera will also have to pass through the esophagus. During an endoscopy, a biopsy may be taken. If cancer of the esophagus is being investigated, other methods, including a CT scan, may also be used.
History
The word esophagus (British English: oesophagus) comes from the Greek οἰσοφάγος (oisophágos), meaning gullet. It derives from two roots: οἴσω (oísō, "I will carry") and ἔφαγον (éphagon, "I ate"). The use of the word oesophagus has been documented in anatomical literature since at least the time of Hippocrates, who noted that "the oesophagus ... receives the greatest amount of what we consume." Its existence in other animals and its relationship with the stomach was documented by the Roman naturalist Pliny the Elder (AD 23–79), and the peristaltic contractions of the esophagus have been documented since at least the time of Galen.
The first attempts at surgery on the esophagus focused on the neck, and were conducted in dogs by Theodor Billroth in 1871. In 1877, Czerny carried out surgery on people. By 1908, an operation had been performed by Voeckler to remove the esophagus, and in 1933 the first surgical removal of parts of the lower esophagus (to control esophageal cancer) had been conducted.
The Nissen fundoplication, in which the stomach is wrapped around the lower esophageal sphincter to stimulate its function and control reflux, was first conducted by Rudolph Nissen in 1955.
Other animals
Vertebrates
In tetrapods, the pharynx is much shorter, and the esophagus correspondingly longer, than in fish. In the majority of vertebrates, the esophagus is simply a connecting tube, but in some birds, which regurgitate components to feed their young, it is extended towards the lower end to form a crop for storing food before it enters the true stomach. In ruminants, animals with four-chambered stomachs, a groove called the sulcus reticuli is often found in the esophagus, allowing milk to drain directly into the hind stomach, the abomasum. In the horse, the esophagus carries food to the stomach. A muscular ring, called the cardiac sphincter, connects the stomach to the esophagus. This sphincter is very well developed in horses. This, and the oblique angle at which the esophagus connects to the stomach, explains why horses cannot vomit. The esophagus is also the area of the digestive tract where horses may develop the condition known as choke.
The esophagus of snakes is remarkable for the distension it undergoes when swallowing prey.
In most fish, the esophagus is extremely short, primarily due to the length of the pharynx (which is associated with the gills). However, some fish, including lampreys, chimaeras, and lungfish, have no true stomach, so that the esophagus effectively runs from the pharynx directly to the intestine, and is therefore somewhat longer.
In many vertebrates, the esophagus is lined by stratified squamous epithelium without glands. In fish, the esophagus is often lined with columnar epithelium, and in amphibians, sharks and rays, the esophageal epithelium is ciliated, helping to wash food along, in addition to the action of muscular peristalsis. In addition, in the bat Plecotus auritus, fish and some amphibians, glands secreting pepsinogen or hydrochloric acid have been found.
The muscle of the esophagus in many mammals is initially striated but then becomes smooth muscle in the caudal third or so. In canines and ruminants, however, it is entirely striated to allow regurgitation to feed young (canines) or regurgitation to chew cud (ruminants). It is entirely smooth muscle in amphibians, reptiles and birds.
Contrary to popular belief, an adult human body would not be able to pass through the esophagus of a whale, which is generally narrow in diameter, although in larger baleen whales it can widen considerably when fully distended.
Invertebrates
A structure with the same name is often found in invertebrates, including molluscs and arthropods, connecting the oral cavity with the stomach. In terms of the digestive system of snails and slugs, the mouth opens into an esophagus, which connects to the stomach. Because of torsion, which is the rotation of the main body of the animal during larval development, the esophagus usually passes around the stomach, and opens into its back, furthest from the mouth. In species that have undergone de-torsion, however, the esophagus may open into the anterior of the stomach, which is the reverse of the usual gastropod arrangement. There is an extensive rostrum at the front of the esophagus in all carnivorous snails and slugs. In the freshwater snail species Tarebia granifera, the brood pouch is above the esophagus.
In the cephalopods, the brain often surrounds the esophagus.
| Biology and health sciences | Digestive system | null |
168509 | https://en.wikipedia.org/wiki/Constipation | Constipation | Constipation is a bowel dysfunction that makes bowel movements infrequent or hard to pass. The stool is often hard and dry. Other symptoms may include abdominal pain, bloating, and feeling as if one has not completely passed the bowel movement. Complications from constipation may include hemorrhoids, anal fissure or fecal impaction. The normal frequency of bowel movements in adults is between three per day and three per week. Babies often have three to four bowel movements per day while young children typically have two to three per day.
Constipation has many causes. Common causes include slow movement of stool within the colon, irritable bowel syndrome, and pelvic floor disorders. Underlying associated diseases include hypothyroidism, diabetes, Parkinson's disease, celiac disease, non-celiac gluten sensitivity, vitamin B12 deficiency, colon cancer, diverticulitis, and inflammatory bowel disease. Medications associated with constipation include opioids, certain antacids, calcium channel blockers, and anticholinergics. Of those taking opioids about 90% develop constipation. Constipation is more concerning when there is weight loss or anemia, blood is present in the stool, there is a history of inflammatory bowel disease or colon cancer in a person's family, or it is of new onset in someone who is older.
Treatment of constipation depends on the underlying cause and the duration that it has been present. Measures that may help include drinking enough fluids, eating more fiber, consuming honey, and exercising. If these are not effective, laxatives of the bulk-forming agent, osmotic agent, stool softener, or lubricant type may be recommended. Stimulant laxatives are generally reserved for when other types are not effective. Other treatments may include biofeedback or, in rare cases, surgery.
In the general population, rates of constipation are 2–30 percent. Among elderly people living in a care home, the rate of constipation is 50–75 percent. People in the United States spend substantial sums on medications for constipation each year.
Definition
Constipation is a symptom, not a disease. Most commonly, constipation is thought of as infrequent bowel movements, usually fewer than 3 stools per week. However, people may have other complaints as well including:
Straining with bowel movements
Excessive time needed to pass a bowel movement
Hard stools
Pain with bowel movements secondary to straining
Abdominal pain
Abdominal bloating
Sensation of incomplete bowel evacuation
The Rome III Criteria are a set of symptoms that help standardize the diagnosis of constipation in various age groups. These criteria help physicians to better define constipation in a standardized manner.
Causes
The causes of constipation can be divided into congenital, primary, and secondary. The most common kind is primary and not life-threatening. It can also be divided by the age group affected such as children and adults.
Primary or functional constipation is defined by ongoing symptoms for greater than six months not due to an underlying cause such as medication side effects or an underlying medical condition. It is not associated with abdominal pain, thus distinguishing it from irritable bowel syndrome. It is the most common kind of constipation, and is often multifactorial. In adults, such primary causes include dietary choices, such as insufficient dietary fiber or fluid intake, and behavioral causes, such as decreased physical activity. In children, causes can include diets low in fiber and fluids, underlying medical conditions, and reluctance to go to the bathroom. In the elderly, common causes have been attributed to insufficient dietary fiber intake, inadequate fluid intake, decreased physical activity, side effects of medications, hypothyroidism, and obstruction by colorectal cancer. The evidence to support these factors, however, is poor.
Secondary causes include side effects of medications such as opiates, endocrine and metabolic disorders such as hypothyroidism, and obstruction such as from colorectal cancer or ovarian cancer. Celiac disease and non-celiac gluten sensitivity may also present with constipation. Cystocele can develop as a result of chronic constipation.
Diet
Constipation can be caused or exacerbated by a low-fiber diet, low liquid intake, or dieting. Dietary fiber helps to decrease colonic transit time, increases stool bulk, and simultaneously softens stool. Therefore, diets low in fiber can lead to primary constipation.
Medications
Many medications have constipation as a side effect. Some include (but are not limited to) opioids, diuretics, antidepressants, antihistamines, antispasmodics, anticonvulsants, tricyclic antidepressants, antiarrhythmics, beta-adrenoceptor antagonists, anti-diarrheals, 5-HT3 receptor antagonists such as ondansetron, and aluminum antacids. Certain calcium channel blockers, such as nifedipine and verapamil, can cause severe constipation due to dysfunction of motility in the rectosigmoid colon. Supplements such as calcium and iron can also have constipation as a notable side effect.
Medical conditions
Metabolic and endocrine problems which may lead to constipation include: pheochromocytoma, hypercalcemia, hypothyroidism, hyperparathyroidism, porphyria, chronic kidney disease, pan-hypopituitarism, diabetes mellitus, and cystic fibrosis. Constipation is also common in individuals with muscular and myotonic dystrophy.
Systemic diseases that may present with constipation include celiac disease and systemic sclerosis.
Constipation has a number of structural (mechanical, morphological, anatomical) causes, namely space-occupying lesions within the colon that stop the passage of stool, such as colorectal cancer, strictures, rectoceles, anal sphincter damage or malformation, and post-surgical changes. Extra-intestinal masses such as other malignancies can also lead to constipation through external compression.
Constipation also has neurological causes, including anismus, descending perineum syndrome, desmosis and Hirschsprung's disease. In infants, Hirschsprung's disease is the most common medical disorder associated with constipation. Anismus occurs in a small minority of persons with chronic constipation or obstructed defecation.
Spinal cord lesions and neurological disorders such as Parkinson's disease and pelvic floor dysfunction can also lead to constipation.
Chagas disease may cause constipation through the destruction of the myenteric plexus.
Psychological
Voluntary withholding of the stool is a common cause of constipation. The choice to withhold can be due to factors such as fear of pain, fear of public restrooms, or laziness. When a child holds in stool, a combination of encouragement, fluids, fiber, and laxatives may be useful to overcome the problem. Early intervention in withholding is important, as it can lead to anal fissures.
Congenital
A number of diseases present at birth can result in constipation in children. They are as a group uncommon with Hirschsprung's disease (HD) being the most common. There are also congenital structural anomalies that can lead to constipation, including anterior displacement of the anus, imperforate anus, strictures, and small left colon syndrome.
Pathophysiology
Diagnosis
The diagnosis is typically made based on a person's description of the symptoms. Bowel movements that are difficult to pass, very firm, or made up of small hard pellets (like those excreted by rabbits) qualify as constipation, even if they occur every day. Constipation is traditionally defined as three or fewer bowel movements per week. Other symptoms related to constipation can include bloating, distension, abdominal pain, headaches, a feeling of fatigue and nervous exhaustion, or a sense of incomplete emptying. Although constipation may be a diagnosis, it is typically viewed as a symptom that requires evaluation to discern a cause.
Description
Distinguishing between acute (days to weeks) and chronic (months to years) onset of constipation is important, because this information changes the differential diagnosis. This, in the context of accompanying symptoms, helps physicians discover the cause of constipation. People often describe their constipation as bowel movements that are difficult to pass, firm stool with a lumpy or hard consistency, and excessive straining during bowel movements. Bloating, abdominal distension, and abdominal pain often accompany constipation. Chronic constipation (symptoms present at least three days per month for more than three months) associated with abdominal discomfort is often diagnosed as irritable bowel syndrome (IBS) when no obvious cause is found.
Poor dietary habits, previous abdominal surgeries, and certain medical conditions can contribute to constipation. Diseases associated with constipation include hypothyroidism, certain types of cancer, and irritable bowel syndrome. Low fiber intake, inadequate amounts of fluids, poor ambulation or immobility, and medications can contribute to constipation. Once the presence of constipation has been identified based on the symptoms described above, the cause should then be determined.
Separating non-life-threatening from serious causes may be partly based on symptoms. For example, colon cancer may be suspected if a person has a family history of colon cancer, fever, weight loss, and rectal bleeding. Other alarming signs and symptoms include family or personal history of inflammatory bowel disease, age of onset over 50, change in stool caliber, nausea, vomiting, and neurological symptoms like weakness, numbness and difficulty urinating.
Examination
A physical examination should involve at least an abdominal exam and rectal exam. The abdominal exam may reveal an abdominal mass if there is significant stool burden, and may reveal abdominal discomfort. Rectal examination gives an impression of the anal sphincter tone and whether the lower rectum contains any feces. Rectal examination also gives information on the consistency of the stool, the presence of hemorrhoids or blood, and whether any perineal irregularities are present, including skin tags, fissures, or anal warts. Physical examination is done manually by a physician and is used to guide which diagnostic tests to order.
Diagnostic tests
Functional constipation is common and does not warrant diagnostic testing. Imaging and laboratory tests are typically recommended for those with alarm signs or symptoms.
The laboratory tests performed depend on the suspected underlying cause of the constipation. Tests may include a CBC (complete blood count), thyroid function tests, serum calcium, serum potassium, etc.
Abdominal X-rays are generally only performed if bowel obstruction is suspected; they may reveal extensive impacted fecal matter in the colon, and may confirm or rule out other causes of similar symptoms.
Colonoscopy may be performed if an abnormality in the colon like a tumor is suspected. Other tests rarely ordered include anorectal manometry, anal sphincter electromyography, and defecography.
Colonic propagating pressure wave sequences (PSs) are responsible for discrete movements of the bowel contents and are vital for normal defecation. Deficiencies in PS frequency, amplitude, and extent of propagation are all implicated in severe defecatory dysfunction (SDD). Mechanisms that can normalize these aberrant motor patterns may help rectify the problem. Recently the novel therapy of sacral nerve stimulation (SNS) has been utilized for the treatment of severe constipation.
Criteria
A diagnosis of functional constipation under the Rome III Criteria requires two or more of the following, present for the past three months, with symptom onset at least six months prior to diagnosis (an illustrative encoding of this rule is sketched after the list).
Straining during defecation for at least 25% of bowel movements
Lumpy or hard stools in at least 25% of defecations
Sensation of incomplete evacuation for at least 25% of defecations
Sensation of anorectal obstruction/blockage for at least 25% of defecations
Manual maneuvers to facilitate at least 25% of defecations
Fewer than 3 defecations per week
Loose stools are rarely present without the use of laxatives
There are insufficient criteria for irritable bowel syndrome
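As an illustration of how these criteria combine, the rule can be encoded as a single decision function. This is a minimal sketch, not clinical software; the data structure and field names are assumptions introduced here, and percentages refer to the fraction of defecations affected.

```python
# Illustrative encoding of the Rome III functional-constipation rule above.
# All field names are hypothetical; this is not a diagnostic tool.
from dataclasses import dataclass

@dataclass
class SymptomReport:
    straining_pct: float              # % of defecations with straining
    lumpy_hard_pct: float             # % with lumpy or hard stools
    incomplete_pct: float             # % with sensation of incomplete evacuation
    obstruction_pct: float            # % with sensation of anorectal blockage
    manual_maneuver_pct: float        # % requiring manual maneuvers
    defecations_per_week: float
    loose_stools_without_laxatives: bool  # loose stools present without laxative use
    meets_ibs_criteria: bool              # sufficient criteria for IBS
    months_symptomatic: float         # criteria fulfilled for this many months
    months_since_onset: float         # time since symptoms first began

def meets_rome_iii_constipation(r: SymptomReport) -> bool:
    core = [
        r.straining_pct >= 25,
        r.lumpy_hard_pct >= 25,
        r.incomplete_pct >= 25,
        r.obstruction_pct >= 25,
        r.manual_maneuver_pct >= 25,
        r.defecations_per_week < 3,
    ]
    return (
        sum(core) >= 2                           # two or more of the core criteria
        and not r.loose_stools_without_laxatives # loose stools rarely present
        and not r.meets_ibs_criteria             # insufficient criteria for IBS
        and r.months_symptomatic >= 3            # present for the past three months
        and r.months_since_onset >= 6            # onset at least six months ago
    )
```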
Prevention
Constipation is usually easier to prevent than to treat. Following the relief of constipation, maintenance with adequate exercise, fluid intake, and high-fiber diet is recommended.
Treatment
A limited number of cases require urgent medical intervention or will result in severe consequences.
The treatment of constipation should focus on the underlying cause if known. The National Institute for Health and Care Excellence (NICE) divides constipation in adults into two categories: chronic constipation of unknown cause, and constipation due to opiates.
In chronic constipation of unknown cause, the main treatment involves the increased intake of water and fiber (either dietary or as supplements). The routine use of laxatives or enemas is discouraged, as having bowel movements may come to be dependent upon their use.
Fiber supplements
Soluble fiber supplements such as psyllium are generally considered first-line treatment for chronic constipation, compared to insoluble fibers such as wheat bran. Side effects of fiber supplements include bloating, flatulence, diarrhea, and possible malabsorption of iron, calcium, and some medications. However, patients with opiate-induced constipation will likely not benefit from fiber supplements.
Laxatives
If laxatives are used, milk of magnesia or polyethylene glycol are recommended as first-line agents due to their low cost and safety. Stimulants should only be used if these are not effective. In cases of chronic constipation, polyethylene glycol appears superior to lactulose. Prokinetics may be used to improve gastrointestinal motility. A number of new agents have shown positive outcomes in chronic constipation; these include prucalopride and lubiprostone. Cisapride is widely available in third-world countries, but has been withdrawn in most of the West. It has not been shown to benefit constipation, while potentially causing cardiac arrhythmias and deaths.
Enemas
Enemas can be used to provide a form of mechanical stimulation. A large volume or high enema can be given to cleanse as much of the colon as possible of feces, and the solution administered commonly contains castile soap which irritates the colon's lining resulting in increased urgency to defecate. However, a low enema is generally useful only for stool in the rectum, not in the intestinal tract.
Physical intervention
Constipation that resists the above measures may require physical intervention such as manual disimpaction (the physical removal of impacted stool using the hands; see fecal impaction).
Regular exercise
Regular exercise can help improve chronic constipation.
Surgical intervention
In refractory cases, procedures can be performed to help relieve constipation. Sacral nerve stimulation has been demonstrated to be effective in a minority of cases. Colectomy with ileorectal anastomosis is another intervention performed only in patients known to have a slow colonic transit time and in whom a defecation disorder has either been treated or is not present. Because this is a major operation, side effects can include considerable abdominal pain, small bowel obstruction, and post-surgical infections. Furthermore, it has a very variable rate of success and is very case dependent.
Prognosis
Complications that can arise from constipation include hemorrhoids, anal fissures, rectal prolapse, and fecal impaction. Straining to pass stool may lead to hemorrhoids. In later stages of constipation, the abdomen may become distended, hard and diffusely tender. Severe cases ("fecal impaction" or malignant constipation) may exhibit symptoms of bowel obstruction (nausea, vomiting, tender abdomen) and encopresis, where soft stool from the small intestine bypasses the mass of impacted fecal matter in the colon.
Epidemiology
Constipation is the most common chronic gastrointestinal disorder in adults. Depending on the definition employed, it occurs in 2% to 20% of the population. It is more common in women, the elderly, and children. Specifically, constipation with no known cause affects females more often than males. The reason it occurs more frequently in the elderly is thought to be the increasing number of health problems as humans age, together with decreased physical activity.
12% of the population worldwide reports having constipation.
Chronic constipation accounts for 3% of all visits annually to pediatric outpatient clinics.
Constipation-related health care costs total $6.9 billion in the US annually.
More than four million Americans have frequent constipation, accounting for 2.5 million physician visits a year.
Around $725 million is spent on laxative products each year in America.
History
Since ancient times different societies have published medical opinions about how health care providers should respond to constipation in patients. In various times and places, doctors have made claims that constipation has all sorts of medical or social causes. Doctors in history have treated constipation in reasonable and unreasonable ways, including use of a spatula mundani.
After the advent of the germ theory of disease, the idea of "auto-intoxication" entered popular Western thought in a fresh way. Enemas as a scientific medical treatment and colon cleansing as an alternative medical treatment became more common in medical practice.
Since the 1700s in the West, there has been some popular thought that constipation reflects a moral failing such as gluttony or laziness.
Special populations
Children
Approximately 3% of children have constipation, with girls and boys being equally affected. With constipation accounting for approximately 5% of general pediatrician visits and 25% of pediatric gastroenterologist visits, the symptom carries a significant financial impact upon the healthcare system. While it is difficult to assess an exact age at which constipation most commonly arises, children frequently experience constipation in conjunction with life-changes. Examples include: toilet training, starting or transferring to a new school, and changes in diet. Especially in infants, changes in formula or transitioning from breast milk to formula can cause constipation. The majority of constipation cases are not tied to a medical disease, and treatment can be focused on simply relieving the symptoms.
Postpartum women
The six-week period after pregnancy is called the postpartum stage. During this time, women are at increased risk of being constipated. Multiple studies estimate the prevalence of constipation to be around 25% during the first 3 months. Constipation can cause discomfort for women, as they are still recovering from the delivery process, especially if they have had a perineal tear or undergone an episiotomy. Risk factors that increase the risk of constipation in this population include:
Damage to the levator ani muscles (pelvic floor muscles) during childbirth
Forceps-assisted delivery
Lengthy second stage of labor
Delivering a large child
Hemorrhoids
Hemorrhoids are common in pregnancy and also may get exacerbated when constipated. Anything that can cause pain with stooling (hemorrhoids, perineal tear, episiotomy) can lead to constipation because patients may withhold from having a bowel movement so as to avoid pain.
The pelvic floor muscles play an important role in helping pass a bowel movement. Injury to those muscles by some of the above risk factors (examples- delivering a large child, lengthy second stage of labor, forceps delivery) can result in constipation. Enemas may be administered during labor and these can also alter bowel movements in the days after giving birth. However, there is insufficient evidence to make conclusions about the effectiveness and safety of laxatives in this group of people.
| Biology and health sciences | Symptoms and signs | Health |
168536 | https://en.wikipedia.org/wiki/Bulimia%20nervosa | Bulimia nervosa | Bulimia nervosa, also known simply as bulimia, is an eating disorder characterized by binge eating (eating large quantities of food in a short period of time, often feeling out of control) followed by compensatory behaviors, such as vomiting, excessive exercise, or fasting to prevent weight gain.
Other efforts to lose weight may include the use of diuretics, stimulants, water fasting, or excessive exercise. Most people with bulimia are at a normal weight and have a higher risk for other mental disorders, such as depression, anxiety, borderline personality disorder, bipolar disorder, and problems with drugs or alcohol. There is also a higher risk of suicide and self-harm.
Bulimia is more common among those who have a close relative with the condition. The estimated percentage of risk due to genetics is between 30% and 80%. Other risk factors for the disease include psychological stress, cultural pressure to attain a certain body type, poor self-esteem, and obesity. Living in a culture that commercializes or glamorizes dieting, and having parental figures who fixate on weight, are also risks.
Diagnosis is based on a person's medical history; however, this is difficult, as people are usually secretive about their binge eating and purging habits. Further, the diagnosis of anorexia nervosa takes precedence over that of bulimia. Other similar disorders include binge eating disorder, Kleine–Levin syndrome, and borderline personality disorder.
Signs and symptoms
Bulimia typically involves rapid and out-of-control eating, which is followed by self-induced vomiting or other forms of purging. This cycle may be repeated several times a week or, in more serious cases, several times a day and may directly cause:
Dehydration
Electrolyte imbalance, which can lead to abnormal heart rhythms, cardiac arrest, and even death
Oral trauma: lacerations to the lining of the mouth or throat due to forced vomiting
Russell's sign: calluses on knuckles and back of hands due to repeated trauma from incisors
Swollen salivary glands (in the neck, under the jawline)
Gastrointestinal problems, like constipation and acid reflux
Constipation or diarrhea
Hypotension
Infertility and/or irregular menstrual cycles
Weight fluctuations
These are some of the many signs that may indicate whether someone has bulimia nervosa:
A fixation on the number of calories consumed
An extreme consciousness of and fixation on one's weight
Low self-esteem and/or self-harming
Suicidal tendencies
An irregular menstrual cycle in women
Regular trips to the bathroom, especially soon after eating
Depression, anxiety disorders, and sleep disorders
Frequent occurrences involving the consumption of abnormally large portions of food
The use of laxatives, diuretics, and diet pills
Compulsive or excessive exercise
Unhealthy/dry skin, hair, nails, and lips
Fatigue or exhaustion
As with many psychiatric illnesses, delusions can occur, in conjunction with other signs and symptoms, leaving the person with a false belief that is not ordinarily accepted by others.
People with bulimia nervosa may also exercise to a point that excludes other activities.
Interoceptive
People with bulimia exhibit several interoceptive deficits, in which one experiences impairment in recognizing and discriminating between internal sensations, feelings, and emotions. People with bulimia may also react negatively to somatic and affective states. Regarding interoceptive sensation, hyposensitive individuals may not detect normal feelings of fullness at the appropriate time while eating, and are prone to eating more calories in a short period of time as a result of this decreased sensitivity.
Examining bulimia from a neural basis also connects elements of interoception and emotion; notable overlaps occur in the medial prefrontal cortex, the anterior and posterior cingulate cortices, and the anterior insular cortex, which are linked to both interoception and emotional eating.
Related disorders
People with bulimia are at higher risk of having an affective disorder, such as depression or generalized anxiety disorder. One study found 70% had depression at some time in their lives (as opposed to 26% for adult females in the general population), rising to 88% for all affective disorders combined. Another study in the Journal of Affective Disorders found that of patients diagnosed with an eating disorder according to DSM-5 criteria, about 27% also suffered from bipolar disorder. Within this article, the majority of the patients were diagnosed with bulimia nervosa, the second most common condition reported being binge-eating disorder. Some individuals with anorexia nervosa exhibit episodes of bulimic tendencies through purging (either through self-induced vomiting or laxatives) as a way to quickly remove food from their system. There may be an increased risk for diabetes mellitus type 2. Bulimia also has negative effects on a person's teeth: acid passing through the mouth during frequent vomiting causes acid erosion, mainly on the posterior dental surfaces.
Research has shown that there is a relationship between bulimia and narcissism. According to a study by the Australian National University, vulnerable narcissists are more susceptible to eating disorders. This may stem from a childhood in which inner feelings and thoughts were minimized by parents, leading to "a high focus on receiving validation from others to maintain a positive sense of self".
The medical journal Borderline Personality Disorder and Emotion Dysregulation notes that a "substantial rate of patients with bulimia nervosa" also have borderline personality disorder.
A study by the Psychopharmacology Research Program of the University of Cincinnati College of Medicine "leaves little doubt that bipolar and eating disorders—particularly bulimia nervosa and bipolar II disorder—are related." The research shows that most clinical studies indicate that patients with bipolar disorder have higher rates of eating disorders, and vice versa. There is overlap in phenomenology, course, comorbidity, family history, and pharmacologic treatment response of these disorders. This is especially true of "eating dysregulation, mood dysregulation, impulsivity and compulsivity, craving for activity and/or exercise."
Studies have shown a relationship between bulimia's effect on metabolic rate and caloric intake with thyroid dysfunction.
Scientific research has shown that people suffering from bulimia have decreased volumes of brain matter, and that the abnormalities are reversible after long-term recovery.
Causes
Biological
As with anorexia nervosa, there is evidence of genetic predispositions contributing to the onset of this eating disorder. Abnormal levels of many hormones, notably serotonin, have been shown to be responsible for some disordered eating behaviors. Brain-derived neurotrophic factor (BDNF) is under investigation as a possible mechanism.
There is evidence that sex hormones may influence appetite and eating in women, and the onset of bulimia nervosa. Studies have shown that women with hyperandrogenism and polycystic ovary syndrome have a dysregulation of appetite, along with that of carbohydrates and fats. This dysregulation of appetite is also seen in women with bulimia nervosa. In addition, gene knockout studies in mice have shown that mice lacking the gene encoding estrogen receptors have decreased fertility due to ovarian dysfunction and dysregulation of androgen receptors. In humans, there is evidence of an association between polymorphisms in the ERβ (estrogen receptor β) gene and bulimia, suggesting a correlation between sex hormones and bulimia nervosa.
Bulimia has been compared to drug addiction, though the empirical support for this characterization is limited. However, people with bulimia nervosa may share dopamine D2 receptor-related vulnerabilities with those with substance use disorders.
Dieting, a common behaviour in bulimics, is associated with lower plasma tryptophan levels. Decreased tryptophan levels in the brain, and thus decreased synthesis of serotonin, such as via acute tryptophan depletion, increase bulimic urges in currently and formerly bulimic individuals within hours.
Abnormal blood levels of peptides important for the regulation of appetite and energy balance are observed in individuals with bulimia nervosa, but it remains unknown if this is a state or trait.
In recent years, evolutionary psychiatry, an emerging scientific discipline, has been studying mental disorders from an evolutionary perspective. Whether eating disorders, bulimia nervosa in particular, have evolutionary functions or are new, modern "lifestyle" problems is still debated.
Social
Media portrayals of an 'ideal' body shape are widely considered to be a contributing factor to bulimia. In a 1991 study by Weltzin, Hsu, Pollicle, and Kaye, it was stated that 19% of bulimics undereat, 37% of bulimics eat an average or normal amount of food, and 44% of bulimics overeat. A survey of 15- to 18-year-old high school girls in Nadroga, Fiji, found the self-reported incidence of purging rose from 0% in 1995 (a few weeks after the introduction of television in the province) to 11.3% in 1998. In addition, the suicide rate among people with bulimia nervosa is 7.5 times higher than in the general population.
When attempting to decipher the origin of bulimia nervosa in a cognitive context, Christopher Fairburn et al.'s cognitive-behavioral model is often considered the gold standard. Fairburn et al.'s model discusses the process by which an individual falls into the binge-purge cycle and thus develops bulimia. Fairburn et al. argue that extreme concern with weight and shape, coupled with low self-esteem, results in strict, rigid, and inflexible dietary rules. Accordingly, this leads to unrealistically restricted eating, which may in turn induce an eventual "slip" in which the individual commits a minor infraction of these strict and inflexible dietary rules. Moreover, the cognitive distortion due to dichotomous thinking leads the individual to binge. The binge subsequently triggers a perceived loss of control, prompting the individual to purge in the hope of counteracting the binge. However, Fairburn et al. assert the cycle repeats itself, and thus consider the binge-purge cycle to be self-perpetuating.
In contrast, Byrne and McLean's findings differed slightly from Fairburn et al.'s cognitive-behavioral model of bulimia nervosa in that the drive for thinness was the major cause of purging as a way of controlling weight. In turn, Byrne and McLean argued that this makes the individual vulnerable to binging, indicating that it is not a binge-purge cycle but rather a purge-binge cycle, in that purging comes before bingeing. Moreover, Fairburn et al.'s model is not necessarily applicable to every individual and is arguably reductionist: individuals differ from one another, and applying a single theory to a behavior as complex as bulimia may not be valid. In addition, the cognitive-behavioral model of bulimia nervosa is culturally bound, in that it may not be applicable to cultures outside of Western society. On balance, Fairburn et al.'s model, and more generally the cognitive explanation of bulimia nervosa, is more descriptive than explanatory, as it does not necessarily explain how bulimia arises. Furthermore, it is difficult to ascertain cause and effect; it may be that distorted eating leads to distorted cognition rather than vice versa.
A considerable amount of literature has identified a correlation between sexual abuse and the development of bulimia nervosa. The reported incidence of unwanted sexual contact is higher among those with bulimia nervosa than among those with anorexia nervosa.
When exploring the etiology of bulimia through a socio-cultural perspective, "thin ideal internalization" bears significant responsibility. Thin-ideal internalization is the extent to which individuals adapt to the societal ideals of attractiveness. Studies have shown that young women who read fashion magazines tend to have more bulimic symptoms than those who do not, further demonstrating the impact of media on the likelihood of developing the disorder. Individuals first accept and "buy into" the ideals, and then attempt to transform themselves in order to reflect the societal ideals of attractiveness. J. Kevin Thompson and Eric Stice claim that family, peers, and most evidently media reinforce the thin ideal, which may lead to an individual accepting and "buying into" it. In turn, Thompson and Stice assert that if the thin ideal is accepted, one may begin to feel uncomfortable with one's body shape or size, since it may not reflect the thin ideal set out by society. This discomfort may result in body dissatisfaction and a drive for thinness. Body dissatisfaction coupled with a drive for thinness is thought to promote dieting and negative affect, which could eventually lead to bulimic symptoms such as purging or bingeing. Binges lead to self-disgust, which causes purging to prevent weight gain.
Thompson and Stice's research investigated the thin-ideal internalization as a factor in bulimia nervosa, aiming to determine how and to what degree media affects it. They used randomized experiments (more specifically, programs teaching young women how to be more critical of media) to reduce thin-ideal internalization. The results showed that creating more awareness of the media's control of the societal ideal of attractiveness significantly reduced thin-ideal internalization; in other words, fewer thin-ideal images conveyed by the media resulted in less thin-ideal internalization. Thompson and Stice therefore concluded that media greatly affected the thin-ideal internalization. Papies showed that it is not the thin ideal itself, but rather the self-association with other persons of a certain weight, that decides how someone with bulimia nervosa feels: people who associate themselves with thin models adopt a positive attitude when they see thin models, while people who associate themselves with overweight models adopt a negative attitude. Moreover, such self-association with thinner people can be taught.
Diagnosis
The onset of bulimia nervosa is often during adolescence, between 13 and 20 years of age, and many cases have previously experienced obesity; many relapse in adulthood into episodic bingeing and purging even after initially successful treatment and remission. A lifetime prevalence of 0.5 percent and 0.9 percent for adults and adolescents, respectively, is estimated among the United States population. Bulimia nervosa may affect up to 1% of young women; 10 years after diagnosis, half will have recovered fully, a third will have recovered partially, and 10–20% will still have symptoms.
Adolescents with bulimia nervosa are more likely to have self-imposed perfectionism and compulsivity issues in eating compared to their peers. This means that the high expectations and unrealistic goals that these individuals set for themselves are internally motivated rather than by social views or expectations.
Criteria
Bulimia nervosa is diagnosed using the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). The diagnostic criteria include the following:
Recurrent episodes of binge eating
Recurrent inappropriate compensatory behavior to prevent weight gain, like self-induced vomiting, misuse of laxatives or other medications, fasting, or excessive exercise.
The binge eating and compensatory behaviors both occur at least once a week for three months
Self-evaluation is influenced by body shape and weight.
Other methods are also used to narrow down the diagnosis, including:
Physical exams: may include measuring height and weight, checking vital signs, checking skin and nails, and listening to the heart and lungs.
Lab tests: may include a complete blood count, tests to check electrolytes and protein, or a urinalysis.
Psychological evaluations: a therapist or mental health provider will likely inquire about the patient's thoughts, feelings, and eating habits, and may ask the patient to complete a questionnaire.
Treatment
There are two main types of treatment given to those with bulimia nervosa: psychopharmacological and psychosocial treatments.
Psychotherapy
Cognitive behavioral therapy (CBT) is considered the gold standard for the treatment of bulimia nervosa. This approach focuses on helping patients identify and challenge distorted, unhelpful thought patterns related to eating, food, weight, body image, and self-worth.
As a component of this therapy, patients keep a food journal, recording how much they eat and periods of vomiting, with the purpose of identifying and avoiding the emotional fluctuations that regularly bring on episodes of bulimia. CBT is particularly well suited to those with bulimia because it targets the binge-purge cycle, the hallmark of the disorder. People undergoing CBT who exhibit early behavioral changes are most likely to achieve the best treatment outcomes in the long run.
Researchers have also reported some positive outcomes for interpersonal psychotherapy and dialectical behavior therapy. These therapies have good outcomes for treating bulimia, especially in patients with emotional-regulation difficulties or interpersonal issues. While not as extensively researched as CBT, they can be beneficial when integrated into a comprehensive treatment plan.
For adolescents, family-based treatment (FBT) has been identified as an effective treatment. FBT involves the family in the treatment process, empowering parents to take an active role in helping their child recover from bulimia nervosa. This approach is particularly helpful for younger patients who are still living with their families.
The use of CBT has been shown to be quite effective for treating bulimia nervosa (BN) in adults, but little research has been done on effective treatments of BN for adolescents. Although CBT is seen as more cost-efficient and helps individuals with BN in self-guided care, FBT might be more helpful to younger adolescents who need more support and guidance from their families. Adolescents are at a stage where their brains are still quite malleable and developing gradually. Therefore, young adolescents with BN are less likely to realize the detrimental consequences of becoming bulimic and have less motivation to change, which is why FBT is useful for having families intervene and support the teens. Working with BN patients and their families in FBT can empower the families by involving them in their adolescent's food choices and behaviors, taking more control of the situation in the beginning and gradually letting the adolescent become more autonomous once they have learned healthier eating habits.
Medication
Antidepressants, particularly selective serotonin reuptake inhibitors (SSRIs), are often prescribed to treat bulimia nervosa, especially when comorbid depression or anxiety disorders are present. However, medications alone are generally not sufficient and are typically used in conjunction with psychotherapy. Compared to placebo, the use of a single antidepressant has been shown to be effective, and combining medication with counseling can improve outcomes in some circumstances. Positive treatment outcomes can include abstinence from binge eating, a decrease in obsessive weight-loss behaviors and shape preoccupation, less severe psychiatric symptoms, a desire to counter the effects of binge eating, improved social functioning, and reduced relapse rates.
A combination of psychotherapy (especially CBT) and pharmacological treatment (such as SSRIs) often leads to better outcomes for individuals with bulimia. Combining both approaches is particularly beneficial in severe or chronic cases, where behavioral modification and mood stabilization are both crucial.
Alternative medicine
Some researchers have also claimed positive outcomes with hypnotherapy. The first use of hypnotherapy in bulimic patients was in 1981. Bulimic patients are easier to hypnotize than patients with anorexia nervosa. In bulimic patients, hypnotherapy focuses on learning self-control over binging and vomiting, strengthening stimulus-control techniques, enhancing the ego, improving weight control, and helping overweight patients see their body differently.
Risk factors
Being female and having bulimia nervosa takes a toll on mental health. Women frequently report an onset of anxiety at the same time as the onset of bulimia nervosa. The approximate female-to-male ratio of diagnosis is 10:1. In addition to cognitive, genetic, and environmental factors, childhood gastrointestinal problems and early pubertal maturation also increase the likelihood of developing bulimia nervosa. Another concern with eating disorders is the development of a coexisting substance use disorder.
Epidemiology
There is little data on the percentage of people with bulimia in general populations. Most studies conducted thus far have used convenience samples of hospital patients or high school or university students; research on bulimia nervosa among ethnic minorities has also been limited. Existing studies have yielded a wide range of results: between 0.1% and 1.4% of males, and between 0.3% and 9.4% of females. Studies on time trends in the prevalence of bulimia nervosa have also yielded inconsistent results. According to Gelder, Mayou and Geddes (2005), bulimia nervosa is prevalent in 1 to 2 percent of women aged 15–40 years. Bulimia nervosa occurs more frequently in developed countries and in cities, with one study finding that bulimia is five times more prevalent in cities than in rural areas. There is a perception that bulimia is most prevalent amongst girls from middle-class families; however, in a 2009 study, girls from families in the lowest income bracket studied were 153 percent more likely to be bulimic than girls from the highest income bracket. According to a study conducted in 2022 by Silen et al., which aggregated statistics gathered using various methods such as SCID, MRFS, EDE, SSAGA, and EDDI, the US, Finland, Australia, and the Netherlands had an estimated 2.1%, 2.4%, 1.0%, and 0.8% prevalence, respectively, of bulimia nervosa among females under 30 years of age. This demonstrates the prevalence of bulimia nervosa in developed, Western countries, indicating an urgency in treating adolescent women. Additionally, these statistics may misrepresent the true population affected by bulimia nervosa due to potential underreporting bias.
There are higher rates of eating disorders in groups involved in activities which idealize a slim physique, such as dance, gymnastics, modeling, cheerleading, running, acting, swimming, diving, rowing and figure skating. Bulimia is thought to be more prevalent among whites; however, a more recent study showed that African-American teenage girls were 50 percent more likely than white girls to exhibit bulimic behavior, including both binging and purging.
History
Etymology
The term bulimia comes from Greek boulīmia, "ravenous hunger", a compound of βοῦς bous, "ox" and λιμός, līmos, "hunger". Literally, the scientific name of the disorder, bulimia nervosa, translates to "nervous ravenous hunger".
Before the 20th century
Although diagnostic criteria for bulimia nervosa did not appear until 1979, evidence suggests that binging and purging were practiced in certain ancient cultures. The first documented account of behavior resembling bulimia nervosa was recorded in Xenophon's Anabasis around 370 BC, in which Greek soldiers purged themselves in the mountains of Asia Minor. It is unclear whether this purging was preceded by binging. In ancient Egypt, physicians recommended purging once a month for three days to preserve health, a practice stemming from the belief that human diseases were caused by food itself. In ancient Rome, elite society members would vomit to "make room" in their stomachs for more food at all-day banquets. The emperors Claudius and Vitellius were both gluttonous and obese, and often resorted to habitual purging.
Historical records also suggest that some saints who developed anorexia (as a result of a life of asceticism) may also have displayed bulimic behaviors. Saint Mary Magdalen de Pazzi (1566–1607) and Saint Veronica Giuliani (1660–1727) were both observed binge eating—giving in, as they believed, to the temptations of the devil. Saint Catherine of Siena (1347–1380) is known to have supplemented her strict abstinence from food by purging as reparation for her sins. Catherine died from starvation at age thirty-three.
While the psychological disorder "bulimia nervosa" is relatively new, the word "bulimia", signifying overeating, has been present for centuries. The Babylonian Talmud referenced practices of "bulimia", yet scholars believe that this simply referred to overeating without the purging or the psychological implications of bulimia nervosa. In fact, a search for evidence of bulimia nervosa from the 17th to late 19th century revealed that only a quarter of the overeating cases examined involved vomiting after the binges, and there was no evidence that the vomiting was deliberate or an attempt to control weight.
20th century
Globally, bulimia was estimated to affect 3.6 million people in 2015. About 1% of young women have bulimia at a given point in time and about 2% to 3% of women have the condition at some point in their lives. The condition is less common in the developing world. Bulimia is about nine times more likely to occur in women than men. Among women, rates are highest in young adults. Bulimia was named and first described by the British psychiatrist Gerald Russell in 1979.
At the turn of the 20th century, bulimia (overeating) was described as a clinical symptom, but rarely in the context of weight control. Purging, however, was seen in anorexic patients and attributed to gastric pain rather than being another method of weight control.
In 1930, admissions of anorexia nervosa patients to the Mayo Clinic from 1917 to 1929 were compiled. Fifty-five to sixty-five percent of these patients were reported to be voluntarily vomiting to relieve weight anxiety. Records show that purging for weight control continued throughout the mid-1900s. Several case studies from this era reveal patients with the modern description of bulimia nervosa. In 1939, Rahman and Richardson reported that out of their six anorexic patients, one had periods of overeating, and another practiced self-induced vomiting. Wulff, in 1932, treated "Patient D", who would have periods of intense cravings for food and overeat for weeks, which often resulted in frequent vomiting. Patient D, who grew up with a tyrannical father, was repulsed by her weight and would fast for a few days, rapidly losing weight. Ellen West, a patient described by Ludwig Binswanger in 1958, was teased by friends for being fat and excessively took thyroid pills to lose weight, later using laxatives and vomiting. She reportedly consumed dozens of oranges and several pounds of tomatoes each day, yet would skip meals. After being admitted to a psychiatric facility for depression, Ellen ate ravenously yet lost weight, presumably due to self-induced vomiting. However, while these patients may have met modern criteria for bulimia nervosa, they cannot technically be diagnosed with the disorder, as it had not yet appeared in the Diagnostic and Statistical Manual of Mental Disorders at the time of their treatment.
An explanation for the increased instances of bulimic symptoms may be due to the 20th century's new ideals of thinness. The shame of being fat emerged in the 1940s when teasing remarks about weight became more common. The 1950s, however, truly introduced the trend of aspiration for thinness.
In 1979, Gerald Russell first published a description of bulimia nervosa, in which he studied patients with a "morbid fear of becoming fat" who overate and purged afterward. He specified treatment options and indicated the seriousness of the disease, which can be accompanied by depression and suicide. In 1980, bulimia nervosa first appeared in the DSM-III.
After its appearance in the DSM-III, there was a sudden rise in the documented incidence of bulimia nervosa. In the early 1980s, the incidence rose to about 40 per 100,000 people, then decreased to about 27 per 100,000 people by the end of the 1980s and early 1990s. However, the prevalence of bulimia nervosa was still much higher than that of anorexia nervosa, which at the time occurred in about 14 people per 100,000.
In 1991, Kendler et al. documented the cumulative risk for bulimia nervosa for those born before 1950, from 1950 to 1959, and after 1959. The risk for those born after 1959 is much higher than those in either of the other cohorts.
| Biology and health sciences | Mental disorders | Health |
168609 | https://en.wikipedia.org/wiki/Cycle%20%28graph%20theory%29 | Cycle (graph theory) | In graph theory, a cycle in a graph is a non-empty trail in which only the first and last vertices are equal. A directed cycle in a directed graph is a non-empty directed trail in which only the first and last vertices are equal.
A graph without cycles is called an acyclic graph. A directed graph without directed cycles is called a directed acyclic graph. A connected graph without cycles is called a tree.
Definitions
Circuit and cycle
A circuit is a non-empty trail in which the first and last vertices are equal (closed trail).
Let G = (V, E) be a graph. A circuit is a non-empty trail (e1, e2, ..., en) with a vertex sequence (v1, v2, ..., vn, v1).
A cycle or simple circuit is a circuit in which only the first and last vertices are equal.
The number n of edges is called the length of the circuit (respectively, the length of the cycle).
Directed circuit and directed cycle
A directed circuit is a non-empty directed trail in which the first and last vertices are equal (closed directed trail).
Let G = (V, E) be a directed graph. A directed circuit is a non-empty directed trail (e1, e2, ..., en) with a vertex sequence (v1, v2, ..., vn, v1).
A directed cycle or simple directed circuit is a directed circuit in which only the first and last vertices are equal.
The number n of edges is called the length of the directed circuit (respectively, the length of the directed cycle).
Chordless cycle
A chordless cycle in a graph, also called a hole or an induced cycle, is a cycle such that no two vertices of the cycle are connected by an edge that does not itself belong to the cycle. An antihole is the complement of a graph hole. Chordless cycles may be used to characterize perfect graphs: by the strong perfect graph theorem, a graph is perfect if and only if none of its holes or antiholes have an odd number of vertices that is greater than three. A chordal graph, a special type of perfect graph, has no holes of any size greater than three.
The girth of a graph is the length of its shortest cycle; this cycle is necessarily chordless. Cages are defined as the smallest regular graphs with given combinations of degree and girth.
A peripheral cycle is a cycle in a graph with the property that every two edges not on the cycle can be connected by a path whose interior vertices avoid the cycle. In a graph that is not formed by adding one edge to a cycle, a peripheral cycle must be an induced cycle.
Cycle space
The term cycle may also refer to an element of the cycle space of a graph. There are many cycle spaces, one for each coefficient field or ring. The most common is the binary cycle space (usually called simply the cycle space), which consists of the edge sets that have even degree at every vertex; it forms a vector space over the two-element field. By Veblen's theorem, every element of the cycle space may be formed as an edge-disjoint union of simple cycles. A cycle basis of the graph is a set of simple cycles that forms a basis of the cycle space.
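To make the GF(2) arithmetic concrete, here is a minimal illustrative sketch in Python (not from the source; the edge-tuple representation and function name are assumptions): cycle-space elements are modeled as sets of edges, addition is symmetric difference, and membership is checked via the even-degree condition.

def is_cycle_space_element(edges):
    """Return True if the edge set has even degree at every vertex."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return all(d % 2 == 0 for d in degree.values())

# Two triangles sharing the edge (2, 3).
c1 = frozenset({(1, 2), (2, 3), (1, 3)})
c2 = frozenset({(2, 3), (3, 4), (2, 4)})

# Their GF(2) sum is the symmetric difference: the shared edge cancels,
# leaving the 4-cycle 1-2-4-3-1.
c3 = c1 ^ c2
print(sorted(c3))                  # [(1, 2), (1, 3), (2, 4), (3, 4)]
print(is_cycle_space_element(c3))  # True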
Using ideas from algebraic topology, the binary cycle space generalizes to vector spaces or modules over other rings such as the integers, rational or real numbers, etc.
Cycle detection
The existence of a cycle in directed and undirected graphs can be determined by whether a depth-first search (DFS) finds an edge that points to an ancestor of the current vertex (i.e., it contains a back edge). All the back edges which DFS skips over are part of cycles. In an undirected graph, the edge to the parent of a node should not be counted as a back edge, but finding any other already visited vertex will indicate a back edge. In the case of undirected graphs, only O(n) time is required to find a cycle in an n-vertex graph, since at most n − 1 edges can be tree edges.
Many topological sorting algorithms will detect cycles too, since those are obstacles for topological order to exist. Also, if a directed graph has been divided into strongly connected components, cycles only exist within the components and not between them, since cycles are strongly connected.
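As an illustrative sketch of how a topological sorting algorithm doubles as a cycle detector, the following Python rendering of Kahn's algorithm (the adjacency-dict representation is an assumption) reports a directed cycle whenever the repeated removal of in-degree-zero vertices fails to exhaust the graph:

from collections import deque

def has_directed_cycle(graph):
    """Kahn's algorithm as a cycle detector: repeatedly remove vertices
    of in-degree zero; vertices that are never removed lie on a
    directed cycle or are reachable only through one."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] = indegree.get(w, 0) + 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    removed = 0
    while queue:
        v = queue.popleft()
        removed += 1
        for w in graph.get(v, ()):
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    return removed < len(indegree)

print(has_directed_cycle({1: [2], 2: [3], 3: [1]}))  # True
print(has_directed_cycle({1: [2], 2: [3], 3: []}))   # False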
For directed graphs, distributed message-based algorithms can be used. These algorithms rely on the idea that a message sent by a vertex in a cycle will come back to itself.
Distributed cycle detection algorithms are useful for processing large-scale graphs using a distributed graph processing system on a computer cluster (or supercomputer).
Applications of cycle detection include the use of wait-for graphs to detect deadlocks in concurrent systems.
Algorithm
The aforementioned use of depth-first search to find a cycle can be described as follows:
For every vertex v: visited(v) = finished(v) = false
For every vertex v: DFS(v)

where

DFS(v) =
    if finished(v): return
    if visited(v):
        "Cycle found"
        return
    visited(v) = true
    for every neighbour w: DFS(w)
    finished(v) = true
For undirected graphs, "neighbour" means all vertices connected to v, except for the one that recursively called DFS(v). This omission prevents the algorithm from finding a trivial cycle of the form v→w→v; these exist in every undirected graph with at least one edge.
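A minimal runnable rendering of the directed-graph version of this procedure in Python (the adjacency-dict representation and the function name are assumptions; for undirected graphs the inner loop would additionally skip the parent, as noted above, and a large graph would call for an iterative stack rather than recursion):

def find_cycle(graph):
    """Three-state DFS for a directed graph given as a dict mapping each
    vertex to a list of successors. 'visited' marks vertices whose DFS
    call has started, 'finished' those whose call has completed; meeting
    a visited-but-unfinished vertex means a back edge, hence a cycle."""
    visited, finished = set(), set()

    def dfs(v):
        if v in finished:
            return False
        if v in visited:
            return True  # back edge to an ancestor still on the DFS stack
        visited.add(v)
        if any(dfs(w) for w in graph.get(v, ())):
            return True
        finished.add(v)
        return False

    return any(dfs(v) for v in graph)

print(find_cycle({1: [2], 2: [3], 3: [1]}))  # True
print(find_cycle({1: [2], 2: [3], 3: []}))   # False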
A variant using breadth-first search instead will find a cycle of the smallest possible length.
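A sketch of this idea for computing the girth of an undirected graph, assuming a dict-of-neighbour-sets representation (an assumption, not from the source): one BFS per start vertex, recording the shortest cycle closed by any non-tree edge. Overestimates from overlapping paths are absorbed by taking the minimum over all start vertices.

from collections import deque

def girth(graph):
    """Length of the shortest cycle in an undirected graph given as a
    dict mapping each vertex to a set of neighbours; None if acyclic.
    One BFS per start vertex, so O(V * (V + E)) overall."""
    best = None
    for root in graph:
        dist = {root: 0}
        parent = {root: None}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for w in graph[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    queue.append(w)
                elif w != parent[u]:
                    # Non-tree edge: it closes a cycle through the root.
                    length = dist[u] + dist[w] + 1
                    if best is None or length < best:
                        best = length
    return best

# A triangle with a pendant vertex: girth 3.
print(girth({1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}))  # 3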
Covering graphs by cycles
In his 1736 paper on the Seven Bridges of Königsberg, widely considered to be the birth of graph theory, Leonhard Euler proved that, for a finite undirected graph to have a closed walk that visits each edge exactly once (making it a closed trail), it is necessary and sufficient that it be connected except for isolated vertices (that is, all edges are contained in one component) and have even degree at each vertex. The corresponding characterization for the existence of a closed walk visiting each edge exactly once in a directed graph is that the graph be strongly connected and have equal numbers of incoming and outgoing edges at each vertex. In either case, the resulting closed trail is known as an Eulerian trail. If a finite undirected graph has even degree at each of its vertices, regardless of whether it is connected, then it is possible to find a set of simple cycles that together cover each edge exactly once: this is Veblen's theorem. When a connected graph does not meet the conditions of Euler's theorem, a closed walk of minimum length covering each edge at least once can nevertheless be found in polynomial time by solving the route inspection problem.
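Euler's degree-and-connectivity condition is straightforward to check; a minimal sketch in Python, assuming a simple undirected graph given as a dict of neighbour lists (self-loops not handled; the function name is an assumption):

def has_eulerian_circuit(graph):
    """Euler's condition for a simple undirected graph: every vertex has
    even degree, and all edges lie in a single connected component
    (isolated vertices are ignored)."""
    if any(len(neighbours) % 2 != 0 for neighbours in graph.values()):
        return False
    start = next((v for v in graph if graph[v]), None)
    if start is None:
        return True  # no edges at all
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in graph[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return all(v in seen for v in graph if graph[v])

# A triangle plus an isolated vertex still qualifies.
print(has_eulerian_circuit({1: [2, 3], 2: [1, 3], 3: [1, 2], 4: []}))  # True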
The problem of finding a single simple cycle that covers each vertex exactly once, rather than covering the edges, is much harder. Such a cycle is known as a Hamiltonian cycle, and determining whether it exists is NP-complete. Much research has been published concerning classes of graphs that can be guaranteed to contain Hamiltonian cycles; one example is Ore's theorem that a Hamiltonian cycle can always be found in a graph for which every non-adjacent pair of vertices have degrees summing to at least the total number of vertices in the graph.
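Ore's condition itself can be tested directly in quadratic time, even though finding the Hamiltonian cycle it guarantees is another matter; a small sketch in Python, assuming a dict mapping each vertex to its set of neighbours:

def satisfies_ore_condition(graph):
    """Check Ore's condition on a simple undirected graph: every pair of
    distinct non-adjacent vertices must have degrees summing to at least
    the number of vertices."""
    n = len(graph)
    vertices = list(graph)
    for i, u in enumerate(vertices):
        for v in vertices[i + 1:]:
            if v not in graph[u] and len(graph[u]) + len(graph[v]) < n:
                return False
    return True

# The complete graph K4 has no non-adjacent pairs, so it qualifies.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(satisfies_ore_condition(k4))  # True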
The cycle double cover conjecture states that, for every bridgeless graph, there exists a multiset of simple cycles that covers each edge of the graph exactly twice. Proving that this is true (or finding a counterexample) remains an open problem.
Graph classes defined by cycles
Several important classes of graphs can be defined by or characterized by their cycles. These include:
Bipartite graph, a graph without odd cycles (cycles with an odd number of vertices)
Cactus graph, a graph in which every nontrivial biconnected component is a cycle
Cycle graph, a graph that consists of a single cycle
Chordal graph, a graph in which every induced cycle is a triangle
Directed acyclic graph, a directed graph with no directed cycles
Forest, a cycle-free graph
Line perfect graph, a graph in which every odd cycle is a triangle
Perfect graph, a graph with no induced cycles or their complements of odd length greater than three
Pseudoforest, a graph in which each connected component has at most one cycle
Strangulated graph, a graph in which every peripheral cycle is a triangle
Strongly connected graph, a directed graph in which every edge is part of a cycle
Triangle-free graph, a graph without three-vertex cycles
Even-cycle-free graph, a graph without even cycles
Even-hole-free graph, a graph without even cycles of length 6 or more
| Mathematics | Graph theory | null |
168651 | https://en.wikipedia.org/wiki/High-performance%20liquid%20chromatography | High-performance liquid chromatography | High-performance liquid chromatography (HPLC), formerly referred to as high-pressure liquid chromatography, is a technique in analytical chemistry used to separate, identify, and quantify specific components in mixtures. The mixtures, which are dissolved into liquid solutions, can originate from food, chemical, pharmaceutical, biological, environmental, and agricultural sources, among others.
It relies on high-pressure pumps that deliver a mixture of various solvents, called the mobile phase, through the system. On its way, the mobile phase collects the sample mixture and carries it into a cylinder, called the column, packed with solid adsorbent particles, called the stationary phase.
Each component in the sample interacts differently with the adsorbent material, causing different migration rates for each component. These different rates lead to separation as the species flow out of the column into a specific detector, such as a UV detector. The output of the detector is a graph called a chromatogram. Chromatograms are graphical representations of signal intensity versus time or volume, showing peaks that represent components of the sample. Each component appears at its respective time, called its retention time, with a peak area proportional to its amount.
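As an illustrative aside (not from the source), the "area proportional to amount" relationship is what chromatography data software exploits when it integrates peaks; a minimal Python sketch of trapezoidal integration over a chosen time window, with baseline correction omitted and all names hypothetical:

def peak_area(times, signal, t_start, t_end):
    """Approximate the area of one chromatogram peak by trapezoidal
    integration of the detector signal between t_start and t_end
    (baseline correction omitted for brevity)."""
    points = [(t, s) for t, s in zip(times, signal) if t_start <= t <= t_end]
    return sum((t2 - t1) * (s1 + s2) / 2
               for (t1, s1), (t2, s2) in zip(points, points[1:]))

# A toy triangular peak sampled every 0.1 min.
times = [0.0, 0.1, 0.2, 0.3, 0.4]
signal = [0.0, 5.0, 10.0, 5.0, 0.0]
print(peak_area(times, signal, 0.0, 0.4))  # 2.0 (arbitrary units x min)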
HPLC is widely used for manufacturing (e.g., during the production process of pharmaceutical and biological products), legal (e.g., detecting performance enhancement drugs in urine), research (e.g., separating the components of a complex biological sample, or of similar synthetic chemicals from each other), and medical (e.g., detecting vitamin D levels in blood serum) purposes.
Chromatography can be described as a mass transfer process involving adsorption and/or partition. As mentioned, HPLC relies on pumps to pass a pressurized liquid and a sample mixture through a column filled with adsorbent, leading to the separation of the sample components. The active component of the column, the adsorbent, is typically a granular material made of solid particles (e.g., silica, polymers, etc.), 1.5–50 μm in size, on which various reagents can be bonded. The components of the sample mixture are separated from each other due to their different degrees of interaction with the adsorbent particles. The pressurized liquid is typically a mixture of solvents (e.g., water, buffers, acetonitrile and/or methanol) and is referred to as a "mobile phase". Its composition and temperature play a major role in the separation process by influencing the interactions taking place between sample components and adsorbent. These interactions are physical in nature, such as hydrophobic (dispersive), dipole–dipole and ionic, most often a combination.
Operation
The liquid chromatograph is a complex instrument built on sophisticated and delicate technology. To operate the system properly, the operator needs a basic understanding of how the device works and processes data, in order to avoid incorrect data and distorted results.
HPLC is distinguished from traditional ("low pressure") liquid chromatography because operational pressures are significantly higher (around 50–1400 bar), while ordinary liquid chromatography typically relies on the force of gravity to pass the mobile phase through the packed column. Due to the small sample amount separated in analytical HPLC, typical column dimensions are 2.1–4.6 mm diameter, and 30–250 mm length. Also HPLC columns are made with smaller adsorbent particles (1.5–50 μm in average particle size). This gives HPLC superior resolving power (the ability to distinguish between compounds) when separating mixtures, which makes it a popular chromatographic technique.
The schematic of an HPLC instrument typically includes solvent reservoirs, one or more pumps, a solvent degasser, a sampler, a column, and a detector. The solvents are prepared in advance according to the needs of the separation; they pass through the degasser to remove dissolved gasses, are mixed to become the mobile phase, and then flow through the sampler, which brings the sample mixture into the mobile-phase stream, which in turn carries it into the column. The pumps deliver the desired flow and composition of the mobile phase through the stationary phase inside the column, then directly into a flow cell inside the detector. The detector generates a signal proportional to the amount of sample component emerging from the column, hence allowing for quantitative analysis of the sample components. The detector also marks the time of emergence, the retention time, which serves for initial identification of the component. More advanced detectors also provide information specific to the analyte's characteristics, such as a UV-VIS spectrum or a mass spectrum, which can give insight into its structural features. Detectors in common use include UV/Vis, photodiode array (PDA)/diode array, and mass spectrometry detectors.
A digital microprocessor and user software control the HPLC instrument and provide data analysis. Some models of mechanical pumps in an HPLC instrument can mix multiple solvents together at ratios that change over time, generating a composition gradient in the mobile phase. Most HPLC instruments also have a column oven that allows for adjusting the temperature at which the separation is performed.
The sample mixture to be separated and analyzed is introduced, in a discrete small volume (typically microliters), into the stream of mobile phase percolating through the column. The components of the sample move through the column, each at a different velocity, which are a function of specific physical interactions with the adsorbent, the stationary phase. The velocity of each component depends on its chemical nature, on the nature of the stationary phase (inside the column) and on the composition of the mobile phase. The time at which a specific analyte elutes (emerges from the column) is called its retention time. The retention time, measured under particular conditions, is an identifying characteristic of a given analyte.
Many different types of columns are available, filled with adsorbents varying in particle size, porosity, and surface chemistry. The use of smaller particle size packing materials requires the use of higher operational pressure ("backpressure") and typically improves chromatographic resolution (the degree of peak separation between consecutive analytes emerging from the column). Sorbent particles may be ionic, hydrophobic or polar in nature.
The most common mode of liquid chromatography is reversed phase, whereby the mobile phases used include any miscible combination of water or buffers with various organic solvents (the most common are acetonitrile and methanol). Some HPLC techniques use water-free mobile phases (see normal-phase chromatography below). The aqueous component of the mobile phase may contain acids (such as formic, phosphoric or trifluoroacetic acid) or salts to assist in the separation of the sample components. The composition of the mobile phase may be kept constant ("isocratic elution mode") or varied ("gradient elution mode") during the chromatographic analysis. Isocratic elution is typically effective in the separation of simple mixtures. Gradient elution is required for complex mixtures, with varying interactions with the stationary and mobile phases. This is why, in gradient elution, the composition of the mobile phase is typically varied from low to high eluting strength. The eluting strength of the mobile phase is reflected in analyte retention times: high eluting strength speeds up the elution (shortening retention times). For example, a typical gradient profile in reversed-phase chromatography might start at 5% acetonitrile (in water or an aqueous buffer) and progress linearly to 95% acetonitrile over 5–25 minutes. Periods of constant mobile-phase composition (plateaus) may also be part of a gradient profile. For example, the mobile-phase composition may be kept constant at 5% acetonitrile for 1–3 minutes, followed by a linear change up to 95% acetonitrile.
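As an illustrative sketch (not an instrument API), a gradient program like the ones described can be modeled as piecewise-linear segments; the segment format, values, and function name here are assumptions:

def gradient_composition(t, segments):
    """Percent organic solvent (e.g., acetonitrile) at time t minutes,
    for a gradient programmed as (start_min, end_min, start_pct,
    end_pct) segments with linear change within each segment."""
    for start, end, p0, p1 in segments:
        if start <= t <= end:
            if end == start:
                return p1
            return p0 + (p1 - p0) * (t - start) / (end - start)
    raise ValueError("time outside the programmed gradient")

# Hypothetical profile like the one described: hold at 5% acetonitrile
# for 2 min, then ramp linearly to 95% over the next 20 min.
profile = [(0, 2, 5, 5), (2, 22, 5, 95)]
print(gradient_composition(12, profile))  # 50.0 (% acetonitrile)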
The chosen composition of the mobile phase depends on the intensity of interactions between various sample components ("analytes") and stationary phase (e.g., hydrophobic interactions in reversed-phase HPLC). Depending on their affinity for the stationary and mobile phases, analytes partition between the two during the separation process taking place in the column. This partitioning process is similar to that which occurs during a liquid–liquid extraction but is continuous, not step-wise.
In the example using a water/acetonitrile gradient, the more hydrophobic components elute late; then, once the mobile phase becomes richer in acetonitrile (i.e., becomes a stronger eluting solution), their elution speeds up.
The choice of mobile phase components, additives (such as salts or acids) and gradient conditions depends on the nature of the column and sample components. Often a series of trial runs is performed with the sample in order to find the HPLC method which gives adequate separation.
History and development
Prior to HPLC, scientists used benchtop column liquid chromatographic techniques. Liquid chromatographic systems were largely inefficient due to the flow rate of solvents being dependent on gravity. Separations took many hours, and sometimes days to complete. Gas chromatography (GC) at the time was more powerful than liquid chromatography (LC), however, it was obvious that gas phase separation and analysis of very polar high molecular weight biopolymers was impossible. GC was ineffective for many life science and health applications for biomolecules, because they are mostly non-volatile and thermally unstable at the high temperatures of GC. As a result, alternative methods were hypothesized which would soon result in the development of HPLC.
Following on the seminal work of Martin and Synge in 1941, it was predicted by Calvin Giddings, Josef Huber, and others in the 1960s that LC could be operated in a high-efficiency mode by reducing the packing-particle diameter substantially below the typical LC (and GC) level of 150 μm and using pressure to increase the mobile-phase velocity. These predictions underwent extensive experimentation and refinement throughout the 1960s and 1970s, and development continues to this day. Early developmental research began to improve LC particles; one such advance was the historic Zipax, a superficially porous particle.
The 1970s brought about many developments in hardware and instrumentation. Researchers began using pumps and injectors to make a rudimentary design of an HPLC system. Gas amplifier pumps were ideal because they operated at constant pressure and did not require leak-free seals or check valves for steady flow and good quantitation. Hardware milestones were made at Dupont IPD (Industrial Polymers Division) such as a low-dwell-volume gradient device being utilized as well as replacing the septum injector with a loop injection valve.
While instrumentation developments were important, the history of HPLC is primarily about the history and evolution of particle technology. After the introduction of porous layer particles, there has been a steady trend to reduced particle size to improve efficiency. However, by decreasing particle size, new problems arose. The practical disadvantages stem from the excessive pressure drop needed to force mobile fluid through the column and the difficulty of preparing a uniform packing of extremely fine materials. Every time particle size is reduced significantly, another round of instrument development usually must occur to handle the pressure.
Types
Partition chromatography
Partition chromatography was one of the first kinds of chromatography that chemists developed, and is rarely used these days. The partition-coefficient principle has been applied in paper chromatography, thin-layer chromatography, gas-phase and liquid–liquid separation applications. The 1952 Nobel Prize in chemistry was earned by Archer John Porter Martin and Richard Laurence Millington Synge for their development of the technique, which was used for their separation of amino acids. Partition chromatography uses a retained solvent on the surface or within the grains or fibers of an "inert" solid supporting matrix, as with paper chromatography, or takes advantage of some coulombic and/or hydrogen-donor interaction with the stationary phase. Analyte molecules partition between a liquid stationary phase and the eluent. Just as in hydrophilic interaction chromatography (HILIC; a sub-technique within HPLC), this method separates analytes based on differences in their polarity. HILIC most often uses a bonded polar stationary phase and a mobile phase made primarily of acetonitrile, with water as the strong component. Partition HPLC has been used historically on unbonded silica or alumina supports. Each works effectively for separating analytes by relative polar differences. HILIC bonded phases have the advantage of separating acidic, basic and neutral solutes in a single chromatographic run.
The polar analytes diffuse into a stationary water layer associated with the polar stationary phase and are thus retained. The stronger the interactions between the polar analyte and the polar stationary phase (relative to the mobile phase), the longer the elution time. The interaction strength depends on the functional groups that are part of the analyte's molecular structure, with more polarized groups (e.g., hydroxyl-) and groups capable of hydrogen bonding inducing more retention. Coulombic (electrostatic) interactions can also increase retention. Use of more polar solvents in the mobile phase will decrease the retention time of the analytes, whereas more hydrophobic solvents tend to increase retention times.
Normal–phase chromatography
Normal–phase chromatography was one of the first kinds of HPLC that chemists developed, but has decreased in use over the last decades. Also known as normal-phase HPLC (NP-HPLC), this method separates analytes based on their affinity for a polar stationary surface such as silica; hence it is based on analyte ability to engage in polar interactions (such as hydrogen-bonding or dipole-dipole type of interactions) with the sorbent surface. NP-HPLC uses a non-polar, non-aqueous mobile phase (e.g., chloroform), and works effectively for separating analytes readily soluble in non-polar solvents. The analyte associates with and is retained by the polar stationary phase. Adsorption strengths increase with increased analyte polarity. The interaction strength depends not only on the functional groups present in the structure of the analyte molecule, but also on steric factors. The effect of steric hindrance on interaction strength allows this method to resolve (separate) structural isomers.
The use of more polar solvents in the mobile phase will decrease the retention time of analytes, whereas more hydrophobic solvents tend to induce slower elution (increased retention times). Very polar solvents such as traces of water in the mobile phase tend to adsorb to the solid surface of the stationary phase forming a stationary bound (water) layer which is considered to play an active role in retention. This behavior is somewhat peculiar to normal phase chromatography because it is governed almost exclusively by an adsorptive mechanism (i.e., analytes interact with a solid surface rather than with the solvated layer of a ligand attached to the sorbent surface; see also reversed-phase HPLC below). Adsorption chromatography is still somewhat used for structural isomer separations in both column and thin-layer chromatography formats on activated (dried) silica or alumina supports.
Partition- and NP-HPLC fell out of favor in the 1970s with the development of reversed-phase HPLC because of poor reproducibility of retention times due to the presence of a water or protic organic solvent layer on the surface of the silica or alumina chromatographic media. This layer changes with any changes in the composition of the mobile phase (e.g., moisture level) causing drifting retention times.
Recently, partition chromatography has become popular again with the development of HILIC bonded phases, which demonstrate improved reproducibility, and due to a better understanding of the range of usefulness of the technique.
Displacement chromatography
The use of displacement chromatography is rather limited, and is mostly used for preparative chromatography. The basic principle is based on a molecule with a high affinity for the chromatography matrix (the displacer) which is used to compete effectively for binding sites, and thus displace all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired in order to achieve maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentration.
Reversed-phase liquid chromatography (RP-LC)
Reversed-phase HPLC (RP-HPLC) is the most widespread mode of chromatography. It has a non-polar stationary phase and an aqueous, moderately polar mobile phase. In reversed-phase methods, substances are retained in the system the more hydrophobic they are. For the retention of organic materials, the stationary phases packed inside the columns consist mainly of porous silica-gel granules in various shapes, mainly spherical, at different diameters (1.5, 2, 3, 5, 7, 10 μm) and with varying pore diameters (60, 100, 150, 300 Å), on whose surface various hydrocarbon ligands such as C3, C4, C8, and C18 are chemically bonded. There are also polymeric hydrophobic particles that serve as stationary phases when solutions at extreme pH are needed, as well as hybrid silica polymerized with organic substances. The longer the hydrocarbon ligand on the stationary phase, the longer the sample components can be retained. Most of the current methods for separating biomedical materials use C18-type columns, sometimes called by trade names such as ODS (octadecylsilane) or RP-18 (Reversed Phase 18).
The most common RP stationary phases are based on a silica support, which is surface-modified by bonding RMe2SiCl, where R is a straight chain alkyl group such as C18H37 or C8H17.
With such stationary phases, retention time is longer for lipophilic molecules, whereas polar molecules elute more readily (emerge early in the analysis). A chromatographer can increase retention times by adding more water to the mobile phase, thereby making the interactions of the hydrophobic analyte with the hydrophobic stationary phase relatively stronger. Similarly, an investigator can decrease retention time by adding more organic solvent to the mobile phase. RP-HPLC is so commonly used among biologists and life-science users that it is often incorrectly referred to as just "HPLC" without further specification. The pharmaceutical industry also regularly employs RP-HPLC to qualify drugs before their release.
RP-HPLC operates on the principle of hydrophobic interactions, which originate from the high symmetry in the dipolar water structure and play the most important role in all processes in life science. RP-HPLC allows the measurement of these interactive forces. The binding of the analyte to the stationary phase is proportional to the contact surface area around the non-polar segment of the analyte molecule upon association with the ligand on the stationary phase. This solvophobic effect is dominated by the force of water for "cavity reduction" around the analyte and the C18 chain versus the complex of both. The energy released in this process is proportional to the surface tension of the eluent (water: 7.3 × 10−6 J/cm², methanol: 2.2 × 10−6 J/cm²) and to the hydrophobic surface area of the analyte and the ligand, respectively. The retention can be decreased by adding a less polar solvent (methanol, acetonitrile) to the mobile phase to reduce the surface tension of water. Gradient elution uses this effect by automatically reducing the polarity and the surface tension of the aqueous mobile phase during the course of the analysis.
Structural properties of the analyte molecule can play an important role in its retention characteristics. In general, an analyte with a larger hydrophobic surface area (C–H, C–C, and generally non-polar atomic bonds, such as S–S and others) is retained longer, as it does not interact with the water structure. On the other hand, analytes with a larger polar surface area (due to the presence of polar groups such as -OH, -NH2, COO− or -NH3+ in their structure) are less retained, as they are better integrated into water. Interactions with the stationary phase can also be affected by steric effects, or exclusion effects, whereby a very large molecule may have only restricted access to the pores of the stationary phase, where the interactions with surface ligands (alkyl chains) take place. Such surface hindrance typically results in less retention.
Retention time increases with more hydrophobic (non-polar) surface area of the molecules. For example, branched chain compounds can elute more rapidly than their corresponding linear isomers because their overall surface area is lower. Similarly organic compounds with single C–C bonds frequently elute later than those with a C=C or even triple bond, as the double or triple bond makes the molecule more compact than a single C–C bond.
Another important factor is the mobile-phase pH, since it can change the hydrophobic character of an ionizable analyte. For this reason, most methods use a buffering agent, such as sodium phosphate, to control the pH. Buffers serve multiple purposes: they control pH, which affects the ionization state of ionizable analytes; they affect the charge on the ionizable silica surface of the stationary phase between the bonded-phase ligands; and in some cases they even act as ion-pairing agents to neutralize analyte charge. Ammonium formate is commonly added in mass spectrometry to improve detection of certain analytes by the formation of analyte-ammonium adducts. A volatile organic acid such as acetic acid, or most commonly formic acid, is often added to the mobile phase if mass spectrometry is used to analyze the column effluents.
Trifluoroacetic acid (TFA) as an additive to the mobile phase is widely used for complex mixtures of biomedical samples, mostly peptides and proteins, usually with UV-based detectors. TFA is rarely used in mass spectrometry methods, due to the residues it can leave in the detector and solvent delivery system, which interfere with the analysis and detection. However, TFA can be highly effective in improving retention of analytes such as carboxylic acids in applications utilizing other detectors such as UV-VIS, as it is a fairly strong organic acid. The effects of acids and buffers vary by application but generally improve chromatographic resolution when dealing with ionizable components.
Reversed-phase columns are quite difficult to damage compared with normal silica columns, thanks to the shielding effect of the bonded hydrophobic ligands; however, most reversed-phase columns consist of alkyl-derivatized silica particles and are prone to hydrolysis of the silica at extreme pH conditions in the mobile phase. Most types of RP columns should not be used with aqueous bases, as these will hydrolyze the underlying silica particle and dissolve it. Selected brands of hybrid or otherwise enhanced silica-based RP columns can be used at extreme pH conditions. The use of extremely acidic conditions is also not recommended, as they might hydrolyze the silica as well as corrode the inside walls of the metallic parts of the HPLC equipment.
As a rule, in most cases RP-HPLC columns should be flushed with clean solvent after use to remove residual acids or buffers, and stored in an appropriate composition of solvent. Some biomedical applications require a non-metallic environment for optimal separation. For such sensitive cases, a test for the metal content of a column is to inject a sample that is a mixture of 2,2'- and 4,4'-bipyridine. Because the 2,2'-bipy can chelate metal, the shape of the peak for the 2,2'-bipy will be distorted (tailed) when metal ions are present on the surface of the silica.
Size-exclusion chromatography
Size-exclusion chromatography (SEC) separates polymer molecules and biomolecules based on differences in their molecular size (actually by a particle's Stokes radius). The separation process is based on the ability of sample molecules to permeate through the pores of gel spheres packed inside the column, and depends on the relative size of the analyte molecules and the respective pore size of the packing. The process also relies on the absence of any interactions with the packing-material surface.
Two types of SEC are usually distinguished:
Gel permeation chromatography (GPC)—separation of synthetic polymers (aqueous or organic soluble). GPC is a powerful technique for polymer characterization using primarily organic solvents.
Gel filtration chromatography (GFC)—separation of water-soluble biopolymers. GFC uses primarily aqueous solvents (typically for aqueous soluble biopolymers, such as proteins, etc.).
The separation principle in SEC is based on the full or partial penetration of the sample molecules into the porous stationary-phase particles during their transport through the column. The mobile-phase eluent is selected in such a way that it totally prevents interactions with the stationary phase's surface. Under these conditions, the smaller the molecule, the more it is able to penetrate the pore space, and the longer its movement through the column takes. On the other hand, the bigger the molecule, the higher the probability that it will not fully penetrate the pores of the stationary phase, and may even travel around them, and thus it will elute earlier. The molecules are separated in order of decreasing molecular weight, with the largest molecules eluting from the column first and smaller molecules eluting later. Molecules larger than the pore size do not enter the pores at all and elute together as the first peak in the chromatogram; this is called the total exclusion volume, which defines the exclusion limit for a particular column. Small molecules permeate fully through the pores of the stationary-phase particles and are eluted last, marking the end of the chromatogram, and may appear as a total penetration marker.
In the biomedical sciences, SEC is generally considered a low-resolution chromatography and is thus often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary and quaternary structure of purified proteins. SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC also works in a preparative way by trapping the smaller molecules in the pores of the particles. The larger molecules simply pass by the pores, as they are too large to enter them. Larger molecules therefore flow through the column more quickly than smaller molecules: that is, the smaller the molecule, the longer the retention time.
This technique is widely used for the molecular weight determination of polysaccharides. SEC is the official technique (suggested by European pharmacopeia) for the molecular weight comparison of different commercially available low-molecular weight heparins.
Ion-exchange chromatography
Ion-exchange chromatography (IEC) or ion chromatography (IC) is an analytical technique for the separation and determination of ionic solutes in aqueous samples of environmental and industrial origin, such as from the metal industry, industrial waste water, biological systems, pharmaceutical samples, food, etc. Retention is based on the attraction between solute ions and charged sites bound to the stationary phase. Solute ions with the same charge as the ions on the column are repelled and elute without retention, while solute ions with a charge opposite to that of the charged sites of the column are retained on it. Solute ions that are retained on the column can be eluted from it by changing the mobile-phase composition, such as by increasing its salt concentration or pH, or by increasing the column temperature.
Types of ion exchangers include polystyrene resins, cellulose and dextran ion exchangers (gels), and controlled-pore glass or porous silica gel. Polystyrene resins allow cross-linking, which increases the stability of the chain. Higher cross-linking reduces swelling, which increases the equilibration time and ultimately improves selectivity. Cellulose and dextran ion exchangers possess larger pore sizes and lower charge densities, making them suitable for protein separation.
In general, ion exchangers favor the binding of ions of higher charge and smaller radius.
An increase in the concentration of the counter ion (relative to the functional groups of the resin) reduces the retention time, as it creates strong competition with the solute ions. A decrease in pH reduces the retention time in cation exchange, while an increase in pH reduces the retention time in anion exchange. By lowering the pH of the solvent in a cation-exchange column, for instance, more hydrogen ions are available to compete for positions on the anionic stationary phase, thereby eluting weakly bound cations.
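As a rough quantitative picture, ion-exchange retention is often described by an empirical displacement model in which the retention factor falls as a power of the counter-ion concentration. A hedged sketch, with an assumed lumped binding constant and effective solute charge (both hypothetical values, not from the text):

```python
# Empirical displacement model often used for ion exchange (hedged sketch):
# k = K * c_salt**(-z), so retention drops steeply as salt concentration rises.
K = 0.5   # hypothetical lumped binding constant for this solute/resin pair
z = 2.0   # hypothetical effective charge of the solute

for c_salt in (0.05, 0.1, 0.2, 0.5):   # counter-ion concentration, mol/L
    k = K * c_salt ** (-z)             # retention factor at this salt level
    print(f"c_salt = {c_salt:4.2f} M -> k = {k:7.1f}")
```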
This form of chromatography is widely used in the following applications: water purification, preconcentration of trace components, ligand-exchange chromatography, ion-exchange chromatography of proteins, high-pH anion-exchange chromatography of carbohydrates and oligosaccharides, and others.
Bioaffinity chromatography
High performance affinity chromatography (HPAC) works by passing a sample solution through a column packed with a stationary phase that contains an immobilized biologically active ligand. The ligand is in fact a substrate that has a specific binding affinity for the target molecule in the sample solution. The target molecule binds to the ligand, while the other molecules in the sample solution pass through the column, having little or no retention. The target molecule is then eluted from the column using a suitable elution buffer.
This chromatographic process relies on the capability of the bonded active substances to form stable, specific, and reversible complexes thanks to their biological recognition of certain specific sample components. The formation of these complexes involves the participation of common molecular forces such as the Van der Waals interaction, electrostatic interaction, dipole-dipole interaction, hydrophobic interaction, and the hydrogen bond. An efficient, biospecific bond is formed by a simultaneous and concerted action of several of these forces in the complementary binding sites.
Aqueous normal-phase chromatography
Aqueous normal-phase chromatography (ANP), also called hydrophilic interaction liquid chromatography (HILIC), is a chromatographic technique that encompasses the mobile-phase region between reversed-phase chromatography (RP) and organic normal-phase chromatography (ONP). HILIC is used to achieve unique selectivity for hydrophilic compounds, showing normal-phase elution order while using "reversed-phase solvents", i.e., relatively polar, mostly non-aqueous mobile phases. Many biological molecules, especially those found in biological fluids, are small polar compounds that are not well retained by reversed-phase HPLC. This has made HILIC an attractive alternative and useful approach for the analysis of polar molecules. Additionally, because HILIC routinely uses aqueous mixtures with polar organic solvents such as acetonitrile (ACN) and methanol, it can be easily coupled to MS.
Isocratic and gradient elution
A separation in which the mobile phase composition remains constant throughout the procedure is termed isocratic (meaning constant composition). The word was coined by Csaba Horvath who was one of the pioneers of HPLC.
The mobile phase composition does not have to remain constant. A separation in which the mobile phase composition is changed during the separation process is described as a gradient elution. For example, a gradient can start at 10% methanol in water, and end at 90% methanol in water after 20 minutes. The two components of the mobile phase are typically termed "A" and "B"; A is the "weak" solvent which allows the solute to elute only slowly, while B is the "strong" solvent which rapidly elutes the solutes from the column. In reversed-phase chromatography, solvent A is often water or an aqueous buffer, while B is an organic solvent miscible with water, such as acetonitrile, methanol, THF, or isopropanol.
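A minimal sketch of the linear gradient just described, computing the percentage of strong solvent B (methanol) at any time during the assumed 20-minute run:

```python
# Linear gradient program sketch: solvent B ramps from 10% to 90% over 20 min.
def percent_b(t_min: float, start=10.0, end=90.0, t_gradient=20.0) -> float:
    """Return %B in the mobile phase at time t_min for a linear gradient."""
    if t_min <= 0:
        return start
    if t_min >= t_gradient:
        return end
    return start + (end - start) * (t_min / t_gradient)

for t in (0, 5, 10, 15, 20):
    print(f"t = {t:2d} min -> {percent_b(t):4.1f}% methanol (B)")
```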
In isocratic elution, peak width increases linearly with retention time according to the equation for N, the number of theoretical plates. This can be a major disadvantage when analyzing a sample that contains analytes with a wide range of retention factors. With a weaker mobile phase, the run time is lengthened and the slowly eluting peaks become broad, reducing sensitivity. A stronger mobile phase improves the run time and the broadening of later peaks, but diminishes peak separation, especially for quickly eluting analytes, which may have insufficient time to resolve fully. This issue is addressed by the changing mobile phase composition of gradient elution.
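To make the broadening concrete: for a Gaussian peak, the baseline width is about w = 4tR/√N, so at constant plate count late peaks are proportionally wider. A small sketch with an assumed plate count (all values illustrative):

```python
# Isocratic peak broadening sketch: at constant plate count N, baseline peak
# width grows linearly with retention time, w = 4 * tR / sqrt(N).
import math

N = 10_000  # assumed plate count for the column

for t_r in (2.0, 5.0, 10.0, 20.0):      # retention times in minutes
    w = 4 * t_r / math.sqrt(N)          # baseline width of a Gaussian peak
    print(f"tR = {t_r:5.1f} min -> width = {w:4.2f} min")
```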
By starting from a weaker mobile phase and strengthening it during the runtime, gradient elution decreases the retention of the later-eluting components so that they elute faster, giving narrower (and taller) peaks for most components, while also allowing for the adequate separation of earlier-eluting components. This also improves the peak shape for tailed peaks, as the increasing concentration of the organic eluent pushes the tailing part of a peak forward. This also increases the peak height (the peak looks "sharper"), which is important in trace analysis. The gradient program may include sudden "step" increases in the percentage of the organic component, or different slopes at different times – all according to the desire for optimum separation in minimum time.
In isocratic elution, the retention order does not change if the column dimensions (length and inner diameter) change – that is, the peaks elute in the same order. In gradient elution, however, the elution order may change as the dimensions or flow rate change, if these are not scaled down or up in proportion to the change.
The driving force in reversed phase chromatography originates in the high order of the water structure. The role of the organic component of the mobile phase is to reduce this high order and thus reduce the retarding strength of the aqueous component.
Parameters
Theoretical
The theory of high performance liquid chromatography (HPLC) is, at its core, the same as general chromatography theory. This theory has been used as the basis for system-suitability tests, as can be seen in the United States Pharmacopeia (USP): a set of quantitative criteria that test the suitability of the HPLC system for the required analysis at any of its steps.
The relation between retention and elution time is also represented as a normalized, unit-less factor known as the retention factor, or retention parameter, which is the experimental measurement of the capacity ratio. Here tR is the retention time of the specific component and t0 is the time it takes for a non-retained substance to elute through the system without any retention, which is therefore called the void time; the retention factor is defined as k' = (tR − t0) / t0.
The ratio of the retention factors, k', of any two adjacent peaks in the chromatogram is used to evaluate the degree of separation between them, and is called the selectivity factor, α.
The plate count N as a criterion for system efficiency was developed for isocratic conditions, i.e., a constant mobile phase composition throughout the run. In gradient conditions, where the mobile phase composition changes with time during the chromatographic run, it is more appropriate to use the parameter peak capacity Pc as the measure of system efficiency. Peak capacity is defined as the number of peaks that can be separated within a retention window at a specific, pre-defined resolution factor, usually ~1. It can also be envisioned as the run time measured in numbers of average peak widths: Pc = 1 + tg/w(ave), where tg is the gradient time and w(ave) is the average peak width at the base.
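A numeric sketch of the three criteria just defined (retention factor, selectivity factor, and peak capacity), with illustrative times standing in for the performance-criteria figure referenced in the original text:

```python
# Performance criteria sketch; all times below are illustrative assumptions.
t0 = 1.0              # void time: elution time of an unretained substance (min)
tr1, tr2 = 4.0, 5.0   # retention times of two adjacent peaks (min)

k1 = (tr1 - t0) / t0      # retention factor of peak 1
k2 = (tr2 - t0) / t0      # retention factor of peak 2
alpha = k2 / k1           # selectivity factor of the adjacent pair

t_g = 20.0    # gradient time (min)
w_ave = 0.2   # average peak width at base (min)
pc = 1 + t_g / w_ave      # peak capacity under gradient conditions

print(f"k1 = {k1:.2f}, k2 = {k2:.2f}, alpha = {alpha:.2f}, Pc = {pc:.0f}")
```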
The parameters are largely derived from two sets of chromatographic theory: plate theory (as part of partition chromatography) and the rate theory of chromatography (the Van Deemter equation). They are put into practice through the analysis of HPLC chromatograms, although rate theory is considered the more accurate theory.
They are analogous to the calculation of the retention factor for a paper chromatography separation, but describe how well HPLC separates a mixture into two or more components that are detected as peaks (bands) on a chromatogram. The HPLC parameters are the efficiency factor (N), the retention factor (kappa prime), and the separation factor (alpha). Together these factors are variables in a resolution equation, which describes how well two components' peaks are separated or overlap each other. These parameters are mostly used only for describing HPLC reversed-phase and HPLC normal-phase separations, since those separations tend to be more subtle than other HPLC modes (e.g., ion exchange and size exclusion).
Void volume is the amount of space in a column that is occupied by solvent. It is the space within the column that is outside of the column's internal packing material. Void volume is measured on a chromatogram as the first component peak detected, which is usually the solvent that was present in the sample mixture; ideally the sample solvent flows through the column without interacting with the column, but is still detectable as distinct from the HPLC solvent. The void volume is used as a correction factor.
The efficiency factor (N) practically measures how sharp the component peaks on the chromatogram are, as the ratio of a component's retention time to the width of its peak at its widest point (at the baseline). Peaks that are tall, sharp, and relatively narrow indicate that the separation method efficiently removed a component from the mixture, i.e., high efficiency. Efficiency is very dependent upon the HPLC column and the HPLC method used. The efficiency factor is synonymous with plate number and the 'number of theoretical plates'.
The retention factor (kappa prime) measures how long a component of the mixture stuck to the column, as measured by its retention time on the chromatogram (since HPLC chromatograms are a function of time). Each chromatogram peak has its own retention factor (e.g., kappa1 for the retention factor of the first peak). This factor may be corrected for by the void volume of the column.
Separation factor (alpha) is a relative comparison on how well two neighboring components of the mixture were separated (i.e., two neighboring bands on a chromatogram). This factor is defined in terms of a ratio of the retention factors of a pair of neighboring chromatogram peaks, and may also be corrected for by the void volume of the column. The greater the separation factor value is over 1.0, the better the separation, until about 2.0 beyond which an HPLC method is probably not needed for separation.
Resolution equations relate the three factors such that high efficiency and separation factors improve the resolution of component peaks in an HPLC separation.
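The text does not reproduce the resolution equation itself, so the sketch below uses its common (Purnell) form, Rs = (√N/4)·((α − 1)/α)·(k2/(1 + k2)), together with the standard baseline-width definition of plate count; the peak values are illustrative assumptions:

```python
# Resolution sketch using the common (Purnell) form of the resolution equation.
import math

def plate_count(t_r: float, w_base: float) -> float:
    """Efficiency from retention time and baseline peak width: N = 16 (tR/w)^2."""
    return 16 * (t_r / w_base) ** 2

def resolution(n: float, alpha: float, k2: float) -> float:
    """Resolution of two adjacent peaks from N, selectivity and retention."""
    return (math.sqrt(n) / 4) * ((alpha - 1) / alpha) * (k2 / (1 + k2))

n = plate_count(t_r=5.0, w_base=0.2)   # illustrative peak: N = 10,000
print(f"N = {n:.0f}, Rs = {resolution(n, alpha=1.25, k2=4.0):.2f}")
```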
Internal diameter
The internal diameter (ID) of an HPLC column is an important parameter. A reduced diameter can improve the detection response, due to the reduced lateral diffusion of the solute band. It can also affect separation selectivity when flow rate and injection volumes are not scaled down or up in proportion to the smaller or larger diameter used, in both the isocratic and the gradient modes (a scaling sketch follows the list below). It determines the quantity of analyte that can be loaded onto the column. Larger-diameter columns are usually seen in preparative applications, such as the purification of a drug product for later use. Low-ID columns offer improved sensitivity and lower solvent consumption, as in the recent ultra-high performance liquid chromatography (UHPLC).
Larger ID columns (over 10 mm) are used to purify usable amounts of material because of their large loading capacity.
Analytical scale columns (4.6 mm) have been the most common type of columns, though narrower columns are rapidly gaining in popularity. They are used in traditional quantitative analysis of samples and often use a UV-Vis absorbance detector.
Narrow-bore columns (1–2 mm) are used for applications where more sensitivity is desired, either with special UV-Vis detectors, fluorescence detection, or with other detection methods like liquid chromatography-mass spectrometry.
Capillary columns (under 0.3 mm) are used almost exclusively with alternative detection means such as mass spectrometry. They are usually made from fused silica capillaries, rather than the stainless steel tubing that larger columns employ.
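The flow-rate scaling mentioned above can be sketched as follows: to preserve linear velocity (and hence retention behaviour) when changing column ID, the flow rate is scaled with the square of the diameter ratio. The column dimensions and flow rate below are illustrative:

```python
# Method-transfer sketch between column diameters: keeping the same linear
# velocity means flow rate scales with cross-sectional area, i.e. ID squared.
def scaled_flow(f1_ml_min: float, id1_mm: float, id2_mm: float) -> float:
    """Flow rate on the new column preserving linear velocity: F2 = F1*(d2/d1)^2."""
    return f1_ml_min * (id2_mm / id1_mm) ** 2

# e.g. moving a 1.0 mL/min method from a 4.6 mm column to a 2.1 mm column
print(f"{scaled_flow(1.0, 4.6, 2.1):.2f} mL/min")   # ~0.21 mL/min
```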
Particle size
Most traditional HPLC is performed with the stationary phase attached to the outside of small spherical silica particles (very small beads). These particles come in a variety of sizes with 5 μm beads being the most common. Smaller particles generally provide more surface area and better separations, but the pressure required for optimum linear velocity increases by the inverse of the particle diameter squared.
According to the equations for column velocity, efficiency, and backpressure, reducing the particle diameter by half while keeping the size of the column the same doubles the column velocity and efficiency, but increases the backpressure fourfold. Small-particle HPLC can also reduce peak broadening. Larger particles are used in preparative HPLC (column diameters from 5 cm up to >30 cm) and for non-HPLC applications such as solid-phase extraction.
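A sketch of this scaling under the stated assumptions (constant column length, backpressure inversely proportional to the square of the particle diameter, efficiency inversely proportional to the particle diameter); the reference values are illustrative:

```python
# Particle-size scaling sketch: halving dp roughly doubles efficiency
# (N ~ 1/dp at fixed column length) and quadruples backpressure (P ~ 1/dp^2).
dp_ref, n_ref, p_ref = 5.0, 10_000, 10.0   # 5 um particles: plate count, MPa

for dp in (5.0, 2.5, 1.7):                 # particle diameters in micrometres
    n = n_ref * (dp_ref / dp)              # plate count scales as 1/dp
    p = p_ref * (dp_ref / dp) ** 2         # backpressure scales as 1/dp^2
    print(f"dp = {dp:3.1f} um -> N = {n:7.0f}, P = {p:5.1f} MPa")
```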
Pore size
Many stationary phases are porous to provide greater surface area. Small pores provide greater surface area while larger pore size has better kinetics, especially for larger analytes. For example, a protein which is only slightly smaller than a pore might enter the pore but does not easily leave once inside.
Pump pressure
Pumps vary in pressure capacity, but their performance is measured by their ability to yield a consistent and reproducible volumetric flow rate. Pressure may reach as high as 60 MPa (6,000 lbf/in2), or about 600 atmospheres. Modern HPLC systems have been improved to work at much higher pressures, and are therefore able to use much smaller particle sizes in the columns (<2 μm). These "ultra-high performance liquid chromatography" systems, or UHPLCs, also known as ultra-high pressure chromatography systems, can work at up to 120 MPa (17,405 lbf/in2), or about 1,200 atmospheres. The term "UPLC" is a trademark of the Waters Corporation, but is sometimes used to refer to the more general technique of UHPLC.
Detectors
HPLC detectors fall into two main categories: universal and selective. Universal detectors typically measure a bulk property (e.g., refractive index) by measuring a difference in a physical property between the mobile phase alone and the mobile phase with solute, while selective detectors measure a solute property (e.g., UV-Vis absorbance) by responding to a physical or chemical property of the solute itself. HPLC most commonly uses a UV-Vis absorbance detector; however, a wide range of other chromatography detectors can be used. A universal detector that complements UV-Vis absorbance detection is the charged aerosol detector (CAD). Another commonly used detector is the refractive index detector, which provides readings by measuring the changes in the refractive index of the eluent as it moves through the flow cell. In certain cases, multiple detectors can be used; for example, LC-MS normally combines UV-Vis with a mass spectrometer.
When used with an electrochemical detector (ECD), HPLC-ECD selectively detects neurotransmitters such as norepinephrine, dopamine, serotonin, glutamate, GABA, and acetylcholine in neurochemical analysis research applications, down to the femtomolar range. Other methods of detecting neurotransmitters include liquid chromatography-mass spectrometry, ELISA, and radioimmunoassays.
Autosamplers
Large numbers of samples can be automatically injected onto an HPLC system by the use of HPLC autosamplers. In addition, autosamplers use an injection volume and technique that is exactly the same for each injection, so they provide a high degree of injection-volume precision. Sample stirring within the sampling chamber can be enabled to promote homogeneity.
Applications
Manufacturing
HPLC has many applications in both laboratory and clinical science. It is a common technique used in pharmaceutical development, as it is a dependable way to obtain and ensure product purity. While HPLC can produce extremely high quality (pure) products, it is not always the primary method used in the production of bulk drug materials. According to the European pharmacopoeia, HPLC is used in only 15.5% of syntheses. However, it plays a role in 44% of syntheses in the United States pharmacopoeia. This could possibly be due to differences in monetary and time constraints, as HPLC on a large scale can be an expensive technique. An increase in specificity, precision, and accuracy that occurs with HPLC unfortunately corresponds to an increase in cost.
Legal
This technique is also used for the detection of illicit drugs in various samples. The most common method of drug detection has been immunoassay, which is more convenient; however, that convenience comes at the cost of specificity and of coverage of a wide range of drugs, so HPLC has been used as an alternative method. As HPLC is a method of determining (and possibly increasing) purity, using HPLC alone to evaluate drug concentrations was somewhat insufficient; HPLC in this context is therefore often performed in conjunction with mass spectrometry. Using liquid chromatography-mass spectrometry (LC-MS) instead of gas chromatography-mass spectrometry (GC-MS) circumvents the need for derivatizing with acetylating or alkylating agents, which can be a burdensome extra step. LC-MS has been used to detect a variety of agents, such as doping agents, drug metabolites, glucuronide conjugates, amphetamines, opioids, cocaine, benzodiazepines (BZDs), ketamine, LSD, cannabis, and pesticides. Performing HPLC in conjunction with mass spectrometry reduces the absolute need for standardizing HPLC experimental runs.
Research
Similar assays can be performed for research purposes, detecting concentrations of potential clinical candidates such as anti-fungal and asthma drugs. This technique is also useful for observing multiple species in collected samples, but requires the use of standard solutions when information about species identity is sought. It is used as a method to confirm the results of synthesis reactions, as purity is essential in this type of research. However, mass spectrometry is still the more reliable way to identify species.
Medical and health sciences
Medical uses of HPLC typically employ a mass spectrometer (MS) as the detector, and the technique is then called LC-MS, or LC-MS/MS for tandem MS, in which two stages of MS are operated sequentially. When the HPLC instrument is connected to more than one detector, it is called a hyphenated LC system. Pharmaceutical applications are the major users of HPLC, LC-MS, and LC-MS/MS. These include drug development and pharmacology (the scientific study of the effects of drugs and chemicals on living organisms), personalized medicine, public health, and diagnostics. While urine is the most common medium for analyzing drug concentrations, blood serum is the sample collected for most medical analyses with HPLC. One of the most important roles of LC-MS and LC-MS/MS in the clinical laboratory is newborn screening (NBS) for metabolic disorders and follow-up diagnostics. The infants' samples come in the form of dried blood spots (DBS), which are simple to prepare and transport, enabling safe and accessible diagnostics, both locally and globally.
Other methods of detecting molecules useful for clinical studies have been tested against HPLC, namely immunoassays. In one example, competitive protein binding assays (CPBA) and HPLC were compared for sensitivity in the detection of vitamin D, which is useful for diagnosing vitamin D deficiency in children; the sensitivity and specificity of the CPBA reached only 40% and 60%, respectively, of the capacity of HPLC. While HPLC is an expensive tool, its accuracy is nearly unparalleled.
Panavia Tornado
The Panavia Tornado is a family of twin-engine, variable-sweep wing multi-role combat aircraft, jointly developed and manufactured by Italy, the United Kingdom and Germany. There are three primary Tornado variants: the Tornado IDS (interdictor/strike) fighter-bomber, the Tornado ECR (electronic combat/reconnaissance) SEAD aircraft and the Tornado ADV (air defence variant) interceptor aircraft.
The Tornado was developed and built by Panavia Aircraft GmbH, a tri-national consortium consisting of British Aerospace (previously British Aircraft Corporation), MBB of West Germany, and Aeritalia of Italy. It first flew on 14 August 1974 and was introduced into service in 1979–1980. Due to its multirole design, it was able to replace several different types of aircraft in the adopting air forces. The Royal Saudi Air Force (RSAF) became the only export operator of the Tornado, in addition to the three original partner nations. A training and evaluation unit operating from RAF Cottesmore, the Tri-National Tornado Training Establishment, maintained a level of international co-operation beyond the production stage.
The Tornado was operated by the Royal Air Force (RAF), Italian Air Force, and RSAF during the Gulf War of 1991, in which the Tornado conducted many low-altitude penetrating strike missions. The Tornados of various services were also used in the Bosnian War, Kosovo War, Iraq War, in Libya during the 2011 Libyan civil war, as well as smaller roles in Afghanistan, Yemen, and Syria. Including all variants, 990 aircraft were built.
Development
Origins
During the 1960s, aeronautical designers looked to variable-geometry wing designs to combine the maneuverability and efficient cruise of straight wings with the speed of swept wings. The United Kingdom had cancelled the procurement of the BAC TSR-2 tactical strike and reconnaissance aircraft in 1965, and then, in 1967, the US General Dynamics F-111K that was supposed to fulfil the same role, and was still looking for a replacement for its Avro Vulcan strategic bomber and Blackburn Buccaneer strike aircraft. Britain and France had initiated the BAC/Dassault AFVG ("Anglo-French Variable Geometry") project in 1965, but this ended with French withdrawal in 1967. Britain continued to develop a variable-geometry aircraft similar to the proposed AFVG, and sought new partners to achieve this. From 1964 to 1968, West Germany's EWR, together first with Boeing and then with Fairchild-Hiller and Republic Aviation, had been developing design studies of the swing-wing EWR-Fairchild-Hiller A400 AVS (Advanced Vertical Strike), which had a configuration similar to the Tornado's.
In 1968, West Germany, the Netherlands, Belgium, Italy and Canada formed a working group to examine replacements for the Lockheed F-104G Starfighter multi-role fighter-bomber, initially called the Multi Role Aircraft (MRA), later renamed as the Multi Role Combat Aircraft (MRCA). As the partner nations' requirements were so diverse, it was decided to develop a single aircraft that could perform a variety of missions that were previously undertaken by a fleet of different aircraft. Britain joined the MRCA group in 1968, represented by Air Vice-Marshal Michael Giddings, and a memorandum of agreement was drafted between Britain, West Germany, and Italy in May 1969.
By the end of 1968, the prospective purchases from the six countries amounted to 1,500 aircraft. Canada and Belgium departed before any long-term commitments had been made to the programme. Canada had found the project politically unpalatable, as there was a perception in political circles that much of the manufacturing and specifications were focused on Western Europe, while France had made Belgium a favourable offer on the Dassault Mirage 5.
Panavia Aircraft GmbH
On 26 March 1969, four partner nations – the United Kingdom, Germany, Italy and the Netherlands – agreed to form a multinational company, Panavia Aircraft GmbH, to develop and manufacture the MRCA. The project's aim was to produce an aircraft capable of undertaking missions in the tactical strike, reconnaissance, air defence, and maritime roles. Various concepts, including alternative fixed-wing and single-engine designs, were studied while defining the aircraft. The Netherlands pulled out of the project in 1970, citing that the aircraft was too complicated and technical for the RNLAF, which had sought a simpler aircraft with outstanding manoeuvrability. A further blow was struck when the German requirement was reduced from an initial 600 aircraft to 324 in 1972. It has been suggested that Germany deliberately placed an unrealistically high initial order to secure the company headquarters and the initial test flight in Germany rather than the UK, in order to have a bigger design influence.
When the agreement was finalised, the United Kingdom and West Germany each had a 42.5% stake of the workload, with the remaining 15% going to Italy; this division of the production work was heavily influenced by international political bargaining. The front fuselage and tail assembly was assigned to BAC (now BAE Systems) in the United Kingdom; the centre fuselage to MBB (now part of Airbus) in West Germany; and the wings to Aeritalia (now Leonardo) in Italy. Similarly, tri-national worksharing was used for engines and equipment. A separate multinational company, Turbo-Union, was formed in June 1970 to develop and build the RB199 engines for the aircraft, with ownership split 40% Rolls-Royce, 40% MTU, and 20% FIAT.
At the conclusion of the project definition phase in May 1970, the concepts were reduced to two designs; a single seat Panavia 100 which West Germany initially preferred, and the twin-seat Panavia 200 which the RAF preferred. The aircraft was briefly called the Panavia Panther, and the project soon coalesced towards the two-seat option. In September 1971, the three governments signed an Intention to Proceed (ITP) document, at which point the aircraft was intended solely for the low-level strike mission, where it was viewed as a viable threat to Soviet defences in that role. It was at this point that Britain's Chief of the Defence Staff announced, "two-thirds of the fighting front line will be composed of this single, basic aircraft type".
Prototypes and testing
The first of fifteen development aircraft (nine prototypes, P01 to P09, and six pre-series aircraft, PS11 to PS16) flew on 14 August 1974 at Manching, Germany; the pilot, Paul Millett, described his experience: "Aircraft handling was delightful... the actual flight went so smoothly that I did begin to wonder whether this was not yet another simulation". Flight testing led to the need for minor modifications. Airflow disturbances were corrected by re-profiling the engine intakes and the fuselage to minimise surging and buffeting at supersonic speeds.
According to Jim Quinn, a programmer of the Tornado development simulation software and an engineer on the Tornado engine and engine controls, the prototype was safely capable of supercruise, but the engines had severe safety issues at high altitude when decelerating. At high altitude and low turbine speed, the compressor did not provide enough pressure to hold back the combustion pressure, resulting in violent vibration as the combustion pressure backfired into the intake. To avoid this effect, the engine controls automatically raised the minimum idle setting as altitude increased, until at very high altitudes the idle setting was close to maximum dry thrust. This once left a test aircraft stuck in a Mach 1.2 supercruise at high altitude, forced to reduce speed by turning, because the idle setting at that altitude was too high for the aircraft to decelerate.
Testing revealed that a nose-wheel steering augmentation system, connecting with the yaw damper, was necessary to counteract the destabilising effect produced by deploying the thrust reverser during the landing roll.
From 1967 until 1984, the head of the planning department at West Germany's Messerschmitt-Bölkow-Blohm, Manfred Rotsch, provided Soviet KGB agents with details of the Tornado.
Two prototypes were lost in accidents, both primarily caused by poor piloting decisions and errors that led to ground collisions; a third Tornado prototype was seriously damaged in an incident involving pilot-induced pitch oscillation. During the type's development, aircraft designers of the era were beginning to incorporate features such as more sophisticated stability augmentation systems and autopilots; aircraft such as the Tornado and the General Dynamics F-16 Fighting Falcon made use of these new technologies. Failure testing of the Tornado's triplex analogue command and stability augmentation system (CSAS) was conducted on a series of realistic flight control rigs; the variable-sweep wings, in combination with varying and frequently very heavy payloads, complicated the clearance process.
Production
The contract for the Batch 1 aircraft was signed on 29 July 1976. The first flight of a production aircraft was on 10 July 1979 by ZA319 at BAe Warton. The first aircraft were delivered to the RAF and German Air Force on 5 and 6 June 1979 respectively. The first Italian Tornado was delivered on 25 September 1981. On 29 January 1981, the Tri-National Tornado Training Establishment (TTTE) officially opened at RAF Cottesmore, remaining active in training pilots from all operating nations until 31 March 1999. The 500th Tornado to be produced was delivered to West Germany on 19 December 1987.
Export customers were sought after West Germany withdrew its objections to exporting the aircraft; Saudi Arabia was the only export customer of the Tornado. The agreement to purchase the Tornado was part of the Al-Yamamah arms deal between British Aerospace and the Saudi government. Oman had committed to purchasing eight Tornado F2s and the equipment to operate them for a total value of £250 million in August 1985, but cancelled the order in 1990 due to financial difficulties.
During the 1970s, Australia considered joining the MRCA programme to find a replacement for their ageing Dassault Mirage IIIs; ultimately the McDonnell Douglas F/A-18 Hornet was selected to meet the requirement. Canada similarly opted for the F/A-18 after considering the Tornado. Japan considered the Tornado in the 1980s, along with the F-16 and F/A-18, before selecting the Mitsubishi F-2. In the 1990s, both Taiwan and South Korea expressed interest in acquiring a small number of Tornado ECR aircraft. In 2001, EADS proposed a Tornado ECR variant with a greater electronic warfare capability for Australia.
Production came to an end in 1998, with the last batch of aircraft produced going to the Royal Saudi Air Force, which had ordered a total of 96 IDS Tornados. In June 2011, it was announced that the Tornado fleet had collectively flown over one million flying hours. Aviation author Jon Lake noted that "The Trinational Panavia Consortium produced just short of 1,000 Tornados, making it one of the most successful postwar bomber programs". In 2008, AirForces Monthly said of the Tornado: "For more than a quarter of a century ... the most important military aircraft in Western Europe."
Design
Overview
The Panavia Tornado is a multirole, twin-engined aircraft designed to excel at low-level penetration of enemy defences. The mission envisaged during the Cold War was the delivery of conventional and nuclear ordnance on the invading forces of the Warsaw Pact countries of Eastern Europe; this dictated several significant features of the design. Variable wing geometry allowed for minimal drag during the low-level dash towards a well-prepared enemy. Advanced navigation and flight computers, including the then-innovative fly-by-wire system, greatly reduced the workload of the pilot during low-level flight and eased control of the aircraft. For long range missions, the Tornado has a retractable refuelling probe.
As a multirole aircraft, the Tornado is capable of undertaking more mission profiles than the anticipated strike mission; various operators replaced multiple aircraft types with the Tornado as a common type – the use of dedicated single role aircraft for specialist purposes such as battlefield reconnaissance, maritime patrol duties, or dedicated electronic countermeasures (ECM) were phased out – either by standard Tornados or modified variants, such as the Tornado ECR. The most extensive modification from the base Tornado design was the Tornado ADV, which was stretched and armed with long range anti-aircraft missiles to serve in the interceptor role.
Tornado operators have undertaken various life extension and upgrade programmes to keep their Tornado fleets viable as frontline aircraft. With these upgrades, the Tornado is projected to remain in service until 2025, more than 50 years after the first prototype took flight.
Variable-sweep wing
In order for the Tornado to perform well as a low-level supersonic strike aircraft, it was considered necessary for it to possess good high-speed and low-speed flight characteristics. To achieve high-speed performance, a swept or delta wing is typically adopted, but these wing designs are inefficient at low speeds. To operate at both high and low speeds with great effectiveness, the Tornado uses a variable-sweep wing. This approach had been adopted by earlier aircraft, such as the American Grumman F-14 Tomcat, which is the most similar in mission flexibility. The swing wing was also used by the older American General Dynamics F-111 Aardvark strike fighter and the Soviet Mikoyan-Gurevich MiG-23 fighter. The smaller Tornado has many similarities with the F-111; however, the Tornado differs in being a multi-role aircraft with more advanced onboard systems and avionics.
The level of wing sweep (i.e. the angle of the wings in relation to the fuselage) can be altered in flight at the pilot's control. The variable wing can adopt any sweep angle between 25 degrees and 67 degrees, with a corresponding speed range for each angle. Some Tornado ADVs were outfitted with an automatic wing-sweep system to reduce pilot workload. When the wings are swept back, the exposed wing area is lowered and drag is significantly decreased, which is conducive to performing high-speed low-level flight. The weapons pylons pivot with the angle of the variable-sweep wings so that the stores point in the direction of flight and do not hinder any wing positions.
In development, significant attention was given to the Tornado's short-field take-off and landing (STOL) performance. Germany, in particular, encouraged this design aspect. For shorter take-off and landing distances, the Tornado can sweep its wings forwards to the 25-degree position, and deploy its full-span flaps and leading edge slats to allow the aircraft to fly at lower speeds. These features, in combination with the thrust reverser-equipped engines, give the Tornado excellent low-speed handling and landing characteristics.
Avionics
The Tornado features a tandem-seat cockpit, crewed by a pilot and a navigator/weapons officer; both electromechanical and electro-optical controls are used to fly the aircraft and manage its systems. An array of dials and switches are mounted on either side of a centrally placed CRT monitor, controlling the navigational, communications, and weapons-control computers. BAE Systems developed the Tornado Advanced Radar Display Information System (TARDIS), a multi-function display, to replace the rear cockpit's Combined Radar and Projected Map Display; the RAF began installing TARDIS on the GR4 fleet in 2004.
The primary flight controls of the Tornado are a fly-by-wire hybrid, consisting of an analogue quadruplex Command and Stability Augmentation System (CSAS) connected to a digital Autopilot & Flight Director System (AFDS). In addition a level of mechanical reversion capacity was retained to safeguard against potential failure. To enhance pilot awareness, artificial feel was built into the flight controls, such as the centrally located stick. Because the Tornado's variable wings enable the aircraft to drastically alter its flight envelope, the artificial responses adjust automatically to wing profile changes and other changes to flight attitude. As a large variety of munitions and stores can be outfitted, the resulting changes to the aircraft's flight dynamics are routinely compensated for by the flight stability system.
The Tornado incorporates a combined navigation/attack Doppler radar that simultaneously scans for targets and conducts fully automated terrain-following for low-level flight operations. Being able to conduct all-weather, hands-off low-level flight was considered one of the core advantages of the Tornado. The Tornado ADV had a different radar system from the other variants, designated AI.24 Foxhunter, as it was designed for air defence operations; it was capable of tracking up to 20 targets at long range. The Tornado was one of the earliest aircraft to be fitted with a digital data bus for data transmission. A Link 16 JTIDS integration on the F3 variant enabled the exchange of radar and other sensory information with nearby friendly aircraft.
Some Tornado variants carry different avionics and equipment, depending on their mission. The Tornado ECR operated by Germany and Italy is devoted to Suppression of Enemy Air Defences (SEAD) missions. The Tornado ECR is equipped with an emitter-locator system (ELS) to detect radar use. German ECRs have a Honeywell infrared imaging system for reconnaissance flights. RAF and RSAF Tornados have the Laser Range Finder and Marked Target Seekers (LRMTS) for targeting laser-guided munitions. In 1991, the RAF introduced TIALD, allowing Tornado GR1s to laser-designate their own targets.
The GR1A and GR4A reconnaissance variants were equipped with TIRRS (Tornado Infrared Reconnaissance System), consisting of one SLIR (Sideways Looking InfraRed) sensor on each side of the fuselage forward of the engine intakes to capture oblique images, and a single IRLS (InfraRed LineScan) sensor mounted on the fuselage's underside to provide vertical images. TIRRS recorded images on six S-VHS video tapes. The newer RAPTOR reconnaissance pod replaced the built-in TIRRS system.
Armament and equipment
The Tornado is cleared to carry the majority of air-launched weapons in the NATO inventory, including various unguided and laser-guided bombs, anti-ship and anti-radiation missiles, as well as specialised weapons such as anti-personnel mines and anti-runway munitions. To improve survivability in combat, the Tornado is equipped with onboard countermeasures, ranging from flare and chaff dispensers to electronic countermeasure pods that can be mounted under the wings. Underwing fuel tanks and a buddy store aerial refuelling system that allows one Tornado to refuel another are available to extend the aircraft's range.
In the decades since the Tornado's introduction, all of the Tornado operators have undertaken various upgrade and modification programmes to allow new weapons to be used by their squadrons. Amongst the armaments that the Tornado has been adapted to deploy are the Enhanced Paveway and Joint Direct Attack Munition bombs, and modern cruise missiles such as the Taurus and Storm Shadow missiles. These upgrades have increased the Tornado's capabilities and combat accuracy. Precision weapons such as cruise missiles have replaced older munitions such as cluster bombs.
Strike variants have a limited air-to-air capability with AIM-9 Sidewinder or AIM-132 ASRAAM air-to-air missiles (AAMs). The Tornado ADV was outfitted with beyond visual range AAMs such as the Skyflash and AIM-120 AMRAAM missiles. The Tornado is armed with two Mauser BK-27 revolver cannon internally mounted underneath the fuselage; the Tornado ADV was only armed with one cannon. When the RAF GR1 aircraft were converted to GR4, the FLIR sensor replaced the left hand cannon, leaving only one; the GR1A reconnaissance variant gave up both its guns to make space for the sideways looking infra-red sensors. The Mauser BK-27 was developed specifically for the Tornado, but has since been used on several other European fighters, such as the Dassault/Dornier Alpha Jet, Saab JAS 39 Gripen, and Eurofighter Typhoon.
The Tornado is capable of delivering air-launched nuclear weapons. In 1979, Britain considered replacing its Polaris submarines with either the Trident submarines or the Tornado as the main bearer of its nuclear deterrent. Although the UK proceeded with Trident, several Tornado squadrons based in Germany were assigned to SACEUR to deter a major Soviet offensive with both conventional and nuclear weapons, namely the WE.177 nuclear bomb, which was retired in 1998. German and Italian Tornados are capable of delivering US B61 nuclear bombs, which are made available through NATO.
Engine
Britain considered the selection of Rolls-Royce to develop the advanced engine for the MRCA to be essential, and was strongly opposed to adopting an engine from an American manufacturer, to the point where the UK might have withdrawn over the issue. In September 1969, Rolls-Royce's RB199 engine was selected to power the MRCA. One advantage over the US competition was that a technology transfer between the partner nations had been agreed; the engine was to be developed and manufactured by a joint company, Turbo-Union. The programme was delayed by Rolls-Royce's entry into receivership in 1971; however, the nature of the multinational collaboration process helped avoid major disruption to the Tornado programme. Research from the supersonic airliner Concorde contributed to the development and final design of the RB199 and of the engine control units.
To operate efficiently across a wide range of conditions and speeds up to Mach 2, the RB199, like several other engines, makes use of variable intake ramps to control the air flow. The hydraulic system is pressurised by drawing power from either or both operational engines; the hydraulics are completely contained within the airframe, rather than integrated with the engines, to improve safety and maintainability. In case of double-engine or double-generator failure, the Tornado has a single-use battery capable of operating the fuel pump and hydraulics for up to 13 minutes.
Unusually among fighter aircraft, the RB199 is fitted with thrust reversers to decrease the distance required to land safely. To allow full deployment of the thrust reversers during landings, the yaw damper is connected to the nosewheel steering to provide greater stability.
In August 1974, the first RB199-powered flight of a prototype Tornado occurred, and the engine completed its qualification tests in late 1978. The final production-standard engine met both reliability and performance standards, though the development cost had been higher than predicted, in part due to the ambitious performance requirements. At the time of the Tornado's introduction to service, the engine's turbine blades suffered from a shorter lifespan than desired, which was rectified by design revisions applied to early-production engines. Several uprated engines were developed and used on the majority of Tornado ADVs and on Germany's Tornado ECRs. The DECU (Digital Engine Control Unit) is the current engine control unit for RB199 engines, superseding the analogue MECU (Main Engine Control Unit), also known as the CUE.
Upgrades
Being designed for low-level operations, the Tornado required modification to perform in the medium-level operations that the RAF adopted in the 1990s. The RAF's GR1 fleet was extensively re-manufactured as the Tornado GR4. Upgrades on Tornado GR4s included a forward-looking infrared sensor, a wide-angle head-up display (HUD), improved cockpit displays, night-vision goggle (NVG) capability, new avionics, and a Global Positioning System (GPS) receiver. The upgrade eased the integration of new weapons and sensors purchased in parallel, including the Storm Shadow cruise missile, the Brimstone anti-tank missile, Paveway III laser-guided bombs and the RAPTOR reconnaissance pod. The first flight of a Tornado GR4 was on 4 April 1997. The RAF accepted its first delivery on 31 October 1997, and deliveries were completed in 2003. In 2005, the RSAF opted to have its Tornado IDSs undergo a series of upgrades to become equivalent to the RAF's GR4 configuration. On 21 December 2007, BAE signed a £210m contract for CUSP, the Capability Upgrade Strategy (Pilot). This project would see RAF GR4/4As improved in two phases, starting with the integration of the Paveway IV bomb and a communications upgrade, followed by a new tactical datalink in Phase B.
Beginning in 2000, German IDS and ECR Tornados received the ASSTA 1 (Avionics System Software Tornado in Ada) upgrade. ASSTA 1 involved a replacement weapons computer, new GPS and Laser Inertial navigation systems. The new computer allowed the integration of the HARM III, HARM 0 Block IV/V and Taurus KEPD 350 missiles, the Rafael Litening II laser designator pod and GBU-24 Paveway III laser-guided bombs. The ASSTA 2 upgrade began in 2005, primarily consisting of several new digital avionics systems and a new ECM suite; these upgrades are to be only applied to 85 Tornados (20 ECRs and 65 IDSs), as the Tornado is being replaced in part by the Eurofighter Typhoon. The ASSTA 3 upgrade programme, started in 2008, will introduce support for the laser-targeted Joint Direct Attack Munition along with further software changes.
In January 2016, the Bild newspaper reported that the newest upgrade of the ASSTA suite, version 3.1, which replaces the monochrome CRT displays with colour multifunctional LCD screens, was interfering with the helmet-mounted night-vision optical displays worn by pilots, rendering German Tornado bombers deployed to Syria unusable for night missions. The defence ministry admitted that bright cockpit lights could be a distraction for pilots, and said that a solution would be implemented within a few weeks, but denied the need to fly night missions in Syria.
The TV TAB displays are used for route planning and for displaying imagery from the forward-looking infra-red (FLIR) sensors and from targeting pods such as TIALD (Thermal Imaging Airborne Laser Designator) and CLDP (Convertible Laser Designator Pod). The original MRCA TV TAB DU navigation display (part number V22.498.90) uses a green CRT as the picture source; the original price for one CRT display unit was €33,852.64. Because of the bright cockpit environment, the picture tube was driven at high brightness levels, causing wear. An Active Matrix Liquid Crystal Display (AMLCD) drop-fit replacement with a digital screen (NSN 5895-99-597-1323) was developed to replace the old, wear-sensitive CRT versions. The CRT versions are recognisable mainly by the two white domes at the top of the display, which contain the light sensors for automatic brightness regulation, and by their white buttons; the newer digital version is recognisable mainly by its black buttons with large white dots. The AMLCD version has a colour display instead of the original green monochrome display, and a new feature is a bezel that reduces the angle of view. The main goal of the AMLCD upgrade was a significant reduction in life-cycle costs, though the AMLCD versions are said to fail rather quickly, because their digital electronics are more sensitive and complex than the much simpler design of the original CRT display. Both versions are well engineered; for example, a diagnostic connector on the back panel allows quick troubleshooting.
The display unit itself is essentially a "dumb" device: the original unit is just a display and a keypad. To show a picture, separate video, vertical-synchronisation, and horizontal-synchronisation signals must be fed to the unit, since it has no internal electronics for separating the synchronisation from the video signal; a separate waveform generator (WFG) is needed to create the images used in the aircraft. To power the display unit, three-phase 115 VAC 400 Hz (including neutral) and a 28 VDC supply must be provided. The CRT version has a low-voltage power supply (LVPS) for the required low-voltage rails and a high-voltage power supply (HVPS) for the CRT picture tube. Since the AMLCD has no picture tube, the high voltages are not needed, and its mechanical and electrical design is completely different apart from the connections, mounting points, and functionality; it needs only 28 VDC to operate, but because a drop-fit replacement was mandatory, it includes a built-in conversion from three-phase 115 VAC 400 Hz to 28 VDC. By removing the rear three-phase conversion power-supply plug-in board and applying 28 VDC (<4.1 A) to the power-supply board, the device can be powered for avionics-enthusiast use. The AMLCD also has a built-in menu for selecting the aircraft type (GR1, GR4 or F3), a self-test, and display tests such as a grid pattern and colour bars.
BAE Systems announced that, in December 2013, it had test-flown a Tornado equipped with parts made by 3D printing. The parts included a protective cover for the radio, a landing-gear guard and air-intake door support struts. The test demonstrated the feasibility of making replacement parts quickly and cheaply at the air base hosting the Tornado. The company claimed that, with some costing less than £100 to make, 3D printing of parts had saved more than £300,000 which potentially could reach more than £1.2 million by 2017.
Operational history
German Air Force (Luftwaffe)
The first Tornado prototype made its first flight on 14 August 1974 from Ingolstadt Manching Airport in West Germany. Deliveries of production Tornados began on 27 July 1979. The total number of Tornados delivered to the German Air Force was 247, including 35 ECR variants. Originally Tornados equipped five fighter-bomber wings (Geschwader), with one tactical conversion unit and four front-line wings, replacing the Lockheed F-104 Starfighter. When one of the two Tornado wings of the German Navy was disbanded in 1994, its aircraft were used to re-equip a Luftwaffe reconnaissance wing formerly equipped with McDonnell Douglas RF-4E Phantoms.
Fourteen German Tornados undertook combat operations as part of NATO's campaign during the Bosnian War. The Tornados, operating from Piacenza, Italy, flew reconnaissance missions to survey damage inflicted by previous strikes and to scout new targets. These reconnaissance missions were reportedly responsible for a significant improvement in target selection throughout the campaign.
In 1999, German Tornados participated in Operation Allied Force, NATO airstrikes against the Federal Republic of Yugoslavia during the Kosovo War. This was Germany's first offensive air mission since World War II. The ECR aircraft escorted various allies' aircraft while carrying several AGM-88 HARM missiles to counter attempted use of radar against the allied aircraft. During the Kosovo hostilities, Germany's IDS Tornados routinely conducted reconnaissance flights to identify both enemy ground forces and civilian refugees within Yugoslavia. The German Tornados flew 2108 hours and 446 sorties, firing 236 HARM missiles at hostile targets.
In June 2007, a pair of Luftwaffe Tornados flew reconnaissance missions over an anti-globalisation demonstration during the 33rd G8 summit in Heiligendamm. Following the mission, the German Defence Ministry admitted one aircraft had broken the minimum flying altitude and that mistakes were made in the handling of security of the summit.
In 2007, a detachment of six Tornados of Aufklärungsgeschwader 51 "Immelmann" (the 51st reconnaissance wing) was deployed to Mazar-i-Sharif, northern Afghanistan, to support NATO forces. The decision to send Tornados to Afghanistan was controversial: one political party launched an unsuccessful legal bid to block the deployment as unconstitutional. In support of the Afghanistan mission, improvements to the Tornado's reconnaissance equipment were accelerated, enhancing its ability to detect hidden improvised explosive devices (IEDs). The German Tornados were withdrawn from Afghanistan in November 2010.
Defence cuts announced in March 2003 resulted in the decision to retire 90 Tornados from service with the Luftwaffe. This led to a reduction in its Tornado strength to four wings by September 2005. On 13 January 2004, the then German Defence Minister Peter Struck announced further major changes to the German armed forces. A major part of this announcement was the plan to cut the German fighter fleet from 426 in early 2004 to 265 by 2015. The German Tornado force was to be reduced to 85, with the type expected to remain in service with the Luftwaffe until 2025. The aircraft being retained have been undergoing a service life extension programme. Currently, the Luftwaffe operates Tornados with Tactical Wings Taktisches Luftwaffengeschwader 33 in Cochem/Büchel Air Base, Rhineland-Palatinate and with Taktisches Luftwaffengeschwader 51 "Immelmann" in Jagel, Schleswig-Holstein.
German Tornado aircrew training took place at Holloman Air Force Base in New Mexico, US, from January 1996 at the Taktisches Ausbildungskommando der Luftwaffe USA (TaktAusbKdoLw USA; Tactical Training Command of the Luftwaffe USA), which was responsible for training both German F-4 Phantom and Tornado crews. In 1999 the training command was renamed the Fliegerisches Ausbildungszentrum der Luftwaffe (FlgAusbZLw; Luftwaffe Training Centre). In March 2015, Defence Minister Ursula von der Leyen decided to continue this training in Germany. In September 2017, flight training at Holloman for the Tornado was discontinued and transferred to Taktisches Luftwaffengeschwader 51 in Jagel, with the US location command dissolved in 2019.
In April 2020, it was reported that the German defence ministry planned to replace its Tornado aircraft with a purchase of 30 Boeing F/A-18E/F Super Hornets, 15 EA-18G Growlers, and 55 Eurofighter Typhoons. The Super Hornet was selected due to its compatibility with nuclear weapons and availability of an electronic attack version. In March 2020, the Super Hornet was not certified for the B61 nuclear bombs, but Dan Gillian, head of Boeing's Super Hornet program, previously stated "We certainly think that we, working with the U.S. government, can meet the German requirements there on the [required] timeline."
In 2021, Airbus offered to replace the Luftwaffe's 90 ageing Tornado Interdiction and Strike (IDS) and Electronic Combat Reconnaissance (ECR) aircraft with 85 new Tranche 5-standard Eurofighters from 2030. In 2022, the German defence ministry announced that 35 Lockheed Martin F-35 Lightning IIs would replace the Tornado fleet in the nuclear sharing role, instead of the previously discussed 30 Boeing Super Hornets.
German Navy (Marineflieger)
In addition to the order made by the Luftwaffe, the German Navy's Marineflieger also received 112 of the IDS variant for the anti-shipping and marine reconnaissance roles, again replacing the Starfighter. These Tornados equipped two wings, each with a nominal strength of 48 aircraft. The principal anti-ship weapon was the AS.34 Kormoran anti-ship missile, initially supplemented by unguided bombs and BL755 cluster munitions, and later by AGM-88 HARM anti-radar missiles. Pods fitted with panoramic optical cameras and an infrared line scan were carried for the reconnaissance mission.
The end of the Cold War and the signing of the CFE Treaty led Germany to reduce the size of its armed forces, including the number of combat aircraft. To meet this need, one of the Marineflieger's Tornado wings was disbanded on 1 January 1994; its aircraft replaced the Phantoms of a Luftwaffe reconnaissance wing (Lake, World Air Power Journal, Volume 32, pp. 129, 132). The second wing was enlarged and continued in the anti-shipping, reconnaissance and anti-radar roles until it was disbanded in 2005, with its aircraft and duties passed on to the Luftwaffe.
Italian Air Force (Aeronautica Militare)
The first Italian prototype made its maiden flight on 5 December 1975 from Turin. The Aeronautica Militare received 100 Tornado IDSs (known as the A-200 in Italian service). 16 A-200s were subsequently converted to the ECR configuration; the first Italian Tornado ECR (known as the EA-200) was delivered on 27 February 1998. As a stop-gap measure, the Aeronautica Militare additionally operated 24 Tornado ADVs in the air defence role for 10 years; these were leased from the RAF to cover the gap between the retirement of the Lockheed F-104 Starfighter and the introduction of the Eurofighter Typhoon.
Italian Tornados, along with RAF Tornados, took part in the first Gulf War in 1991. Operazione Locusta saw eight Tornado IDS interdictors deployed from Gioia del Colle, Italy, to Al Dhafra, Abu Dhabi, as part of Italy's contribution to the coalition. During the conflict, one aircraft was lost to Iraqi anti-aircraft fire; the crew ejected safely and were captured by Iraqi forces. A total of 22 Italian Tornados were deployed in the NATO-organised Operation Allied Force over Kosovo in 1999; the A-200s served in the bombing role, while the EA-200s patrolled the combat region to suppress enemy anti-aircraft radars, firing 115 AGM-88 HARM missiles.
In 2000, with delays to the Eurofighter, the Aeronautica Militare began a search for another interim fighter. While the Tornado was considered, any long-term extension of the lease would have required an upgrade to RAF CSP standard and was thus not considered cost-effective. In February 2001, Italy announced its arrangement to lease 35 F-16s from the United States under the PEACE CAESAR programme. The Aeronautica Militare returned its Tornado ADVs to the RAF, with the final aircraft arriving at RAF St Athan on 7 December 2004. One aircraft was retained for static display at the Italian Air Force Museum.
In July 2002, Italy signed a contract with the NATO Eurofighter and Tornado Management Agency (NETMA) and Panavia to upgrade 18 A-200s, the first of which was received in 2003. The upgrade introduced improved navigation systems (integrated GPS and laser INS) and the ability to carry new weapons, including the Storm Shadow cruise missile, Joint Direct Attack Munition and Paveway III laser-guided bombs.
In response to anticipated violence during the 2010 Afghan elections, Italy, along with several other nations, increased its military commitment in Afghanistan, dispatching four A-200 Tornados to the region. Italy opted to extend the Tornado's service life at the expense of alternative ground-attack aircraft such as the AMX International AMX; in 2010 a major upgrade and life-extension programme was initiated to provide new digital displays, Link 16 communications capability, night-vision-goggle compatibility, and several other improvements. In the long term, Italy plans to replace the Tornado IDS/ECR fleet with the Lockheed Martin F-35 Lightning II, with the final Italian Tornado scheduled to be phased out in 2025. The Aeronautica Militare received the first of an eventual 15 upgraded Tornado EA-200s on 15 June 2013.
Italian Tornado A-200 and EA-200 aircraft participated in the enforcement of a UN no-fly zone during the 2011 military intervention in Libya. Various coalition aircraft operated from bases in Italy, including RAF Tornados. Italian military aircraft delivered a combined 710 guided bombs and missiles during the strikes against Libyan targets. Of these, Aeronautica Militare Tornados and AMX fighter-bombers released 550 guided bombs and missiles, and Italian Navy AV-8Bs delivered 160 guided bombs. Italian Tornados launched 20 to 30 Storm Shadow cruise missiles, with the rest consisting of Paveway and JDAM guided bombs.
On 19 August 2014, two Aeronautica Militare Tornados collided in mid-air during a training mission near Ascoli. On 14 November 2014, Italy announced it was sending four Tornado aircraft with 135 support staff to Ahmad al-Jaber Air Base and two other bases in Kuwait to participate in coalition operations against the Islamic State. The four aircraft were to be used for reconnaissance missions only ("Italy To Send 4 Tornados for Recon in Iraq", Defense News, 14 November 2014).
In October 2018, it was announced that the EA-200 Tornado had successfully completed operational testing of the AGM-88E AARGM, providing capabilities of an "expanded target set, counter-shutdown capability, advanced signals processing for improved detection and locating, geographic specificity, and a weapon impact-assessment broadcast capability."
Royal Air Force
Nicknamed the "Tonka" by the British, their first prototype (XX946) made its maiden flight on 30 October 1974 from BAC Warton. The first full production Tornado GR1 (ZA319) flew on 10 July 1979 from Warton. The first RAF Tornados (ZA320 and ZA322) were delivered to the TTTE at RAF Cottesmore on 1 July 1980. Crew that qualified from the TTTE went onto the Tornado Weapons Conversion Unit (TWCU), which formed on 1 August 1981 at RAF Honington, before being posted to a front-line squadron. No. IX (B) Squadron became the first front-line squadron in the world to operate the Tornado when it reformed on 1 June 1982, having received its first Tornado GR1 ZA586 on 6 January 1982.Napier 2017, p. 20. No. IX (B) Squadron was declared strike combat ready to the Supreme Allied Commander Europe (SACEUR) in January 1983. Two more squadrons were formed at RAF Marham in 1983 – No. 617 Squadron on 1 January and No. 27 Squadron on 12 August. The first RAF Tornado GR1 loss was on 27 September 1983 when ZA586 suffered complete electrical failure and crashed. Navigator Flt. Lt. Nigel Nickles ejected but the pilot Sqn. Ldr. Michael Stephens died in the crash after ordering ejection. In January 1984, the TWCU became No. 45 (Reserve) Squadron.
RAF Germany (RAFG) began receiving Tornados after the formation of No. XV (Designate) Squadron on 1 September 1983 at RAF Laarbruch, followed by No. 16 (Designate) Squadron in January 1984 (both previously Blackburn Buccaneer squadrons). They were joined by No. 20 (Designate) Squadron in May 1984 (previously operating the SEPECAT Jaguar GR1 from RAF Brüggen). Unlike the Tornado squadrons based in the UK, which were under the control of the British military, those stationed in RAFG were under the control of SACEUR, with the aircraft on Quick Reaction Alert (Nuclear), "QRA (N)", equipped with the WE.177 nuclear bomb. In the event of the Cold War going 'hot', the majority of RAFG Tornado squadrons were tasked with destroying Warsaw Pact airfields and surface-to-air missile (SAM) sites in East Germany, while No. 20 Squadron was given the separate responsibility of destroying bridges over the rivers Elbe and Weser to prevent Warsaw Pact forces from advancing. By early 1985, Nos. XV, 16 and 20 Squadrons at RAF Laarbruch had been declared strike combat ready to SACEUR.
Tornados began to arrive at RAF Brüggen in September 1984 with the formation of No. 31 (Designate) Squadron. No. 17 (Designate) Squadron was formed in December 1984, and the two Brüggen squadrons were joined by No. 14 (Designate) Squadron in mid-1985. No. IX (B) Squadron relocated from RAF Honington to RAF Brüggen on 1 October 1986, arriving in a diamond nine formation. The outcome of the Reykjavík Summit in October 1986 between Ronald Reagan and Mikhail Gorbachev led to the end of QRA (Nuclear) for the Tornado force. By the end of 1986, the Tornado GR1 fleet had been equipped with a Laser Ranger and Marked Target Seeker (LRMTS) under the nose and had begun to receive the BOZ-107 chaff and flare dispenser.
The Tornado made its combat debut as part of Operation Granby, the British contribution to the Gulf War in 1991. This saw 49 RAF Tornado GR1s deploy to Muharraq Airfield in Bahrain and to Tabuk Air Base and Dhahran Airfield in Saudi Arabia. 18 Tornado F3s were deployed to provide air cover, the threat of their long-range missiles acting as a deterrent to Iraqi pilots, who would avoid combat when approached. Early in the conflict, the GR1s targeted military airfields across Iraq, delivering a mixture of unguided bombs in loft-bombing attacks and specialised JP233 runway denial weapons. On 17 January 1991, the first Tornado to be lost was shot down by an Iraqi SA-16 missile following a failed low-level bombing run. On 19 January, another RAF Tornado was shot down during an intensive raid on Tallil Air Base. The impact of the Tornado strikes upon Iraqi airfields is difficult to determine (Clark 1993, p. 30). A total of six RAF Tornados were lost in the conflict: four while delivering unguided bombs, one after delivering JP233, and one while attempting to deliver laser-guided bombs.
The UK sent a detachment of Blackburn Buccaneer aircraft equipped with Westinghouse Electric Corporation Pave Spike laser designators, allowing Tornado GR1s to drop precision-guided weapons designated by the Buccaneers. A planned programme to fit GR1s with the GEC-Marconi TIALD laser designation system was rapidly accelerated to give the Tornado force the ability to self-designate targets. Author Claus-Christian Szejnmann declared that the TIALD pod enabled the GR1 to "achieve probably the most accurate bombing in the RAF's history" (Szejnmann 2009, p. 217). Although laser designation proved effective in the Gulf War, only 23 TIALD pods had been purchased by 2000, and shortages hindered combat operations over Kosovo.
After the war's opening phase, the GR1s switched to medium-level strike missions; typical targets included munition depots and oil refineries. Only the reconnaissance Tornado GR1As continued flying the low-altitude, high-speed profile, emerging unscathed despite the inherent danger of conducting pre-attack reconnaissance. After the conflict, Britain maintained a military presence in the Gulf: around six GR1s were based at Ali Al Salem airbase in Kuwait, contributing to the southern no-fly zone as part of Operation Southern Watch, while six additional GR1s participated in Operation Provide Comfort over northern Iraq.
The upgraded Tornado GR4 made its operational debut in Operation Southern Watch, patrolling Iraq's southern airspace from bases in Kuwait. Both Tornado GR1s and GR4s based at Ali Al Salem, Kuwait, took part in coalition strikes against Iraq's military infrastructure during Operation Desert Fox in 1998. In December 1998, an Iraqi anti-aircraft battery fired six to eight missiles at a patrolling Tornado; the battery was later attacked in retaliation, and no aircraft were lost in the incident. It was reported that during Desert Fox RAF Tornados successfully destroyed 75% of their targets, and that of the 36 missions planned, 28 were successfully completed.
The GR1 participated in the Kosovo War in 1999, with Tornados initially operating from RAF Brüggen, Germany, and later from Solenzara Air Base, Corsica. Experience in Kosovo led the RAF to procure AGM-65 Maverick missiles and Enhanced Paveway smart bombs for the Tornado. Following the Kosovo War, the GR1 was phased out as aircraft were upgraded to GR4 standard; the final upgraded aircraft was returned to the RAF on 10 June 2003.
The GR4 was used in Operation Telic, Britain's contribution to the 2003 invasion of Iraq. RAF Tornados flew alongside American aircraft in the opening phase of the war, striking Iraqi targets. Aiming to minimise civilian casualties, Tornados deployed the Storm Shadow cruise missile for the first time. Whilst 25% of the UK's air-launched weapons in Kosovo had been precision-guided, four years later in Iraq this proportion increased to 85%.
On 23 March 2003, a Tornado GR4 was shot down over Iraq by friendly fire from a US Patriot missile battery, killing both crew members ("RAF Tornado Downed by US Missile", BBC News, 23 March 2003). In July 2003, a US board of inquiry exonerated the battery's operators, citing the Tornado's "lack of functioning IFF (Identification Friend or Foe)" as a factor in the incident. Problems with the Patriot system were also suggested as a factor; multiple incidents of misidentification of friendly aircraft had occurred, including the fatal shootdown of a US Navy F/A-18 a few weeks after the Tornado's loss (Leung, Rebecca, "The Patriot Flawed", CBS News, 5 December 2007). Britain withdrew the last of its Tornados from Iraq in June 2009.
In early 2009, several GR4s arrived at Kandahar Airfield, Afghanistan, to replace the British Aerospace Harrier GR7/9 aircraft that had been deployed there since November 2004. In 2009, Paveway IV guided bombs were brought into service on the RAF's Tornados, having previously been used in Afghanistan by the Harrier II. In summer 2010, extra Tornados were dispatched to Kandahar for the duration of the 2010 Afghan election. British Tornados ended operations in Afghanistan in November 2014, having flown over 5,000 sorties, operating in pairs, over more than 33,500 hours, including 600 "shows of force" to deter Taliban attacks. During more than 70 engagements, 140 Brimstone missiles and Paveway IV bombs were deployed, and over 3,000 27 mm cannon shells were fired.
Prior to publication of the 2010 Strategic Defence and Security Review (SDSR), the Tornado's retirement was under consideration, with anticipated savings of £7.5 billion. The SDSR announced that the Tornado would be retained at the expense of the Harrier II, although numbers would decline in the transition to the Eurofighter Typhoon and the F-35 Lightning II (Hoyle, Craig, "UK confirms two Tornado GR4 squadrons will go by June", Flight International, 1 March 2011). By July 2013, 59 RAF GR4s were receiving the CUSP avionics upgrade, which achieved Initial Service Date (ISD) in March 2013.
On 18 March 2011, British Prime Minister David Cameron announced that Tornados and Typhoons would enforce a no-fly zone in Libya. That month, several Tornados flew strike missions against targets inside Libya in what were, according to Defence Secretary Liam Fox, "the longest range bombing missions conducted by the RAF since the Falklands conflict". A variety of munitions were used during Tornado operations over Libya, including laser-guided bombs and Brimstone missiles.
In August 2014, Tornado GR4s were deployed to RAF Akrotiri, Cyprus to support refugees sheltering from Islamic State militants in the Mount Sinjar region of Iraq. The decision came three days after the United States began conducting air attacks against the Islamic State. Tornados were pre-positioned to gather situational awareness in the region. On 27 September 2014, after Parliament approved airstrikes against Islamic State forces inside Iraq, two Tornados conducted their first armed reconnaissance mission in conjunction with coalition aircraft. The next day, two Tornados made the first airstrike on a heavy weapons post and an armoured vehicle, supporting Kurdish forces in northwest Iraq.
By 1 March 2015, eight RAF Tornados had been deployed to Akrotiri and had conducted 159 airstrikes against IS targets in Iraq. On 2 December 2015, Parliament approved air strikes in Syria as well as Iraq to combat the growing threat of ISIL; Tornados began bombing that evening. On 14 April 2018, four Tornado GR4s from RAF Akrotiri struck a Syrian military facility with Storm Shadow cruise missiles in response to a suspected chemical attack on Douma by the Syrian regime the previous week.
On 10 July 2018, nine Tornado GR4s from RAF Marham flew over London to celebrate 100 years of the RAF. During late 2018, the RAF commemorated the Tornado's service with three special schemes: ZG752 paid homage to its early years with a green/grey wraparound camouflage; ZG775 and ZD716 both wore schemes commemorating the final units to operate the type – No. IX (B) Squadron and No. 31 Squadron respectively. On 31 January 2019, the Tornado GR4 flew its last operational sorties in Operation Shader. The eight Tornados formerly stationed at RAF Akrotiri returned to RAF Marham in early February 2019, their duties assumed by six Typhoons. Between September 2014 and January 2019, RAF Tornados accounted for 31% of the estimated 4,315 casualties inflicted upon ISIL by the RAF during the operation.
To celebrate 40 years of service and to mark the type's retirement, several flypasts were carried out on 19, 20 and 21 February 2019 over locations including BAE Warton, RAF Honington and RAF Lossiemouth. On 28 February, nine Tornados flew out of RAF Marham for a diamond nine formation flypast over a graduation parade at RAF Cranwell before returning and carrying out a series of passes over RAF Marham. On 14 March 2019, the final flight of an RAF Tornado was carried out by Tornado GR4 ZA463, the oldest remaining Tornado, over RAF Marham during the disbandment parade of No. IX (B) Squadron and No. 31 Squadron. The Tornado GR4 was officially retired from RAF service on 1 April 2019, the 101st anniversary of the force. Post-retirement, five Tornados returned to RAF Honington by road for the Complex Air Ground Environment (CAGE), which simulates a Tornado flight line for training purposes.
On 2 July 2023, it was reported that pylons from decommissioned RAF Panavia Tornado GR4s had been fitted to Ukrainian Su-24s so that they could launch the Storm Shadow missile. These Su-24s can carry at least two Storm Shadow missiles at a time. It was reported that, unlike on the Tornado, missiles carried by the Su-24 required target coordinates to be entered before takeoff, while the aircraft was still on the ground.
Royal Saudi Air Force
In 1984, Royal Saudi Air Force pilots visited RAF Honington to fly and evaluate the Tornado GR1. On 25 September 1985, the UK and Saudi Arabia signed the Al Yamamah I contract, which included the sale of 48 IDS and 24 ADV Tornados. In October 1985, four RSAF crews joined the Tri-National Tornado Training Establishment at RAF Cottesmore. The first flight of a Saudi Tornado IDS (701) took place on 17 February 1986, and the first four IDS aircraft were delivered to King Abdul Aziz Air Base, Dhahran, on 26 March 1986. The first Saudi Tornado squadron formed was No. 7 Squadron, which had received 20 aircraft by 8 October 1987. The first Saudi ADV (2905) flew on 1 December 1988, and the first deliveries arrived in Saudi Arabia on 20 March 1989. The first two RSAF ADV squadrons, Nos. 29 and 34 Squadrons, were formed and reached their full strength of 12 aircraft each by 1990.
In the run-up to the Gulf War, the RSAF began to pool its Tornado squadrons, with the joint 24-aircraft ADV unit flying missions as part of Operation Desert Shield. Saudi Tornados took part in the Gulf War, with No. 7 Squadron carrying out its first mission on the night of 17 January 1991. In total, the RSAF flew 665 Tornado IDS sorties and 451 ADV sorties, losing one IDS (765) on the night of 19/20 January. In June 1993, the Al Yamamah II contract was signed, the main element of which was 48 additional IDS aircraft (Koch and Long 2003, pp. 81–82).
Following experience with both the Tornado and the McDonnell Douglas F-15E Strike Eagle, the RSAF discontinued low-level mission training in the F-15E in light of the Tornado's superior low-altitude flight performance. Ten of the Saudi Tornados were fitted with equipment for performing reconnaissance missions. The 22 Tornado ADVs were replaced by the Eurofighter Typhoon; the retired aircraft were purchased back by the UK.
By 2007, both the Sea Eagle anti-ship missile and the ALARM anti-radiation missile that previously equipped the RSAF's Tornados had been withdrawn from service. As of 2010, Saudi Arabia has signed several contracts for new weapon systems to be fitted to their Tornado and Typhoon fleets, such as the short range air-to-air IRIS-T missile, and the Brimstone and Storm Shadow missiles.
In September 2006, the Saudi government signed a contract worth £2.5 billion (US$4.7 billion) with BAE Systems to upgrade up to 80 RSAF Tornado IDS aircraft to keep them in service until 2020. The first RSAF Tornado was returned to BAE Systems Warton in December 2006 for upgrade under the "Tornado Sustainment Programme" (TSP) to "equip the IDS fleet with a range of new precision-guided weapons and enhanced targeting equipment, in many cases common with those systems already fielded by the UK's Tornado GR4s." In December 2007, the first RSAF aircraft to complete modernisation was returned to Saudi Arabia.
Starting in the first week of November 2009, RSAF Tornados, along with Saudi F-15s, performed air raids during the Shia insurgency in northern Yemen. It was the first time since Operation Desert Storm in 1991 that the RSAF had participated in a military operation over hostile territory. RSAF Tornados have since played a central role in the Saudi-led bombing campaign in Yemen.
On 7 January 2018, Houthi fighters claimed to have shot down a Saudi warplane conducting air raids over northern Yemen. According to Saudi reports, the downed aircraft was an RSAF Tornado on a combat mission over Saada province; it was lost for 'technical reasons', and both crew members were rescued.
On 12 July 2018, another RSAF Tornado crashed in the Asir region after returning from Saada, Yemen, due to a technical malfunction. On 14 February 2020, a Saudi Tornado was shot down by Houthi forces during a close air support mission for Saudi-allied Yemeni forces in the Yemeni Al Jawf governorate. The following day, the Saudi command confirmed the loss of a Tornado, while a video was released showing the downing by a two-stage surface-to-air missile. Both crew members ejected and were captured by the Houthis.
Variants
Tornado IDS
Tornado GR1
RAF IDS (interdictor/strike) variants were initially designated the Tornado GR1 with later modified aircraft designated Tornado GR1A, Tornado GR1B, Tornado GR4 and Tornado GR4A. The first of 228 GR1s was delivered on 5 June 1979, and the type entered service in the early 1980s.
Tornado GR1B
The Tornado GR1B was a specialised anti-shipping variant of the GR1, replacing the Blackburn Buccaneer. 26 aircraft were converted and based at RAF Lossiemouth, Scotland. Each aircraft was equipped to carry up to four Sea Eagle anti-ship missiles. At first the GR1B lacked the radar capability to track shipping, relying instead on the missile's seeker for target acquisition; later updates allowed target data to be passed from aircraft to missile.
Tornado GR1P
A single Tornado GR1 (ZA326, the eighth production aircraft) was re-designated GR1P after being partially rebuilt, using parts from different production batches, following a fire during engine testing. This aircraft served with the Royal Aircraft Establishment and the Empire Test Pilots' School until 2005, when it was retired as the last GR1 in service anywhere in the world.
Tornado GR4
The UK Ministry of Defence began studies for a GR1 Mid-Life Update (MLU) in 1984. The update to GR4 standard, approved in 1994, would improve capability in the medium-altitude role based on lessons learned from the GR1's performance in the 1991 Gulf War. British Aerospace (later BAE Systems) upgraded 142 Tornado GR1s to GR4 standard, beginning in 1996 and finishing in 2003. 59 RAF aircraft later received the CUSP avionics package, which integrated the Paveway IV bomb and a new secure communications module from Cassidian in Phase A, followed by the Tactical Information Exchange (TIE) datalink from General Dynamics in Phase B.
Tornado GR1A/GR4A
The GR1A was the reconnaissance variant operated by the RAF and RSAF, fitted with the TIRRS (Tornado Infra-Red Reconnaissance System) in place of the cannon. The RAF ordered 30 GR1As, 14 as GR1 rebuilds and 16 as new aircraft. When the Tornado GR1 fleet was upgraded to GR4 standard, GR1A aircraft were upgraded to GR4A standard. The switch from low-level to medium/high-level operations meant that the internal TIRRS was no longer used; as the GR4A's internal sensors were no longer essential, the RAF's Tactical Reconnaissance Wing operated both GR4A and GR4 aircraft.
Tornado ECR
Operated by Germany and Italy, the ECR (Electronic Combat/Reconnaissance) is a Tornado variant devoted to Suppression of Enemy Air Defenses (SEAD) missions; it was first delivered on 21 May 1990. The ECR has sensors to detect radar usage and is equipped with AGM-88 HARM anti-radiation missiles. The Luftwaffe's 35 ECRs were delivered new, while Italy received 16 converted IDS aircraft. Italian Tornado ECRs differ from the Luftwaffe aircraft in that they lack built-in reconnaissance capability and use RecceLite reconnaissance pods. Only Luftwaffe ECRs are equipped with the RB199 Mk.105 engine, which has a higher thrust rating, and the German ECRs do not carry a cannon. The RAF used the IDS version in the SEAD role instead of the ECR, and also modified several of its Tornado F3s to undertake the mission.
Tornado ADV
The Tornado ADV (air defence variant) was an interceptor variant of the Tornado, developed for the RAF (designated Tornado F2 or F3) and also operated by Saudi Arabia and Italy. The ADV had inferior agility to fighters like the McDonnell Douglas F-15 Eagle, but it was not intended as a dogfighter; rather, it was a long-endurance interceptor to counter the threat from Cold War bombers. Although the ADV had 80% parts commonality with the Tornado IDS, it had greater acceleration, improved RB199 Mk.104 engines, a stretched body, greater fuel capacity, the AI.24 Foxhunter radar, and software changes. It had only one cannon, to accommodate a retractable in-flight refuelling probe ("Turbo-Union: The Power for Peace and Freedom", turbounion.co.uk, retrieved 29 November 2011).
Operators
Luftwaffe: 210 IDS and 35 ECR Tornados delivered. By December 2018, 94 IDS and 28 ECR aircraft remained in service.
Marineflieger: 112 IDS Tornados delivered, retired in June 2005 with some aircraft being reallocated to the Luftwaffe.
Aeronautica Militare: 100 IDS A-200 Tornados delivered (18 converted to ECR EA-200s), 24 ADV F3 aircraft later leased from the RAF between 1995 and 2004. By December 2018, 70 A-200 and 5 EA-200 aircraft remained in service.
Royal Saudi Air Force: 96 IDS and 24 ADV Tornados delivered, ADVs retired in 2006. By December 2018, 81 IDS aircraft remained in service.
Former operators
Royal Air Force: 385 IDS GR1 and ADV F2/F3 Tornados delivered, including 230 GR1s (142 later upgraded to GR4s), 18 F2s and 147 F3s (retired in 2011). The GR4 was retired on 1 April 2019.
Aircraft on display
Australia
ZG791 Tornado GR4 on display at Aviation Heritage Museum, Bull Creek, Western Australia.
Austria
44+66 Tornado IDS on display at Groß-Siegharts, Lower Austria.
Bulgaria
44+13 Tornado IDS on display at the National Museum of Military History, Sofia.
Estonia
ZE256 Tornado F3 on display at the Estonian Aviation Museum, Lange.
Germany
D-9591 Tornado Prototype P.01 on display at Militärhistorisches Museum Flugplatz Berlin-Gatow.
XX948 Tornado Prototype P.06 on display at Hermeskeil.
43+55 Tornado IDS on display at Aeronauticum, Nordholz.
43+70 Tornado IDS on display at Büchel Air Base, Cochem.
43+86 Tornado IDS (MTU corporate design paint scheme) at MTU Aero Engines, Munich.
43+96 Tornado IDS on display at Wengerohr, Wittlich.
44+31 Tornado IDS (Blue Lightning paint scheme) of the 31st Fighter Bomber Wing "Boelcke" at Nörvenich AB.
44+35 Tornado IDS on display at the Cologne Bonn Airport, Cologne.
44+56 Tornado IDS on display at Fliegergeschichtliche Museum TG JaboG 34, Memmingen.
44+68 Tornado IDS on display at the Militärhistorisches Museum Flugplatz Berlin-Gatow.
44+84 Tornado IDS on display at Fürstenfeldbruck Air Base, Fürstenfeldbruck.
44+96 Tornado IDS gate guard at Schleswig Air Base in Jagel, near Schleswig.
44+97 Tornado IDS of the Einsatzgeschwader (Expeditionary Air Wing) Mazar-i-Sharif at the Deutsches Museum Flugwerft Schleissheim, Oberschleißheim.
45+30 Tornado IDS on display at Aeronauticum, Nordholz.
45+44 Tornado IDS gate guard at Büchel Air Base, Cochem.
Italy
MM7001 Pre-production Tornado P.14 on display at Cameri Air Base, Cameri.
MM7046 Tornado A-200 gate guard at Ghedi Air Base, Brescia.
MM7080 Tornado A-200 gate guard at Aviano Air Base, Pordenone.
MM7210 (ex-ZE836) Tornado F3 on display at the Italian Air Force Museum, Vigna di Valle.
Netherlands
XX947 Tornado Prototype P.03 on display at PS Aero, Baarlo, painted as 98+08 of the German Air Force.
Saudi Arabia
765 Tornado IDS on display at King Abdul-Aziz Air Base, Dhahran.
2915 Tornado ADV on display at the Royal Saudi Air Force Museum in Riyadh.
United Kingdom
XX946 Tornado Prototype P.02 on display at the RAF Museum Cosford, England.
XZ630 Pre-production Tornado P.12 on display as a GR4 on the parade ground at RAF Halton, Buckinghamshire, England.
XZ631 Tornado GR4 Prototype P.15 on display at Yorkshire Air Museum, Elvington, England.
ZA267 Tornado F2T on display at RAF Syerston, Nottinghamshire, England.
ZA319 Tornado GR1T on display at the Boscombe Down Aviation Collection, Wiltshire, England.
ZA326 Tornado GR1P on display at South Wales Aviation Museum, Vale of Glamorgan, Wales.
ZA354 Tornado GR1 on display at Yorkshire Air Museum, Elvington, England.
ZA357 Tornado GR1 on display at RAF Syerston, Nottinghamshire, England.
ZA398 Tornado GR4A on display at Cornwall Aviation Heritage Centre, Cornwall, England.
ZA399 Tornado GR1 on display in Knutsford, Cheshire, England.
ZA452 Tornado GR4 on display at Midland Air Museum, Coventry, England.
ZA457 Tornado GR1B on display at Royal Air Force Museum London, Hendon, England.
ZA463 Tornado GR4 on the gate at RAF Lossiemouth, Scotland.
ZA465 Tornado GR1 on display at Imperial War Museum, Duxford, England.
ZA469 Tornado GR4 on display at Imperial War Museum, Duxford, England.
ZA556 Tornado GR4 on display at the Defence Academy of the United Kingdom, Shrivenham, England.
ZA607 Tornado GR4 on the gate at MoD Sealand, Wales.
ZA614 Tornado GR4 on the gate at RAF Marham, Norfolk, England.
ZD744 Tornado GR4 on display at Montrose Air Station Heritage Centre, Angus, Scotland.
ZE204 Tornado F3 on display at the North East Land, Sea and Air Museums, Tyne and Wear, England.
ZE760 Tornado F3 on the gate at RAF Coningsby, Lincolnshire, England.
ZE887 Tornado F3 on display at Royal Air Force Museum London, Hendon, England.
ZE934 Tornado F3 on display at National Museum of Flight, East Fortune, Scotland.
ZE966 Tornado F3 on display at Tornado Heritage Centre, Hawarden Airport, Wales.
ZE967 Tornado F3 on the gate at Leuchars Station, Fife, Scotland.
ZG771 Tornado GR4 on display at Ulster Aviation Society, Lisburn, Northern Ireland.
ZH552 Tornado F3 on display at RAF Leeming, North Yorkshire, England.
United States
ZA374 Tornado GR1 on display at the National Museum of the United States Air Force, Wright-Patterson AFB, Ohio.
43+74 Tornado IDS of the German Navy, Marinefliegergeschwader 1 at the Pima Air & Space Museum, Tucson, Arizona.
43+75 Tornado IDS on display at Holloman Air Force Base, New Mexico.
45+11 Tornado IDS on display at the New Mexico Museum of Space History, New Mexico.
Specifications (Tornado GR4)
Popular culture
| Technology | Specific aircraft | null |
168701 | https://en.wikipedia.org/wiki/Open%20Database%20Connectivity | Open Database Connectivity | In computing, Open Database Connectivity (ODBC) is a standard application programming interface (API) for accessing database management systems (DBMS). The designers of ODBC aimed to make it independent of database systems and operating systems. An application written using ODBC can be ported to other platforms, both on the client and server side, with few changes to the data access code.
ODBC accomplishes DBMS independence by using an ODBC driver as a translation layer between the application and the DBMS. The application uses ODBC functions through an ODBC driver manager with which it is linked, and the driver passes the query to the DBMS. An ODBC driver can be thought of as analogous to a printer driver or other driver, providing a standard set of functions for the application to use, and implementing DBMS-specific functionality. An application that can use ODBC is referred to as "ODBC-compliant". Any ODBC-compliant application can access any DBMS for which a driver is installed. Drivers exist for all major DBMSs, many other data sources like address book systems and Microsoft Excel, and even for text or comma-separated values (CSV) files.
ODBC was originally developed by Microsoft and Simba Technologies during the early 1990s, and became the basis for the Call Level Interface (CLI) standardized by SQL Access Group in the Unix and mainframe field. ODBC retained several features that were removed as part of the CLI effort. Full ODBC was later ported back to those platforms, and became a de facto standard considerably better known than CLI. The CLI remains similar to ODBC, and applications can be ported from one platform to the other with few changes.
History
Before ODBC
The introduction of the mainframe-based relational database during the 1970s led to a proliferation of data access methods. Generally these systems operated together with a simple command processor that allowed users to type in English-like commands, and receive output. The best-known examples are SQL from IBM and QUEL from the Ingres project. These systems may or may not allow other applications to access the data directly, and those that did use a wide variety of methodologies. The introduction of SQL aimed to solve the problem of language standardization, although substantial differences in implementation remained.
Since the SQL language had only rudimentary programming features, users often wanted to use SQL within a program written in another language, say Fortran or C. This led to the concept of Embedded SQL, which allowed SQL code to be embedded within another language. For instance, a SQL statement like SELECT * FROM city could be inserted as text within C source code, and during compilation it would be converted into a custom format that directly called a function within a library that would pass the statement into the SQL system. Results returned from the statements would be interpreted back into C data formats like char * using similar library code.
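As a rough illustration, an Embedded SQL fragment in C might look like the following sketch. The exact directives vary by vendor precompiler, and the database citydb here is a hypothetical name; the EXEC SQL sections are rewritten into library calls by the precompiler before the C compiler ever sees them.

  #include <stdio.h>

  EXEC SQL INCLUDE SQLCA;              /* SQL communication area: reports status */

  int main(void) {
      EXEC SQL BEGIN DECLARE SECTION;
      char name[51];                   /* host variable receiving column data */
      EXEC SQL END DECLARE SECTION;

      EXEC SQL CONNECT TO citydb;      /* connection syntax differs by vendor */

      /* The statement is fixed at precompile time -- "static SQL" */
      EXEC SQL DECLARE c CURSOR FOR SELECT name FROM city;
      EXEC SQL OPEN c;

      for (;;) {
          EXEC SQL FETCH c INTO :name; /* result converted into the C char array */
          if (sqlca.sqlcode != 0)      /* non-zero: no more rows, or an error */
              break;
          printf("%s\n", name);
      }

      EXEC SQL CLOSE c;
      EXEC SQL DISCONNECT ALL;
      return 0;
  }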
There were several problems with the Embedded SQL approach. Like the different varieties of SQL, the Embedded SQL dialects varied widely, not only from platform to platform but even across languages on a single platform – a system that allowed calls into IBM Db2 would look very different from one that called into IBM's own SQL/DS. Another key problem with the Embedded SQL concept was that the SQL code could only be changed in the program's source code, so that even small changes to the query required considerable programmer effort. The SQL market referred to this as static SQL, versus dynamic SQL, which could be changed at any time – like the command-line interfaces that shipped with almost all SQL systems, or a programming interface that left the SQL as plain text until it was called. Dynamic SQL systems became a major focus for SQL vendors during the 1980s.
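By contrast, a dynamic SQL interface treats the statement as ordinary text until runtime, so it can be assembled or parameterised on the fly. A minimal sketch using the C call-level API of ODBC, the subject of this article, follows; the table city and an already-connected statement handle are assumptions of the example.

  #include <sql.h>
  #include <sqlext.h>

  /* The SQL text is just data here: it could equally be built up or read
     from the user at runtime -- the essence of dynamic SQL. */
  void list_big_cities(SQLHSTMT stmt, SQLINTEGER min_pop) {
      SQLPrepare(stmt,
          (SQLCHAR *)"SELECT name FROM city WHERE population > ?", SQL_NTS);

      /* Bind a runtime value to the '?' parameter marker */
      SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                       0, 0, &min_pop, 0, NULL);

      SQLExecute(stmt);   /* can be re-executed later with new values */
  }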
Older mainframe databases, and the newer microcomputer based systems that were based on them, generally did not have a SQL-like command processor between the user and the database engine. Instead, the data was accessed directly by the program – a programming library in the case of large mainframe systems, or a command line interface or interactive forms system in the case of dBASE and similar applications. Data from dBASE could not generally be accessed directly by other programs running on the machine. Those programs may be given a way to access this data, often through libraries, but it would not work with any other database engine, or even different databases in the same engine. In effect, all such systems were static, which presented considerable problems.
Early efforts
By the mid-1980s the rapid improvement in microcomputers, and especially the introduction of the graphical user interface and data-rich application programs like Lotus 1-2-3 led to an increasing interest in using personal computers as the client-side platform of choice in client–server computing. Under this model, large mainframes and minicomputers would be used primarily to serve up data over local area networks to microcomputers that would interpret, display and manipulate that data. For this model to work, a data access standard was a requirement – in the mainframe field it was highly likely that all of the computers in a shop were from one vendor and clients were computer terminals talking directly to them, but in the micro field there was no such standardization and any client might access any server using any networking system.
By the late 1980s there were several efforts underway to provide an abstraction layer for this purpose. Some of these were mainframe related, designed to allow programs running on those machines to translate between the variety of SQL's and provide a single common interface which could then be called by other mainframe or microcomputer programs. These solutions included IBM's Distributed Relational Database Architecture (DRDA) and Apple Computer's Data Access Language. Much more common, however, were systems that ran entirely on microcomputers, including a complete protocol stack that included any required networking or file translation support.
One of the early examples of such a system was Lotus Development's DataLens, initially known as Blueprint. Blueprint, developed for 1-2-3, supported a variety of data sources, including SQL/DS, DB2, FOCUS and a variety of similar mainframe systems, as well as microcomputer systems like dBase and the early Microsoft/Ashton-Tate efforts that would eventually develop into Microsoft SQL Server. Unlike the later ODBC, Blueprint was a purely code-based system, lacking anything approximating a command language like SQL. Instead, programmers used data structures to store the query information, constructing a query by linking many of these structures together. Lotus referred to these compound structures as query trees.
Around the same time, an industry team including members from Sybase (Tom Haggin), Tandem Computers (Jim Gray & Rao Yendluri) and Microsoft (Kyle Geiger) were working on a standardized dynamic SQL concept. Much of the system was based on Sybase's DB-Library system, with the Sybase-specific sections removed and several additions to support other platforms. DB-Library was aided by an industry-wide move from library systems that were tightly linked to a specific language, to library systems that were provided by the operating system and required the languages on that platform to conform to its standards. This meant that a single library could be used with (potentially) any programming language on a given platform.
The first draft of the Microsoft Data Access API was published in April 1989, about the same time as Lotus' announcement of Blueprint. In spite of Blueprint's great lead – it was running when MSDA was still a paper project – Lotus eventually joined the MSDA efforts as it became clear that SQL would become the de facto database standard. After considerable industry input, in the summer of 1989 the standard became SQL Connectivity (SQLC).
SAG and CLI
In 1988 several vendors, mostly from the Unix and database communities, formed the SQL Access Group (SAG) in an effort to produce a single basic standard for the SQL language. At the first meeting there was considerable debate over whether or not the effort should work solely on the SQL language itself, or attempt a wider standardization which included a dynamic SQL language-embedding system as well, what they called a Call Level Interface (CLI). While attending the meeting with an early draft of what was then still known as MS Data Access, Kyle Geiger of Microsoft invited Jeff Balboni and Larry Barnes of Digital Equipment Corporation (DEC) to join the SQLC meetings as well. SQLC was a potential solution to the call for the CLI, which was being led by DEC.
The new SQLC "gang of four", MS, Tandem, DEC and Sybase, brought an updated version of SQLC to the next SAG meeting in June 1990. The SAG responded by opening the standard effort to any competing design, but of the many proposals, only Oracle Corp had a system that presented serious competition. In the end, SQLC won the votes and became the draft standard, but only after large portions of the API were removed – the standards document was trimmed from 120 pages to 50 during this time. It was also during this period that the name Call Level Interface was formally adopted. In 1995 SQL/CLI became part of the international SQL standard, ISO/IEC 9075-3. The SAG itself was taken over by the X/Open group in 1996, and, over time, became part of The Open Group's Common Application Environment.
MS continued working with the original SQLC standard, retaining many of the advanced features that were removed from the CLI version. These included features like scrollable cursors, and metadata information queries. The commands in the API were split into groups; the Core group was identical to the CLI, the Level 1 extensions were commands that would be easy to implement in drivers, while Level 2 commands contained the more advanced features like cursors. A proposed standard was released in December 1991, and industry input was gathered and worked into the system through 1992, resulting in yet another name change to ODBC.
JET and ODBC
During this time, Microsoft was in the midst of developing their Jet database system. Jet combined three primary subsystems: an ISAM-based database engine (also, confusingly, named Jet), a C-based interface allowing applications to access that data, and a selection of driver dynamic-link libraries (DLLs) that allowed the same C interface to redirect input and output to other ISAM-based databases, like Paradox and xBase. Jet allowed using one set of calls to access common microcomputer databases in a fashion similar to Blueprint, by then renamed DataLens. However, Jet did not use SQL; like DataLens, the interface was in C and consisted of data structures and function calls.
The SAG standardization efforts presented an opportunity for Microsoft to adapt their Jet system to the new CLI standard. This would not only make Windows a premier platform for CLI development, but also allow users to use SQL to access both Jet and other databases as well. What was missing was the SQL parser that could convert those calls from their text form into the C-interface used in Jet. To solve this, MS partnered with PageAhead Software to use their existing query processor, SIMBA. SIMBA was used as a parser above Jet's C library, turning Jet into an SQL database. And because Jet could forward those C-based calls to other databases, this also allowed SIMBA to query other systems. Microsoft included drivers for Excel to turn its spreadsheet documents into SQL-accessible database tables.
Release and continued development
ODBC 1.0 was released in September 1992. At the time, there was little direct support for SQL databases (versus ISAM), and early drivers were noted for poor performance. Some of this was unavoidable due to the path that the calls took through the Jet-based stack; ODBC calls to SQL databases were first converted from Simba Technologies's SQL dialect to Jet's internal C-based format, then passed to a driver for conversion back into SQL calls for the database. Digital Equipment and Oracle both contracted Simba Technologies to develop drivers for their databases as well.
Circa 1993, OpenLink Software shipped one of the first independently developed third-party ODBC drivers, for the PROGRESS DBMS, and soon followed with their UDBC (a cross-platform API equivalent of ODBC and the SAG/CLI) SDK and associated drivers for PROGRESS, Sybase, Oracle, and other DBMS, for use on Unix-like OS (AIX, HP-UX, Solaris, Linux, etc.), VMS, Windows NT, OS/2, and other OS.
Meanwhile, the CLI standard effort dragged on, and it was not until March 1995 that the definitive version was finalized. By then, Microsoft had already granted Visigenic Software a source code license to develop ODBC on non-Windows platforms. Visigenic ported ODBC to the classic Mac OS, and a wide variety of Unix platforms, where ODBC quickly became the de facto standard. "Real" CLI is rare today. The two systems remain similar, and many applications can be ported from ODBC to CLI with few or no changes.
Over time, database vendors took over the driver interfaces and provided direct links to their products. Skipping the intermediate conversions to and from Jet or similar wrappers often resulted in higher performance. However, by then Microsoft had changed focus to their OLE DB concept (recently reinstated), which provided direct access to a wider variety of data sources, from address books to text files. Several new systems followed which further turned attention away from ODBC, including ActiveX Data Objects (ADO) and ADO.NET, which interacted more or less with ODBC over their lifetimes.
As Microsoft turned its attention away from working directly on ODBC, the Unix field was increasingly embracing it. This was propelled by two changes within the market: the introduction of graphical user interfaces (GUIs) like GNOME that created a need to access these sources in non-text form, and the emergence of open-source database systems like PostgreSQL and MySQL, initially on Unix. Apple's later adoption of the standard Unix-side iODBC package in Mac OS X 10.2 (Jaguar) (which OpenLink Software had been independently providing for Mac OS X 10.0 and even Mac OS 9 since 2001) further cemented ODBC as the standard for cross-platform data access.
Sun Microsystems used the ODBC system as the basis for their own open standard, Java Database Connectivity (JDBC). In most ways, JDBC can be considered a version of ODBC for the programming language Java instead of C. JDBC-to-ODBC bridges allow Java-based programs to access data sources through ODBC drivers on platforms lacking a native JDBC driver, although these are now relatively rare. Inversely, ODBC-to-JDBC bridges allow C-based programs to access data sources through JDBC drivers on platforms or from databases lacking suitable ODBC drivers.
ODBC today
ODBC remains in wide use today, with drivers available for most platforms and most databases. It is not uncommon to find ODBC drivers for database engines that are meant to be embedded, like SQLite, as a way to allow existing tools to act as front-ends to these engines for testing and debugging.
Version history
ODBC specifications
1.0: released in September 1992
2.0: 1994
2.5
3.0: 1995, John Goodson of Intersolv and Frank Pellow and Paul Cotton of IBM provided significant input to ODBC 3.0
3.5: 1997
3.8: 2009, with Windows 7
4.0: development announced in June 2016; first implemented with SQL Server 2017 (released September 2017), with additional desktop drivers in late 2018; final specification published on GitHub
Desktop Database Drivers
1.0 (1993–08): Used the SIMBA query processor produced by PageAhead Software.
2.0 (1994–12): Used with ODBC 2.0.
3.0 (1995–10): Supports Windows 95 and Windows NT Workstation or NT Server 3.51. Only 32-bit drivers were included in this release.
3.5 (1996–10): Supports double-byte character sets (DBCS) and accommodated the use of file data source names (DSNs). The Microsoft Access driver was released in a RISC version for use on Alpha platforms, for Windows 95/98 and Windows NT 3.51 and later operating systems.
4.0 (late 1998): Supports the Microsoft Jet engine's Unicode format, along with compatibility with the ANSI format of earlier versions.
Drivers and Managers
Drivers
ODBC is based on the device driver model, where the driver encapsulates the logic needed to convert a standard set of commands and functions into the specific calls required by the underlying system. For instance, a printer driver presents a standard set of printing commands, the API, to applications using the printing system. Calls made to those APIs are converted by the driver into the format used by the actual hardware, say PostScript or PCL.
In the case of ODBC, the drivers encapsulate many functions that can be broken down into several broad categories. One set of functions is primarily concerned with finding, connecting to and disconnecting from the DBMS that the driver talks to. A second set is used to send SQL commands from the ODBC system to the DBMS, converting or interpreting any commands that are not supported internally. For instance, a DBMS that does not support cursors can emulate this functionality in the driver. Finally, another set of commands, mostly used internally, converts data from the DBMS's internal formats to a set of standardized ODBC formats, which are based on the C language formats.
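A minimal sketch of these three groups in use, written in C against the standard ODBC headers. The DSN "SampleDSN", the credentials and the table city are hypothetical, and error checking is omitted for brevity.

  #include <stdio.h>
  #include <sql.h>
  #include <sqlext.h>

  int main(void) {
      SQLHENV env;  SQLHDBC dbc;  SQLHSTMT stmt;
      SQLCHAR name[64];
      SQLLEN  len;

      /* Group 1: find and connect to the data source */
      SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
      SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
      SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
      SQLConnect(dbc, (SQLCHAR *)"SampleDSN", SQL_NTS,
                 (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"secret", SQL_NTS);

      /* Group 2: pass SQL text through the driver to the DBMS */
      SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
      SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM city", SQL_NTS);

      /* Group 3: convert each result into standardized C formats */
      while (SQLFetch(stmt) == SQL_SUCCESS) {
          SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), &len);
          printf("%s\n", name);
      }

      SQLFreeHandle(SQL_HANDLE_STMT, stmt);
      SQLDisconnect(dbc);
      SQLFreeHandle(SQL_HANDLE_DBC, dbc);
      SQLFreeHandle(SQL_HANDLE_ENV, env);
      return 0;
  }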
An ODBC driver enables an ODBC-compliant application to use a data source, normally a DBMS. Some non-DBMS drivers exist, for such data sources as CSV files, by implementing a small DBMS inside the driver itself. ODBC drivers exist for most DBMSs, including Oracle, PostgreSQL, MySQL, Microsoft SQL Server (but not for the Compact aka CE edition), Mimer SQL, Sybase ASE, SAP HANA and IBM Db2. Because different technologies have different capabilities, most ODBC drivers do not implement all functionality defined in the ODBC standard. Some drivers offer extra functionality not defined by the standard.
Driver Manager
Device drivers are normally enumerated, set up and managed by a separate Manager layer, which may provide additional functionality. For instance, printing systems often include a spooling layer on top of the drivers, providing print spooling for any supported printer.
In ODBC the Driver Manager (DM) provides these features. The DM can enumerate the installed drivers and present this as a list, often in a GUI-based form.
But more important to the operation of the ODBC system is the DM's concept of a Data Source Name (DSN). DSNs collect additional information needed to connect to a specific data source, versus the DBMS itself. For instance, the same MySQL driver can be used to connect to any MySQL server, but the connection information to connect to a local private server is different from the information needed to connect to an internet-hosted public server. The DSN stores this information in a standardized format, and the DM provides this to the driver during connection requests. The DM also includes functionality to present a list of DSNs using human readable names, and to select them at run-time to connect to different resources.
The DM also includes the ability to save partially complete DSNs, with code and logic to ask the user for any missing information at runtime. For instance, a DSN can be created without a required password. When an ODBC application attempts to connect to the DBMS using this DSN, the system will pause and ask the user to provide the password before continuing. This frees the application developer from having to write such code, or to know which questions to ask. All of this is included in the driver and the DSNs.
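A sketch of how an application hands this off to the DM: with the SQL_DRIVER_COMPLETE option, SQLDriverConnect lets the Driver Manager prompt for whatever the DSN omits, such as the missing password described above. The DSN name "SalesDB" and the window handle are hypothetical.

  #include <sql.h>
  #include <sqlext.h>

  /* Connect via a possibly incomplete DSN; the Driver Manager fills the
     gaps by prompting the user (on platforms with GUI support). */
  SQLRETURN connect_with_prompt(SQLHDBC dbc, SQLHWND hwnd) {
      SQLCHAR outstr[1024];
      SQLSMALLINT outlen;

      return SQLDriverConnect(dbc, hwnd,
          (SQLCHAR *)"DSN=SalesDB;", SQL_NTS,   /* no UID/PWD supplied */
          outstr, sizeof(outstr), &outlen,
          SQL_DRIVER_COMPLETE);                 /* prompt if incomplete */
  }

A DSN-less connection is also possible by naming the driver directly in the connection string (e.g. DRIVER={Some Driver};SERVER=...;), in which case the Driver Manager bypasses the DSN lookup entirely.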
Bridging configurations
A bridge is a special kind of driver: a driver that uses another driver-based technology.
ODBC-to-JDBC (ODBC-JDBC) bridges
An ODBC-JDBC bridge consists of an ODBC driver which uses the services of a JDBC driver to connect to a database. This driver translates ODBC function-calls into JDBC method-calls. Programmers usually use such a bridge when they lack an ODBC driver for some database but have access to a JDBC driver. Examples: OpenLink ODBC-JDBC Bridge, SequeLink ODBC-JDBC Bridge.
JDBC-to-ODBC (JDBC-ODBC) bridges
A JDBC-ODBC bridge consists of a JDBC driver which employs an ODBC driver to connect to a target database. This driver translates JDBC method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks a JDBC driver but is accessible through an ODBC driver. Sun Microsystems included one such bridge in the JVM, but viewed it as a stop-gap measure while few JDBC drivers existed (the built-in JDBC-ODBC bridge was dropped from the JVM in Java 8). Sun never intended its bridge for production environments and generally recommended against its use. Independent data-access vendors deliver JDBC-ODBC bridges which support current standards for both mechanisms and far outperform the JVM's built-in bridge. Examples: OpenLink JDBC-ODBC Bridge, SequeLink JDBC-ODBC Bridge, ZappySys JDBC-ODBC Bridge.
OLE DB-to-ODBC bridges
An OLE DB-ODBC bridge consists of an OLE DB provider which uses the services of an ODBC driver to connect to a target database. This provider translates OLE DB method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks an OLE DB provider but is accessible through an ODBC driver. Microsoft ships one, MSDASQL.DLL, as part of the MDAC system component bundle, together with other database drivers, to simplify development in COM-aware languages (e.g. Visual Basic). Third parties have also developed such bridges, notably OpenLink Software, whose 64-bit OLE DB Provider for ODBC Data Sources filled the gap when Microsoft initially deprecated this bridge for their 64-bit OS. (Microsoft later relented, and 64-bit Windows starting with Windows Server 2008 and Windows Vista SP1 has shipped with a 64-bit version of MSDASQL.) Examples: OpenLink OLEDB-ODBC Bridge, SequeLink OLEDB-ODBC Bridge.
ADO.NET-to-ODBC bridges
An ADO.NET-ODBC bridge consists of an ADO.NET provider which uses the services of an ODBC driver to connect to a target database. This provider translates ADO.NET method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks an ADO.NET provider but is accessible through an ODBC driver. Microsoft ships one as part of the MDAC system component bundle, together with other database drivers, to simplify development in C#. Third parties have also developed such bridges. Examples: OpenLink ADO.NET-ODBC Bridge, SequeLink ADO.NET-ODBC Bridge.
| Technology | Software development: General | null |
168738 | https://en.wikipedia.org/wiki/Uvula | Uvula | The uvula (: uvulas or uvulae), also known as the palatine uvula or staphyle, is a conic projection from the back edge of the middle of the soft palate, composed of connective tissue containing a number of racemose glands, and some muscular fibers. It also contains many serous glands, which produce thin saliva. It is only found in humans.
Structure
Muscle
The muscular part of the uvula (musculus uvulae) shortens and broadens the uvula. This changes the contour of the posterior part of the soft palate. This change in contour allows the soft palate to adapt closely to the posterior pharyngeal wall to help close the nasopharynx during swallowing.
Its muscles are controlled by the pharyngeal branch of the vagus nerve.
Variation
A bifid or bifurcated uvula is a split or cleft uvula. Newborns with cleft palate often also have a split uvula. The bifid uvula results from incomplete fusion of the palatine shelves, but is considered only a slight form of clefting. Bifid uvulas have less muscle than a normal uvula, which may cause recurring problems with middle ear infections. While swallowing, the soft palate is pushed backwards, preventing food and drink from entering the nasal cavity; if the soft palate cannot touch the back of the throat while swallowing, food and drink can enter the nasal cavity. Splitting of the uvula occurs infrequently but is the most common form of mouth and nose area cleavage among newborns. Bifid uvula occurs in about 2% of the general population, although some populations have a higher incidence, such as Native Americans, who have a 10% rate.
Bifid uvula is a common symptom of the rare genetic syndrome Loeys–Dietz syndrome, which is associated with an increased risk of aortic aneurysm.
Function
During swallowing, the soft palate and the uvula move together to close off the nasopharynx, and prevent food from entering the nasal cavity.
It has also been proposed that the abundant amount of thin saliva produced by the uvula serves to keep the throat well lubricated.
It has a function in speech as well. In many languages, a range of consonant sounds known as uvular consonants are articulated by creating a constriction of airflow between the uvula and the back of the tongue. The voiced uvular trill, written ⟨ʀ⟩ in the International Phonetic Alphabet, is one example; it is used in French, Arabic and Hebrew, among other languages. It has been suggested that the uvula is an accessory speech organ.
Stimulation of the uvula also causes the gag reflex to initiate. This is often a problem for people with uvula piercings, and a common method of inducing vomiting.
It also acts as a food sensor and guard that aids in breathing between mouthfuls, stopping small pieces of food from being inhaled, which could lead to choking.
Clinical significance
Inflammation
At times, the mucous membrane around the uvula may swell, causing the uvula to expand to 3–5 times its normal size. This condition is known as uvulitis. When the uvula touches the throat or tongue, it can cause sensations like gagging or choking, even though there is no foreign matter present. This can cause problems with breathing, talking, and eating.
There are many theories about what causes the uvula to swell, including dehydration (e.g. from arid weather); excessive smoking or other inhaled irritants; snoring; allergic reaction; or a viral or bacterial infection. An aphthous ulcer which has formed on the uvula can also cause swelling and discomfort.
If the swelling is caused by dehydration, drinking fluids may improve the condition. If the cause is a bacterial infection, gargling salt water may help. However, it can also be a sign of other problems. Some people with a history of recurring uvulitis carry an epinephrine autoinjector to counteract symptoms of an attack. A swollen uvula is not normally life-threatening and subsides in a short time, typically within a day.
Snoring and sleep apnea
The uvula can also contribute to snoring or heavy breathing during sleep; having an elongated uvula can cause vibrations that lead to snoring. In some cases this can lead to sleep apnea, which may be treated by removal of the uvula or part of it if necessary, an operation known as uvulopalatopharyngoplasty (commonly referred to as UPPP, or UP3). However, this operation can also cause sleep apnea if scar tissue forms and the airspace in the velopharynx is decreased. The success of UPPP as a treatment for sleep apnea is unknown, but some research has shown 40–60% effectiveness in reducing symptoms. Typically apnea subsides for the short term, but returns over the medium to long term, and sometimes is worse than it was before the UPPP.
Velopharyngeal insufficiency
In a small number of people, the uvula does not close properly against the back of the throat, causing a condition known as velopharyngeal insufficiency. This causes "nasal" (or more properly "hyper-nasal") speech, where extra air comes down the nose, and the speaker is unable to say certain consonants correctly, for example producing "b" like "m".
Nasal regurgitation
During swallowing, the soft palate and the uvula move superiorly to close off the nasopharynx, preventing food from entering the nasal cavity. When this process fails, the result is called nasal regurgitation. It is common in people with VPI, the myositides, and neuromuscular disease. Regurgitation of fluids in this way may also occur if a particularly high volume of liquid is regurgitated, or during vigorous coughing, for example after the accidental inhalation of water. Because coughing prevents the uvula from blocking the nasopharynx, liquid may be expelled back through the nose.
Society and culture
In some parts of Africa, including Somalia, Ethiopia and Eritrea, the uvula or a section of it is ritually removed by a traditional healer. In this case, the uvula may be noticeably shortened. It is not thought to contribute to velopharyngeal inadequacy, except in cases where the tonsils have also been removed.
History
Etymology
In Latin, ūvula means "little grape", the diminutive form of ūva "grape" (of unknown origin). A swollen uvula was called ūva.
| Biology and health sciences | Human anatomy | Health |
168752 | https://en.wikipedia.org/wiki/Palate | Palate | The palate is the roof of the mouth in humans and other mammals. It separates the oral cavity from the nasal cavity. A similar structure is found in crocodilians, but in most other tetrapods, the oral and nasal cavities are not truly separated. The palate is divided into two parts, the anterior, bony hard palate and the posterior, fleshy soft palate (or velum).
Structure
Innervation
The maxillary nerve branch of the trigeminal nerve supplies sensory innervation to the palate.
Development
The hard palate forms before birth.
Variation
If the fusion is incomplete, a cleft palate results.
Function in humans
When functioning in conjunction with other parts of the mouth, the palate produces certain sounds, particularly velar, palatal, palatalized, postalveolar, alveolopalatal, and uvular consonants.
History
Etymology
The English synonyms palate and palatum, and also the related adjective palatine (as in palatine bone), are all from the Latin palatum via Old French palat, words that, like their English derivatives, refer to the "roof" of the mouth.
The Latin word palatum is of unknown (possibly Etruscan) ultimate origin and served also as a source to the Latin word meaning palace, palatium, from which other senses of palatine and the English word palace derive, and not the other way round.
As the roof of the mouth was once considered the seat of the sense of taste, palate can also refer to this sense itself, as in the phrase "a discriminating palate". By further extension, the flavor of a food (particularly beer or wine) may be called its palate, as when a wine is said to have an oaky palate.
| Biology and health sciences | Gastrointestinal tract | Biology |
168848 | https://en.wikipedia.org/wiki/Human%20skeleton | Human skeleton | The human skeleton is the internal framework of the human body. It is composed of around 270 bones at birth – this total decreases to around 206 bones by adulthood after some bones get fused together. The bone mass in the skeleton makes up about 14% of the total body weight (ca. 10–11 kg for an average person) and reaches maximum mass between the ages of 25 and 30. The human skeleton can be divided into the axial skeleton and the appendicular skeleton. The axial skeleton is formed by the vertebral column, the rib cage, the skull and other associated bones. The appendicular skeleton, which is attached to the axial skeleton, is formed by the shoulder girdle, the pelvic girdle and the bones of the upper and lower limbs.
The human skeleton performs six major functions: support, movement, protection, production of blood cells, storage of minerals, and endocrine regulation.
The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis exist. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. The human female pelvis is also different from that of males in order to facilitate childbirth. Unlike most primates, human males do not have penile bones.
Divisions
Axial
The axial skeleton (80 bones) is formed by the vertebral column (32–34 bones; the number of the vertebrae differs from human to human as the lower 2 parts, sacral and coccygeal bone may vary in length), a part of the rib cage (12 pairs of ribs and the sternum), and the skull (22 bones and 7 associated bones).
The upright posture of humans is maintained by the axial skeleton, which transmits the weight from the head, the trunk, and the upper extremities down to the lower extremities at the hip joints. The bones of the spine are supported by many ligaments. The erector spinae muscles also provide support and are useful for balance.
Appendicular
The appendicular skeleton (126 bones) is formed by the pectoral girdles, the upper limbs, the pelvic girdle or pelvis, and the lower limbs. Their functions are to make locomotion possible and to protect the major organs of digestion, excretion and reproduction.
Functions
The skeleton serves six major functions: support, movement, protection, production of blood cells, storage of minerals and endocrine regulation.
Support
The skeleton provides the framework which supports the body and maintains its shape. The pelvis and its associated ligaments and muscles provide a floor for the pelvic structures. Without the rib cage, costal cartilages, and intercostal muscles, the lungs would collapse.
Movement
The joints between bones allow movement, some allowing a wider range of movement than others, e.g. the ball and socket joint allows a greater range of movement than the pivot joint at the neck. Movement is powered by skeletal muscles, which are attached to the skeleton at various sites on bones. Muscles, bones, and joints provide the principal mechanics for movement, all coordinated by the nervous system.
It is believed that a reduction in human bone density in prehistoric times reduced the agility and dexterity of human movement. The shift from hunting to agriculture caused human bone density to decrease significantly.
Protection
The skeleton helps to protect many vital internal organs from being damaged.
The skull protects the brain
The vertebrae protect the spinal cord.
The rib cage, spine, and sternum protect the lungs, heart and major blood vessels.
Blood cell production
The skeleton is the site of haematopoiesis, the development of blood cells that takes place in the bone marrow. In children, haematopoiesis occurs primarily in the marrow of the long bones such as the femur and tibia. In adults, it occurs mainly in the pelvis, cranium, vertebrae, and sternum.
Storage
The bone matrix can store calcium and is involved in calcium metabolism, and bone marrow can store iron in ferritin and is involved in iron metabolism. However, bone is not entirely made of calcium, but a mixture of chondroitin sulfate and hydroxyapatite, the latter making up 70% of a bone. Hydroxyapatite is in turn composed of 39.8% calcium, 41.4% oxygen, 18.5% phosphorus, and 0.2% hydrogen by mass. Chondroitin sulfate is a sugar made up primarily of oxygen and carbon.
Endocrine regulation
Bone cells release a hormone called osteocalcin, which contributes to the regulation of blood sugar (glucose) and fat deposition. Osteocalcin increases both insulin secretion and sensitivity, in addition to boosting the number of insulin-producing cells and reducing stores of fat.
Sex differences
Anatomical differences between human males and females are highly pronounced in some soft tissue areas, but tend to be limited in the skeleton. The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis are exhibited across human populations. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. It is not known whether or to what extent those differences are genetic or environmental.
Skull
A variety of gross morphological traits of the human skull demonstrate sexual dimorphism, such as the median nuchal line, mastoid processes, supraorbital margin, supraorbital ridge, and the chin.
Dentition
Human inter-sex dental dimorphism centers on the canine teeth, but it is not nearly as pronounced as in the other great apes.
Long bones
Long bones are generally larger in males than in females within a given population. Muscle attachment sites on long bones are often more robust in males than in females, reflecting a difference in overall muscle mass and development between sexes. Sexual dimorphism in the long bones is commonly characterized by morphometric or gross morphological analyses.
Pelvis
The human pelvis exhibits greater sexual dimorphism than other bones, specifically in the size and shape of the pelvic cavity, ilia, greater sciatic notches, and the sub-pubic angle. The Phenice method is commonly used by anthropologists to determine the sex of an unidentified human skeleton, with 96% to 100% accuracy in some populations.
Women's pelvises are wider in the pelvic inlet and are wider throughout the pelvis to allow for childbirth. The sacrum in the female pelvis is curved inwards to give the child a "funnel" to assist in its pathway from the uterus to the birth canal.
Clinical significance
There are many classified skeletal disorders. One of the most common is osteoporosis. Also common is scoliosis, a side-to-side curve in the back or spine, often creating a pronounced "C" or "S" shape when viewed on an X-ray of the spine. This condition is most apparent during adolescence and is most common in females.
Arthritis
Arthritis is a disorder of the joints. It involves inflammation of one or more joints. When affected by arthritis, the joint or joints affected may be painful to move, may move in unusual directions or may be completely immobile. The symptoms of arthritis vary between types of arthritis. The most common form of arthritis, osteoarthritis, can affect both the larger and smaller joints of the human skeleton. The cartilage in the affected joints will degrade, soften and wear away. This decreases the mobility of the joints and decreases the space between bones where cartilage should be.
Osteoporosis
Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined by the World Health Organization in women as a bone mineral density 2.5 standard deviations below peak bone mass, relative to the age and sex-matched average, as measured by dual energy X-ray absorptiometry, with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who may have developed osteoporosis and be at risk of fracture.
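Written out as a formula, this is a sketch of the usual T-score convention; note that the WHO threshold compares the measured value with a young-adult reference mean, whereas the age-matched comparison defines the separate Z-score:

T = \frac{\mathrm{BMD}_{\text{measured}} - \mathrm{BMD}_{\text{young-adult mean}}}{\mathrm{SD}_{\text{young-adult}}}, \qquad T \le -2.5 \;\text{defines osteoporosis.}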
Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates, strontium ranelate, and osteoporosis may be one factor considered when commencing hormone replacement therapy.
History
India
The Sushruta Samhita, composed between the 6th century BCE and 5th century CE, speaks of 360 bones. Books on Salya-Shastra (surgical science) know of only 300. The text then lists the total of 300 as follows: 120 in the extremities (e.g. hands, legs), 117 in the pelvic area, sides, back, abdomen and breast, and 63 in the neck and upwards. The text then explains how these subtotals were empirically verified. The discussion shows that the Indian tradition nurtured diversity of thought, with the Sushruta school reaching its own conclusions and differing from the Atreya-Caraka tradition. The differences in the count of bones in the two schools are partly because the Charaka Samhita includes 32 tooth sockets in its count, and partly because of their difference of opinion on how and when to count a cartilage as bone (which both sometimes do, unlike modern anatomy).
Hellenistic world
The study of bones in ancient Greece started under the Ptolemaic kings due to their link to Egypt. Herophilos, through his work studying dissected human corpses in Alexandria, is credited as the pioneer of the field. His works are lost but are often cited by notable persons in the field such as Galen and Rufus of Ephesus. Galen himself, though, did little dissection and relied on the work of others like Marinus of Alexandria, as well as his own observations of gladiator cadavers and animals. According to Katherine Park, in medieval Europe dissection continued to be practiced, contrary to the popular understanding that such practices were taboo and thus completely banned. The practice of holy autopsy, as in the case of Clare of Montefalco, further supports the claim. Alexandria continued as a center of anatomy under Islamic rule, with Ibn Zuhr a notable figure. Chinese understandings are divergent, as the closest corresponding concept in the medicinal system seems to be the meridians, although given that Hua Tuo regularly performed surgery, there may be some distance between medical theory and actual understanding.
Renaissance
Leonardo da Vinci made studies of the skeleton, albeit unpublished in his time. Many artists, Antonio del Pollaiuolo being the first, performed dissections for better understanding of the body, although they concentrated mostly on the muscles. Vesalius, regarded as the founder of modern anatomy, authored the book De humani corporis fabrica, which contained many illustrations of the skeleton and other body parts, correcting some theories dating from Galen, such as the lower jaw being a single bone instead of two. Various other figures like Alessandro Achillini also contributed to the further understanding of the skeleton.
18th century
Since as early as 1797, the death goddess or folk saint known as Santa Muerte has been represented as a skeleton.
| Biology and health sciences | Human anatomy | null |
168859 | https://en.wikipedia.org/wiki/Skull | Skull | The skull, or cranium, is typically a bony enclosure around the brain of a vertebrate. In some fish and amphibians, the skull is made of cartilage. The skull is at the head end of the vertebrate.
In the human the skull comprises two prominent parts: the neurocranium and the facial skeleton, which evolved from the first pharyngeal arch. The skull forms the frontmost portion of the axial skeleton and is a product of cephalization and vesicular enlargement of the brain, housing several special sense structures such as the eyes, ears, nose and tongue and, in fish, specialized tactile organs such as barbels near the mouth.
The skull is composed of three types of bone: cranial bones, facial bones, and ossicles. It is made up of a number of fused flat and irregular bones. The cranial bones are joined at firm fibrous junctions called sutures, and the skull contains many foramina, fossae, processes, and sinuses. In zoology, the openings in the skull are called fenestrae, the most prominent of which is the foramen magnum, where the brainstem passes through to join the spinal cord.
In human anatomy, the neurocranium (or braincase) is further divided into the calvarium and the endocranium, together forming a cranial cavity that houses the brain; the interior periosteum forms part of the dura mater. The facial skeleton, or splanchnocranium, includes the mandible, its largest bone. The mandible articulates with the temporal bones of the neurocranium at the paired temporomandibular joints. The skull itself articulates with the spinal column at the atlanto-occipital joint.
Functions of the skull include physical protection for the brain, providing attachments for neck muscles, facial muscles and muscles of mastication, providing fixed eye sockets and outer ears (ear canals and auricles) to enable stereoscopic vision and sound localisation, forming nasal and oral cavities that allow better olfaction, taste and digestion, and contributing to phonation by acoustic resonance within the cavities and sinuses. In some animals such as ungulates and elephants, the skull also has a function in anti-predator defense and sexual selection by providing the foundation for horns, antlers and tusks.
The English word skull is probably derived from Old Norse, while the Latin word cranium comes from the Greek root κρανίον (kranion). The human skull fully develops two years after birth.
Structure
Humans
The human skull is the bone structure that forms the head in the human skeleton. It supports the structures of the face and forms a cavity for the brain. Like the skulls of other vertebrates, it protects the brain from injury.
The skull consists of three parts, of different embryological origin—the neurocranium, the sutures, and the facial skeleton. The neurocranium (or braincase) forms the protective cranial cavity that surrounds and houses the brain and brainstem. The upper areas of the cranial bones form the calvaria (skullcap). The facial skeleton (membranous viscerocranium) is formed by the bones supporting the face, and includes the mandible.
The bones of the skull are joined by fibrous joints known as sutures—synarthrodial (immovable) joints formed by bony ossification, with Sharpey's fibres permitting some flexibility. Sometimes there can be extra bone pieces within the suture known as Wormian bones or sutural bones. Most commonly these are found in the course of the lambdoid suture.
Bones
The human skull is generally considered to consist of 22 bones—eight cranial bones and fourteen facial skeleton bones. In the neurocranium these are the occipital bone, two temporal bones, two parietal bones, the sphenoid, ethmoid and frontal bones.
The bones of the facial skeleton (14) are the vomer, two inferior nasal conchae, two nasal bones, two maxillae, the mandible, two palatine bones, two zygomatic bones, and two lacrimal bones. Some sources count a paired bone as one, or count the maxilla as two bones (as its parts); some sources include the hyoid bone or the three ossicles of the middle ear, the malleus, incus, and stapes, but the overall general consensus of the number of bones in the human skull is the stated twenty-two.
Some of these bones—the occipital, parietal, frontal, in the neurocranium, and the nasal, lacrimal, and vomer, in the facial skeleton are flat bones.
Cavities and foramina
The skull also contains sinuses, air-filled cavities known as paranasal sinuses, and numerous foramina. The sinuses are lined with respiratory epithelium. Their known functions are lessening the weight of the skull, adding resonance to the voice, and warming and moistening the air drawn into the nasal cavity.
The foramina are openings in the skull. The largest of these is the foramen magnum, of the occipital bone, that allows the passage of the spinal cord as well as nerves and blood vessels.
Processes
The many processes of the skull include the mastoid process and the zygomatic processes.
Other vertebrates
Fenestrae
Bones
The jugal is a skull bone found in most reptiles, amphibians and birds. In mammals, the jugal is often called the zygomatic bone or malar bone.
The prefrontal bone is a bone that separates the lacrimal and frontal bones in many tetrapod skulls.
Fish
The skull of fish is formed from a series of only loosely connected bones. Lampreys and sharks possess only a cartilaginous endocranium, with the upper and lower jaws being separate elements. Bony fishes have additional dermal bone, forming a more or less coherent skull roof in lungfish and holostean fish. The lower jaw defines the chin.
The simpler structure is found in jawless fish, in which the cranium is normally represented by a trough-like basket of cartilaginous elements only partially enclosing the brain, and associated with the capsules for the inner ears and the single nostril. Distinctively, these fish have no jaws.
Cartilaginous fish, such as sharks and rays, also have simple, and presumably primitive, skull structures. The cranium is a single structure forming a case around the brain, enclosing the lower surface and the sides, but always at least partially open at the top as a large fontanelle. The most anterior part of the cranium includes a forward plate of cartilage, the rostrum, and capsules to enclose the olfactory organs. Behind these are the orbits, and then an additional pair of capsules enclosing the structure of the inner ear. Finally, the skull tapers towards the rear, where the foramen magnum lies immediately above a single condyle, articulating with the first vertebra. There are, in addition, at various points throughout the cranium, smaller foramina for the cranial nerves. The jaws consist of separate hoops of cartilage, almost always distinct from the cranium proper.
In ray-finned fish, there has also been considerable modification from the primitive pattern. The roof of the skull is generally well formed, and although the exact relationship of its bones to those of tetrapods is unclear, they are usually given similar names for convenience. Other elements of the skull, however, may be reduced; there is little cheek region behind the enlarged orbits, and little, if any bone in between them. The upper jaw is often formed largely from the premaxilla, with the maxilla itself located further back, and an additional bone, the symplectic, linking the jaw to the rest of the cranium.
Although the skulls of fossil lobe-finned fish resemble those of the early tetrapods, the same cannot be said of those of the living lungfishes. The skull roof is not fully formed, and consists of multiple, somewhat irregularly shaped bones with no direct relationship to those of tetrapods. The upper jaw is formed from the pterygoids and vomers alone, all of which bear teeth. Much of the skull is formed from cartilage, and its overall structure is reduced.
Tetrapods
The skulls of the earliest tetrapods closely resembled those of their ancestors amongst the lobe-finned fishes. The skull roof is formed of a series of plate-like bones, including the maxilla, frontals, parietals, and lacrimals, among others. It overlies the endocranium, corresponding to the cartilaginous skull in sharks and rays. The various separate bones that compose the temporal bone of humans are also part of the skull roof series. A further plate composed of four pairs of bones forms the roof of the mouth; these include the vomer and palatine bones. The base of the cranium is formed from a ring of bones surrounding the foramen magnum and a median bone lying further forward; these are homologous with the occipital bone and parts of the sphenoid in mammals. Finally, the lower jaw is composed of multiple bones, only the most anterior of which (the dentary) is homologous with the mammalian mandible.
In living tetrapods, a great many of the original bones have either disappeared or fused into one another in various arrangements.
Birds
Birds have a diapsid skull, as in reptiles, with a prelacrimal fossa (present in some reptiles). The skull has a single occipital condyle. The skull consists of five major bones: the frontal (top of head), parietal (back of head), premaxillary and nasal (top beak), and the mandible (bottom beak). The skull of a normal bird usually weighs about 1% of the bird's total bodyweight. The eye occupies a considerable amount of the skull and is surrounded by a sclerotic eye-ring, a ring of tiny bones. This characteristic is also seen in reptiles.
Amphibians
Living amphibians typically have greatly reduced skulls, with many of the bones either absent or wholly or partly replaced by cartilage. In mammals and birds, in particular, modifications of the skull occurred to allow for the expansion of the brain. The fusion between the various bones is especially notable in birds, in which the individual structures may be difficult to identify.
Development
The skull is a complex structure; its bones are formed both by intramembranous and endochondral ossification. The skull roof bones, comprising the bones of the facial skeleton and the sides and roof of the neurocranium, are dermal bones formed by intramembranous ossification, though the temporal bones are formed by endochondral ossification. The endocranium, the bones supporting the brain (the occipital, sphenoid, and ethmoid), is largely formed by endochondral ossification. Thus the frontal and parietal bones are purely membranous. The geometry of the skull base and its fossae (the anterior, middle and posterior cranial fossae) changes rapidly. The anterior cranial fossa changes especially during the first trimester of pregnancy, and skull defects can often develop during this time.
At birth, the human skull is made up of 44 separate bony elements. During development, many of these bony elements gradually fuse together into solid bone (for example, the frontal bone). The bones of the roof of the skull are initially separated by regions of dense connective tissue called fontanelles. There are six fontanelles: one anterior (or frontal), one posterior (or occipital), two sphenoid (or anterolateral), and two mastoid (or posterolateral). At birth, these regions are fibrous and moveable, which is necessary for birth and later growth. This growth can put a large amount of tension on the "obstetrical hinge", which is where the squamous and lateral parts of the occipital bone meet. A possible complication of this tension is rupture of the great cerebral vein. As growth and ossification progress, the connective tissue of the fontanelles is invaded and replaced by bone, creating sutures. The five sutures are the two squamous sutures, one coronal, one lambdoid, and one sagittal suture. The posterior fontanelle usually closes by eight weeks, but the anterior fontanelle can remain open for up to eighteen months. The anterior fontanelle is located at the junction of the frontal and parietal bones; it is a "soft spot" on a baby's forehead. Careful observation will show that a baby's heart rate can be counted by watching the pulse beating softly through the anterior fontanelle.
The skull in the neonate is large in proportion to other parts of the body. The facial skeleton is one seventh of the size of the calvaria. (In the adult it is half the size). The base of the skull is short and narrow, though the inner ear is almost adult size.
Clinical significance
Craniosynostosis is a condition in which one or more of the fibrous sutures in an infant skull prematurely fuses, and changes the growth pattern of the skull. Because the skull cannot expand perpendicular to the fused suture, it grows more in the parallel direction. Sometimes the resulting growth pattern provides the necessary space for the growing brain, but results in an abnormal head shape and abnormal facial features. In cases in which the compensation does not effectively provide enough space for the growing brain, craniosynostosis results in increased intracranial pressure leading possibly to visual impairment, sleeping impairment, eating difficulties, or an impairment of mental development.
A copper beaten skull is a phenomenon wherein intense intracranial pressure disfigures the internal surface of the skull. The name comes from the fact that the inner skull has the appearance of having been beaten with a ball-peen hammer, such as is often used by coppersmiths. The condition is most common in children.
Injuries and treatment
Injuries to the brain can be life-threatening. Normally the skull protects the brain from damage through its high resistance to deformation; the skull is one of the least deformable structures found in nature, needing a force of about 1 ton to reduce its diameter by 1 cm. In some cases of head injury, however, there can be raised intracranial pressure through mechanisms such as a subdural haematoma. In these cases, the raised intracranial pressure can cause herniation of the brain out of the foramen magnum ("coning") because there is no space for the brain to expand; this can result in significant brain damage or death unless an urgent operation is performed to relieve the pressure. This is why patients with concussion must be watched extremely carefully. Repeated concussions may also compromise the protective covering that the skull bones provide for the brain.
Dating back to Neolithic times, a skull operation called trepanning was sometimes performed. This involved drilling a burr hole in the cranium. Examination of skulls from this period reveals that the patients sometimes survived for many years afterward. It seems likely that trepanning was also performed purely for ritualistic or religious reasons. Nowadays this procedure is still used but is normally called a craniectomy.
In March 2013, for the first time in the U.S., researchers replaced a large percentage of a patient's skull with a precision 3D-printed polymer implant. About nine months later, the first complete cranium replacement with a 3D-printed plastic insert was performed on a Dutch woman. She had been suffering from hyperostosis, which increased the thickness of her skull and compressed her brain.
A study conducted in 2018 by the researchers of Harvard Medical School in Boston, funded by National Institutes of Health (NIH), suggested that instead of travelling via blood, there are "tiny channels" in the skull through which the immune cells combined with the bone marrow reach the areas of inflammation after an injury to the brain tissues.
Transgender procedures
Surgical alteration of sexually dimorphic skull features may be carried out as part of facial feminization surgery or facial masculinization surgery; these reconstructive surgical procedures can alter sexually dimorphic facial features to bring them closer in shape and size to those typical of the desired sex. These procedures can be an important part of the treatment of transgender people for gender dysphoria.
Society and culture
Artificial cranial deformation is a largely historical practice of some cultures. Cords and wooden boards would be used to apply pressure to an infant's skull and alter its shape, sometimes quite significantly. This procedure would begin just after birth and would be carried on for several years.
Osteology
Like the face, the skull and teeth can also indicate a person's life history and origin. Forensic scientists and archaeologists use quantitative and qualitative traits to estimate what the bearer of the skull looked like. When a significant number of bones are found, such as at Spitalfields in the UK and the Jōmon shell mounds in Japan, osteologists can use traits such as the proportions of length, height and width to determine the relationships of the population under study with other living or extinct populations.
The German physician Franz Joseph Gall in around 1800 formulated the theory of phrenology, which attempted to show that specific features of the skull are associated with certain personality traits or intellectual capabilities of its owner. His theory is now considered to be pseudoscientific.
Sexual dimorphism
In the mid-nineteenth century, anthropologists found it crucial to distinguish between male and female skulls. An anthropologist of the time, James McGrigor Allan, argued that the female brain was similar to that of an animal. This allowed anthropologists to declare that women were in fact more emotional and less rational than men. McGrigor then concluded that women's brains were more analogous to those of infants, thus deeming them inferior. To further these claims of female inferiority and silence the feminists of the time, other anthropologists joined in on the studies of the female skull. These cranial measurements are the basis of what is known as craniology, and were also used to draw a connection between women and black people.
Research has shown that while in early life there is little difference between male and female skulls, in adulthood male skulls tend to be larger and more robust than female skulls, which are lighter and smaller, with a cranial capacity about 10 percent less than that of the male. Later studies have shown that women's skulls are slightly thicker overall, suggesting that men may be more susceptible to head injury than women, although other studies show that men's skulls are slightly thicker in certain areas. Some studies show that females are more susceptible to concussion than males. Men's skulls have also been shown to maintain density with age, which may aid in preventing head injury, while women's skull density slightly decreases with age.
Male skulls typically have more prominent supraorbital ridges, a more prominent glabella, and more prominent temporal lines. Female skulls generally have rounder orbits and narrower jaws. Male skulls on average have larger, broader palates, squarer orbits, larger mastoid processes, larger sinuses, and larger occipital condyles than those of females. Male mandibles typically have squarer chins and thicker, rougher muscle attachments than female mandibles.
Craniometry
The cephalic index is the ratio of the width of the head to its length (front to back), multiplied by 100. The index is also used to categorize animals, especially dogs and cats. The width is usually measured just below the parietal eminence, and the length from the glabella to the occipital point. (A small worked sketch follows the lists below.)
Humans may be:
Dolichocephalic — long-headed
Mesaticephalic — medium-headed
Brachycephalic — short-headed
The vertical cephalic index refers to the ratio between the height of the head multiplied by 100 and divided by the length of the head.
Humans may be:
Chamaecranic — low-skulled
Orthocranic — medium high-skulled
Hypsicranic — high-skulled
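As a minimal illustration of the arithmetic, the following C# sketch computes both indices from hypothetical measurements; the classification cutoffs (roughly 75 and 80 for the cephalic index) are commonly cited approximate values and should be treated as assumptions, since exact boundaries vary between sources.

using System;

class CephalicIndexDemo
{
    // Both indices share the same form: (measure / length) * 100.
    static double Index(double measureMm, double lengthMm) => measureMm / lengthMm * 100.0;

    static void Main()
    {
        // Hypothetical measurements in millimetres.
        double length = 190.0; // glabella to occipital point
        double width  = 150.0; // just below the parietal eminence
        double height = 132.0;

        double cephalic = Index(width, length);
        double vertical = Index(height, length);

        // Approximate cutoffs; exact boundary values vary by source.
        string category = cephalic < 75.0 ? "dolichocephalic"
                        : cephalic < 80.0 ? "mesaticephalic"
                        : "brachycephalic";

        Console.WriteLine($"Cephalic index: {cephalic:F1} ({category})");
        Console.WriteLine($"Vertical cephalic index: {vertical:F1}");
    }
}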
Terminology
Chondrocranium, a primitive cartilaginous skeletal structure
Endocranium
Epicranium
Pericranium, a membrane that lines the outer surface of the cranium
History
Trepanning, a practice in which a hole is created in the skull, has been described as the oldest surgical procedure for which there is archaeological evidence, found in the forms of cave paintings and human remains. At one burial site in France dated to 6500 BCE, 40 out of 120 prehistoric skulls found had trepanation holes.
| Biology and health sciences | Skeletal system | null |
168865 | https://en.wikipedia.org/wiki/Corollary | Corollary | In mathematics and logic, a corollary is a theorem of less importance which can be readily deduced from a previous, more notable statement. A corollary could, for instance, be a proposition which is incidentally proved while proving another proposition; it might also be used more casually to refer to something which naturally or incidentally accompanies something else.
Overview
In mathematics, a corollary is a theorem connected by a short proof to an existing theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. More formally, proposition B is a corollary of proposition A, if B can be readily deduced from A or is self-evident from its proof.
In many cases, a corollary corresponds to a special case of a larger theorem, which makes the theorem easier to use and apply, even though its importance is generally considered to be secondary to that of the theorem. In particular, B is unlikely to be termed a corollary if its mathematical consequences are as significant as those of A. A corollary might have a proof that explains its derivation, even though such a derivation might be considered rather self-evident on some occasions (e.g., the Pythagorean theorem as a corollary of the law of cosines).
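A two-line derivation makes the parenthetical example concrete, assuming standard notation: a triangle with sides a, b, c, where γ is the angle opposite c:

c^2 = a^2 + b^2 - 2ab\cos\gamma, \qquad \gamma = 90^\circ \implies \cos\gamma = 0 \implies c^2 = a^2 + b^2.

Setting γ to a right angle eliminates the cosine term, so the Pythagorean theorem follows from the law of cosines with essentially no further argument, which is what makes it a corollary in the sense described above.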
Peirce's theory of deductive reasoning
Charles Sanders Peirce held that the most important division of kinds of deductive reasoning is that between corollarial and theorematic. He argued that while all deduction ultimately depends in one way or another on mental experimentation on schemata or diagrams, in corollarial deduction:
"It is only necessary to imagine any case in which the premises are true in order to perceive immediately that the conclusion holds in that case"
while in theorematic deduction:
"It is necessary to experiment in the imagination upon the image of the premise in order from the result of such experiment to make corollarial deductions to the truth of the conclusion."
Peirce also held that corollarial deduction matches Aristotle's conception of direct demonstration, which Aristotle regarded as the only thoroughly satisfactory demonstration, while theorematic deduction is:
The kind more prized by mathematicians
Peculiar to mathematics
Involves in its course the introduction of a lemma, or at least a definition, uncontemplated in the thesis (the proposition that is to be proved); in remarkable cases that definition is of an abstraction that "ought to be supported by a proper postulate."
| Mathematics | Basics | null |
168907 | https://en.wikipedia.org/wiki/Na%C3%AFve%20physics | Naïve physics | Naïve physics or folk physics is the untrained human perception of basic physical phenomena. In the field of artificial intelligence the study of naïve physics is a part of the effort to formalize the common knowledge of human beings.
Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism.
Naïve physics is characterized by a mostly intuitive understanding humans have about objects in the physical world. Certain notions of the physical world may be innate.
Examples
Some examples of naïve physics include commonly understood, intuitive, or everyday-observed rules of nature:
What goes up must come down
A dropped object falls straight down
A solid object cannot pass through another solid object
A vacuum sucks things towards it
An object is either at rest or moving, in an absolute sense
Two events are either simultaneous or they are not
Many of these and similar ideas formed the basis for the first works in formulating and systematizing physics by Aristotle and the medieval scholastics in Western civilization. In the modern science of physics, they were gradually contradicted by the work of Galileo, Newton, and others. The idea of absolute simultaneity survived until 1905, when the special theory of relativity and its supporting experiments discredited it.
Psychological research
The increasing sophistication of technology makes possible more research on knowledge acquisition. Researchers measure physiological responses such as heart rate and eye movement in order to quantify the reaction to a particular stimulus. Concrete physiological data is helpful when observing infant behavior, because infants cannot use words to explain things (such as their reactions) the way most adults or older children can.
Research in naïve physics relies on technology to measure eye gaze and reaction time in particular. Through observation, researchers know that infants get bored looking at the same stimulus after a certain amount of time. That boredom is called habituation. When an infant is sufficiently habituated to a stimulus, he or she will typically look away, alerting the experimenter to his or her boredom. At this point, the experimenter will introduce another stimulus. The infant will then dishabituate by attending to the new stimulus. In each case, the experimenter measures the time it takes for the infant to habituate to each stimulus.
As an example of the use of this method, research by Susan Hespos and colleagues studied five-month-old infants' responses to the physics of liquids and solids. Infants in this research were shown liquid being poured from one glass to another until they were habituated to the event. That is, they spent less time looking at this event. Then, the infants were shown an event in which the liquid appeared to have turned to a solid, which tumbled from the glass rather than flowed. The infants looked longer at the new event. That is, they dishabituated.
Researchers infer that the longer the infant takes to habituate to a new stimulus, the more it violates his or her expectations of physical phenomena. When an adult observes an optical illusion that seems physically impossible, they will attend to it until it makes sense.
It is commonly believed that our understanding of physical laws emerges strictly from experience. But research shows that infants, who do not yet have such expansive knowledge of the world, have the same extended reaction to events that appear physically impossible. Such studies hypothesize that all people are born with an innate ability to understand the physical world.
Smith and Casati (1994) have reviewed the early history of naïve physics, and especially the role of the Italian psychologist Paolo Bozzi.
Types of experiments
The basic experimental procedure of a study on naïve physics involves three steps: prediction of the infant's expectation, violation of that expectation, and measurement of the results. As mentioned above, the physically impossible event holds the infant's attention longer, indicating surprise when expectations are violated.
Solidity
An experiment that tests an infant's knowledge of solidity involves the impossible event of one solid object passing through another. First, the infant is shown a flat, solid screen rotating from 0° to 180° in an arc. Next, a solid block is placed in the path of the screen, preventing it from completing its full range of motion. The infant habituates to this event, as it is what anyone would expect. Then, the experimenter creates the impossible event, and the solid screen appears to pass through the solid block. The infant is confused by the event and attends longer than in the possible-event trial.
Occlusion
An occlusion event tests the knowledge that an object exists even if it is not immediately visible. Jean Piaget originally called this concept object permanence. When Piaget formed his developmental theory in the 1950s, he claimed that object permanence is learned, not innate. The children's game peek-a-boo is a classic example of this phenomenon, and one which obscures the true grasp infants have on permanence. To disprove this notion, an experimenter designs an impossible occlusion event. The infant is shown a block and a transparent screen. The infant habituates, then a solid panel is placed in front of the objects to block them from view. When the panel is removed, the block is gone, but the screen remains. The infant is confused because the block has disappeared, indicating that they understand that objects maintain location in space and do not simply disappear.
Containment
A containment event tests the infant's recognition that an object that is bigger than a container cannot fit completely into that container. Elizabeth Spelke, one of the psychologists who founded the naïve physics movement, identified the continuity principle, which conveys an understanding that objects exist continuously in time and space. Both occlusion and containment experiments hinge on the continuity principle. In the experiment, the infant is shown a tall cylinder and a tall cylindrical container. The experimenter demonstrates that the tall cylinder fits into the tall container, and the infant is bored by the expected physical outcome. The experimenter then places the tall cylinder completely into a much shorter cylindrical container, and the impossible event confuses the infant. Extended attention demonstrates the infant's understanding that containers cannot hold objects that exceed them in height.
Baillargeon's research
The published findings of Renée Baillargeon brought innate knowledge to the forefront in psychological research. Her research method centered on the visual preference technique. Baillargeon and her followers studied how infants show preference to one stimulus over another. Experimenters judge preference by the length of time an infant will stare at a stimulus before habituating. Researchers believe that preference indicates the infant's ability to discriminate between the two events.
| Physical sciences | Physics basics: General | Physics |
168927 | https://en.wikipedia.org/wiki/Somatic%20cell%20nuclear%20transfer | Somatic cell nuclear transfer | In genetics and developmental biology, somatic cell nuclear transfer (SCNT) is a laboratory strategy for creating a viable embryo from a body cell and an egg cell. The technique consists of taking a denucleated oocyte (egg cell) and implanting a donor nucleus from a somatic (body) cell. It is used in both therapeutic and reproductive cloning. In 1996, Dolly the sheep became famous for being the first successful case of the reproductive cloning of a mammal. In January 2018, a team of scientists in Shanghai announced the successful cloning of two female crab-eating macaques (named Zhong Zhong and Hua Hua) from foetal nuclei.
"Therapeutic cloning" refers to the potential use of SCNT in regenerative medicine; this approach has been championed as an answer to the many issues concerning embryonic stem cells (ESCs) and the destruction of viable embryos for medical use, though questions remain on how homologous the two cell types truly are.
Introduction
Somatic cell nuclear transfer is a technique for cloning in which the nucleus of a somatic cell is transferred to the cytoplasm of an enucleated egg. After the transfer, cytoplasmic factors reprogram the somatic nucleus and the reconstructed cell behaves as a zygote. The egg then develops to the blastocyst stage, and embryonic stem cells can be created from the inner cell mass of the blastocyst. The first mammal to be developed by this technique was Dolly the sheep, in 1996.
Early history
Although Dolly is generally recognized as the first animal to be cloned using this technique, earlier instances of SCNT exist from as early as the 1950s. In particular, the research of Sir John Gurdon in 1958 entailed the cloning of Xenopus laevis utilizing the principles of SCNT. In short, the experiment consisted of inducing a female specimen to ovulate, at which point her eggs were harvested. From here, the egg was enucleated using ultraviolet irradiation to disable the egg's pronucleus. At this point, the prepared egg cell and the nucleus from the donor cell were combined, and incubation and eventual development into a tadpole proceeded. Gurdon's application of SCNT differs from more modern applications, and even from applications used on other model systems of the time (i.e., Rana pipiens), in his usage of UV irradiation to enucleate the egg instead of using a pipette to remove the nucleus from the egg.
Process
The process of somatic cell nuclear transfer involves two different cells. The first is a female gamete, known as the ovum (egg/oocyte). In human SCNT experiments, these eggs are obtained from consenting donors, utilizing ovarian stimulation. The second is a somatic cell, referring to the cells of the human body. Skin cells, fat cells, and liver cells are only a few examples. The genetic material of the donor egg cell is removed and discarded, leaving it 'deprogrammed.' What is left is a somatic cell and an enucleated egg cell. These are then fused by inserting the somatic cell into the 'empty' ovum. After being inserted into the egg, the somatic cell nucleus is reprogrammed by its host egg cell. The ovum, now containing the somatic cell's nucleus, is stimulated with a shock and begins to divide. The egg is now viable and capable of producing an adult organism containing all the necessary genetic information from just one parent. Development ensues normally, and after many mitotic divisions the single cell forms a blastocyst (an early-stage embryo with about 100 cells) with a genome identical to that of the original organism (i.e. a clone). Stem cells can then be obtained by the destruction of this clone embryo for use in therapeutic cloning, or, in the case of reproductive cloning, the clone embryo is implanted into a host mother for further development and brought to term.
Conventional SCNT requires the use of micromanipulators, which are expensive machines used to accurately manipulate cells. Using the micromanipulator, a scientist makes an opening in the zona pellucida and sucks out the egg cell's original nucleus with a pipette. A second pipette is then used to inject the donor nucleus. Alternatively, electric energy can be applied to fuse the empty egg cell with a donor cell containing a nucleus.
An alternative technique called "handmade cloning" was described by Indian scientists in 2001. This technique requires no use of a micromanipulator and has been used for the cloning of several livestock species. Removal of the nucleus can be done chemically, by centrifuge, or with the use of a blade. The empty egg is glued to the donor cell with phytohaemagglutinin, then fused using electricity. (If a blade is used, two fusion steps would be required: the first fusion is between the donor and an empty half-egg, the second between the half-size "demi-embryo" and another empty half-egg.)
Applications
Stem cell research
Somatic cell nuclear transplantation has become a focus of study in stem cell research. The aim of carrying out this procedure is to obtain pluripotent cells from a cloned embryo. These cells are genetically matched to the donor organism from which they came. This gives them the ability to create patient-specific pluripotent cells, which could then be used in therapies or disease research.
Embryonic stem cells are undifferentiated cells of an embryo. These cells are deemed to have a pluripotent potential because they have the ability to give rise to all of the tissues found in an adult organism. This ability allows stem cells to create any cell type, which could then be transplanted to replace damaged or destroyed cells. Controversy surrounds human ESC work due to the destruction of viable human embryos, leading scientists to seek alternative methods of obtaining pluripotent stem cells, SCNT is one such method.
A potential use of stem cells genetically matched to a patient would be to create cell lines that have genes linked to a patient's particular disease. By doing so, an in vitro model could be created, which would be useful for studying that particular disease, potentially discovering its pathophysiology, and discovering therapies. For example, if a person with Parkinson's disease donated their somatic cells, the stem cells resulting from SCNT would have genes that contribute to Parkinson's disease. The disease-specific stem cell lines could then be studied in order to better understand the condition.
Another application of SCNT stem cell research is using the patient specific stem cell lines to generate tissues or even organs for transplant into the specific patient. The resulting cells would be genetically identical to the somatic cell donor, thus avoiding any complications from immune system rejection.
Only a handful of labs in the world are currently using SCNT techniques in human stem cell research. In the United States, scientists at the Harvard Stem Cell Institute, the University of California San Francisco, the Oregon Health & Science University, Stemagen (La Jolla, CA) and possibly Advanced Cell Technology are currently researching a technique to use somatic cell nuclear transfer to produce embryonic stem cells. In the United Kingdom, the Human Fertilisation and Embryology Authority has granted permission to research groups at the Roslin Institute and the Newcastle Centre for Life. SCNT may also be occurring in China.
Though there have been numerous successes with cloning animals, questions remain concerning the mechanisms of reprogramming in the ovum. Despite many attempts, success in creating human nuclear transfer embryonic stem cells has been limited. A problem lies in the human cell's ability to form a blastocyst; the cells fail to progress past the eight-cell stage of development. This is thought to be a result of the somatic cell nucleus being unable to turn on embryonic genes crucial for proper development. These earlier experiments used procedures developed in non-primate animals with little success.
A research group from the Oregon Health & Science University demonstrated SCNT procedures developed for primates, successfully using skin cells. The key to their success was utilizing oocytes in metaphase II (MII) of the cell cycle. Egg cells in MII contain factors in the cytoplasm with a special ability to reprogram implanted somatic cell nuclei into a pluripotent state. When the ovum's nucleus is removed, the cell loses its genetic information. This has been blamed for why enucleated eggs are hampered in their reprogramming ability. It is theorized that the critical embryonic genes are physically linked to oocyte chromosomes, so enucleation negatively affects these factors. Another possibility is that removing the egg nucleus or inserting the somatic nucleus causes damage to the cytoplast, affecting reprogramming ability.
Taking this into account, the research group applied their new technique in an attempt to produce human SCNT stem cells. In May 2013, the Oregon group reported the successful derivation of human embryonic stem cell lines through SCNT, using fetal and infant donor cells. Using MII oocytes from volunteers and their improved SCNT procedure, human clone embryos were successfully produced. These embryos were of poor quality, lacking a substantial inner cell mass and having a poorly constructed trophectoderm. The imperfect embryos prevented the acquisition of human ESCs. The addition of caffeine during the removal of the ovum's nucleus and the fusion of the somatic cell and the egg improved blastocyst formation and ESC isolation. The ESCs obtained were found to be capable of producing teratomas, expressed pluripotent transcription factors, and displayed a normal 46,XX karyotype, indicating these SCNT cells were in fact ESC-like. This was the first instance of successfully using SCNT to reprogram human somatic cells. The study used fetal and infantile somatic cells to produce its ESCs.
In April 2014, an international research team expanded on this breakthrough. There remained the question of whether the same success could be accomplished using adult somatic cells, as epigenetic and age-related changes were thought to possibly hinder an adult somatic cell's ability to be reprogrammed. Implementing the procedure pioneered by the Oregon research group, they were indeed able to grow stem cells generated by SCNT using adult cells from two donors aged 35 and 75, indicating that age does not impede a cell's ability to be reprogrammed.
In late April 2014, the New York Stem Cell Foundation succeeded in creating SCNT stem cells derived from adult somatic cells. One of these stem cell lines was derived from the donor cells of a type 1 diabetic. The group was then able to culture these stem cells and induce differentiation: when injected into mice, cells of all three germ layers successfully formed. The most significant of these were cells that expressed insulin and were capable of secreting the hormone. Such insulin-producing cells could be used for replacement therapy in diabetics, demonstrating real SCNT stem cell therapeutic potential.
The impetus for SCNT-based stem cell research has been decreased by the development and improvement of alternative methods of generating stem cells. Methods to reprogram normal body cells into pluripotent stem cells were developed in humans in 2007. The following year, this method achieved a key goal of SCNT-based stem cell research: the derivation of pluripotent stem cell lines that have all genes linked to various diseases. Some scientists working on SCNT-based stem cell research have since moved to the new methods of induced pluripotent stem cells (iPS cells). However, recent studies have called into question how similar iPS cells are to embryonic stem cells. Epigenetic memory in iPS cells affects the cell lineages into which they can differentiate: for instance, an iPS cell derived from a blood cell using only the Yamanaka factors will be more efficient at differentiating into blood cells and less efficient at creating a neuron. Recent studies indicate, however, that changes to the epigenetic memory of iPSCs using small molecules can reset them to an almost naive state of pluripotency, and studies have even shown that via tetraploid complementation an entire viable organism can be created solely from iPSCs. SCNT stem cells have been found to face similar challenges: the cause of low yields in bovine SCNT cloning has, in recent years, been attributed to the previously hidden epigenetic memory of the somatic cells being introduced into the oocyte.
Reproductive cloning
This technique is currently the basis for cloning animals (such as the famous Dolly the sheep), and has been proposed as a possible way to clone humans. Using SCNT in reproductive cloning has proven difficult, with limited success. High fetal and neonatal death rates make the process very inefficient, and resulting cloned offspring are plagued with developmental and imprinting disorders in non-human species. For these reasons, along with moral and ethical objections, reproductive cloning in humans is proscribed in more than 30 countries. Most researchers believe that in the foreseeable future it will not be possible to use the current cloning technique to produce a human clone that will develop to term. It remains a possibility, though critical adjustments will be required to overcome current limitations during early embryonic development in human SCNT.
There is also the potential for treating diseases associated with mutations in mitochondrial DNA. Recent studies show that SCNT of the nucleus of a body cell afflicted with one of these diseases into a healthy oocyte prevents the inheritance of the mitochondrial disease. This treatment does not involve cloning but would produce a child with three genetic parents: a father providing a sperm cell, one mother providing the egg nucleus, and another mother providing the enucleated egg cell.
In 2018, the first successful cloning of primates using somatic cell nuclear transfer, the same method used for Dolly the sheep, was reported with the birth of two live female clones, crab-eating macaques named Zhong Zhong and Hua Hua.
Interspecies nuclear transfer
Interspecies nuclear transfer (iSCNT) is a means of somatic cell nuclear transfer used to facilitate the rescue of endangered species, or even to restore species after their extinction. The technique is similar to SCNT cloning, which typically occurs between domestic animals and rodents, or where there is a ready supply of oocytes and surrogate animals. Cloning highly endangered or extinct species, however, requires an alternative method: interspecies nuclear transfer utilizes a host and a donor from two different organisms that are closely related species within the same genus. In 2000, Robert Lanza was able to produce a cloned fetus of a gaur, Bos gaurus, by combining it successfully with a domestic cow, Bos taurus.
In 2017, the first cloned Bactrian camel was born through iSCNT, using oocytes of a dromedary camel and skin fibroblast cells of an adult Bactrian camel as the donor nuclei.
Limitations
Somatic cell nuclear transfer (SCNT) can be inefficient due to stresses placed on both the egg cell and the introduced nucleus, resulting in a low percentage of successfully reprogrammed cells. For example, Dolly the sheep was born in 1996 after 277 eggs were used for SCNT, which created 29 viable embryos; only three of these embryos survived until birth, and only one survived to adulthood, an overall efficiency of roughly 0.3%. Millie, the offspring that survived, took 95 attempts to produce. Because the procedure was not automated and had to be performed manually under a microscope, SCNT was very resource-intensive. Another reason for the high mortality rate among cloned offspring is that cloned fetuses tend to grow abnormally large, resulting in death soon after birth. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from understood. Another limitation arises when one-cell embryos are used during SCNT: with just one-cell cloned embryos, the experiment has about a 65% chance of failing to form a morula or blastocyst. The biochemistry also has to be extremely precise, as most late-term cloned fetus deaths are the result of inadequate placentation. However, by 2014, researchers were reporting success rates of 70–80% with cloning pigs, and in 2016 a Korean company, Sooam Biotech, was reported to be producing 500 cloned embryos a day.
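As a rough illustration (not from the source), the efficiency figures from the Dolly example can be reproduced with a short calculation; the 0.3% figure is interpreted here as one surviving adult per 277 eggs:

```python
# Illustrative check of the SCNT efficiency figures quoted for Dolly (1996).
# Assumed interpretation: "efficiency" = surviving adult clones per egg used.
eggs_used = 277
viable_embryos = 29
survived_to_adulthood = 1

embryo_rate = viable_embryos / eggs_used        # ~10.5% of eggs became viable embryos
adult_rate = survived_to_adulthood / eggs_used  # ~0.36%, i.e. roughly the quoted 0.3%

print(f"Viable embryos per egg used: {embryo_rate:.1%}")
print(f"Adults per egg used: {adult_rate:.2%}")
```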
In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria, which contain their own mitochondrial DNA, are left behind. The resulting hybrid cells retain the mitochondrial structures that originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. This fact may also hamper the potential benefits of SCNT-derived tissues and organs for therapy, as there may be an immune response to the non-self mtDNA after transplant. Additionally, the genes in the mitochondrial genome need to communicate with the nuclear genome, and a failure of somatic cell nuclear reprogramming can disrupt this communication, causing SCNT to fail.
Epigenetic factors play an important role in the success or failure of SCNT attempts. The varying gene expression of a previously activated cell and its mRNAs may lead to overexpression, underexpression, or in some cases nonfunctional genes, which will affect the developing fetus. One example of an epigenetic limitation on SCNT is the regulation of histone methylation: differing regulation of histone methylation genes can directly affect transcription of the developing genome, causing failure of the SCNT. Another contributing factor to SCNT failure is X chromosome inactivation in the early development of the embryo. A non-coding gene called XIST is responsible for inactivating one X chromosome during development; in SCNT, however, this gene can be abnormally regulated, causing mortality in the developing fetus.
Controversy
Nuclear transfer techniques present a different set of ethical considerations than those associated with the use of other stem cells, such as embryonic stem cells, which are controversial because obtaining them requires destroying an embryo. These different considerations have led some individuals and organizations that are not opposed to human embryonic stem cell research to be concerned about, or opposed to, SCNT research.
One concern is that blastula creation in SCNT-based human stem cell research will lead to the reproductive cloning of humans. Both processes use the same first step: the creation of a nuclear transferred embryo, most likely via SCNT. Those who hold this concern often advocate for strong regulation of SCNT to preclude implantation of any derived products with the intent of human reproduction, or for its outright prohibition.
A second important concern is the appropriate source of the eggs that are needed. SCNT requires human egg cells, which can only be obtained from women. The most common source of these eggs today is eggs produced in excess of clinical need during IVF treatment. Egg retrieval is a minimally invasive procedure, but it does carry some health risks, such as ovarian hyperstimulation syndrome.
One vision for successful stem cell therapies is to create custom stem cell lines for patients. Each custom stem cell line would consist of a collection of identical stem cells each carrying the patient's own DNA, thus reducing or eliminating any problems with rejection when the stem cells were transplanted for treatment. For example, to treat a man with Parkinson's disease, a cell nucleus from one of his cells would be transplanted by SCNT into an egg cell from an egg donor, creating a unique lineage of stem cells almost identical to the patient's own cells. (There would be differences. For example, the mitochondrial DNA would be the same as that of the egg donor. In comparison, his own cells would carry the mitochondrial DNA of his mother.)
Potentially millions of patients could benefit from stem cell therapy, and each patient would require a large number of donated eggs in order to successfully create a single custom therapeutic stem cell line. Such large numbers of donated eggs would exceed the number of eggs currently left over and available from couples trying to have children through assisted reproductive technology. Therefore, healthy young women would need to be induced to sell eggs to be used in the creation of custom stem cell lines that could then be purchased by the medical industry and sold to patients. It is so far unclear where all these eggs would come from.
Stem cell experts consider it unlikely that such large numbers of human egg donations would occur in a developed country because of the unknown long-term public health effects of treating large numbers of healthy young women with heavy doses of hormones in order to induce hyper-ovulation (ovulating several eggs at once). Although such treatments have been performed for several decades now, the long-term effects have not been studied or declared safe to use on a large scale on otherwise healthy women. Longer-term treatments with much lower doses of hormones are known to increase the rate of cancer decades later. Whether hormone treatments to induce hyper-ovulation could have similar effects is unknown. There are also ethical questions surrounding paying for eggs. In general, marketing body parts is considered unethical and is banned in most countries. Human eggs have been a notable exception to this rule for some time.
To address the problem of creating a human egg market, some stem cell researchers are investigating the possibility of creating artificial eggs. If successful, human egg donations would not be needed to create custom stem cell lines. However, this technology may be a long way off.
Policies regarding human SCNT
SCNT involving human cells is currently legal for research purposes in the United Kingdom, having been incorporated into the Human Fertilisation and Embryology Act 1990. Permission must be obtained from the Human Fertilisation and Embryology Authority in order to perform or attempt SCNT.
In the United States, the practice remains legal, as it has not been addressed by federal law. However, a 2002 moratorium on United States federal funding for SCNT prohibits funding the practice for the purposes of research; thus, though legal, SCNT cannot be federally funded. American scholars have recently argued that because the product of SCNT is a clone embryo, rather than a human embryo, these policies are morally wrong and should be revised.
In 2003, the United Nations adopted a proposal submitted by Costa Rica, calling on member states to "prohibit all forms of human cloning in as much as they are incompatible with human dignity and the protection of human life." This phrase may include SCNT, depending on interpretation.
The Council of Europe's Convention on Human Rights and Biomedicine and its Additional Protocol to the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine, on the Prohibition of Cloning Human Beings, appear to ban SCNT of human beings. Of the Council's 45 member states, the Convention has been signed by 31 and ratified by 18. The Additional Protocol has been signed by 29 member nations and ratified by 14.
| Technology | Biotechnology | null |
168986 | https://en.wikipedia.org/wiki/Glycogen | Glycogen | Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in animals, fungi, and bacteria. It is the main storage form of glucose in the human body.
Glycogen functions as one of three regularly used forms of energy reserves, creatine phosphate being for very short-term, glycogen being for short-term and the triglyceride stores in adipose tissue (i.e., body fat) being for long-term storage. Protein, broken down into amino acids, is seldom used as a main energy source except during starvation and glycolytic crisis (see bioenergetic systems).
In humans, glycogen is made and stored primarily in the cells of the liver and skeletal muscle. In the liver, glycogen can make up 5–6% of the organ's fresh weight: the liver of an adult, weighing 1.5 kg, can store roughly 100–120 grams of glycogen. In skeletal muscle, glycogen is found in a low concentration (1–2% of the muscle mass): the skeletal muscle of an adult weighing 70 kg stores roughly 400 grams of glycogen. Small amounts of glycogen are also found in other tissues and cells, including the kidneys, red blood cells, white blood cells, and glial cells in the brain. The uterus also stores glycogen during pregnancy to nourish the embryo.
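As a back-of-the-envelope illustration (not from the source), these storage figures can be approximated from the quoted concentrations. The 40% muscle fraction of body mass is an assumed typical value; note that a strict 5–6% of a 1.5 kg liver gives 75–90 g, slightly below the quoted 100–120 g:

```python
# Rough reconstruction of the glycogen storage figures quoted above.
# Assumption (not from the source): skeletal muscle is ~40% of adult body mass.
liver_mass_g = 1500
liver_glycogen_fraction = (0.05, 0.06)     # 5-6% of fresh liver weight

body_mass_g = 70_000
muscle_fraction_of_body = 0.40             # assumed
muscle_glycogen_fraction = (0.01, 0.02)    # 1-2% of muscle mass

liver_store_g = [liver_mass_g * f for f in liver_glycogen_fraction]
muscle_store_g = [body_mass_g * muscle_fraction_of_body * f
                  for f in muscle_glycogen_fraction]

print(f"Liver glycogen: {liver_store_g[0]:.0f}-{liver_store_g[1]:.0f} g")     # 75-90 g
print(f"Muscle glycogen: {muscle_store_g[0]:.0f}-{muscle_store_g[1]:.0f} g")  # 280-560 g, consistent with ~400 g
```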
The amount of glycogen stored in the body mostly depends on the proportion of oxidative type 1 fibres, physical training, basal metabolic rate, and eating habits. Different levels of resting muscle glycogen are reached by changing the number of glycogen particles rather than by increasing the size of existing particles, though most glycogen particles at rest are smaller than their theoretical maximum.
Approximately 4 grams of glucose are present in the blood of humans at all times; in fasting individuals, blood glucose is maintained constant at this level at the expense of glycogen stores, primarily from the liver (glycogen in skeletal muscle is mainly used as an immediate source of energy for that muscle rather than being used to maintain physiological blood glucose levels). Glycogen stores in skeletal muscle serve as a form of energy storage for the muscle itself; however, the breakdown of muscle glycogen impedes muscle glucose uptake from the blood, thereby increasing the amount of blood glucose available for use in other tissues. Liver glycogen stores serve as a store of glucose for use throughout the body, particularly the central nervous system. The human brain consumes approximately 60% of blood glucose in fasted, sedentary individuals.
Glycogen is an analogue of starch, a glucose polymer that functions as energy storage in plants. It has a structure similar to amylopectin (a component of starch), but is more extensively branched and compact than starch. Both are white powders in their dry state. Glycogen is found in the form of granules in the cytosol/cytoplasm in many cell types, and plays an important role in the glucose cycle. Glycogen forms an energy reserve that can be quickly mobilized to meet a sudden need for glucose, but one that is less compact than the energy reserves of triglycerides (lipids). As such, it is also found as a storage reserve in many parasitic protozoa.
Structure
Glycogen is a branched biopolymer consisting of linear chains of glucose residues with an average chain length of approximately 8–12 glucose units and 2,000–60,000 residues per molecule of glycogen.
Like amylopectin, glucose units are linked together linearly by α(1→4) glycosidic bonds from one glucose to the next. Branches are linked to the chains from which they are branching off by α(1→6) glycosidic bonds between the first glucose of the new branch and a glucose on the stem chain.
Each glycogen is essentially a ball of glucose trees, with around 12 layers, centered on a glycogenin protein, with three kinds of glucose chains: A, B, and C. There is only one C-chain, attached to the glycogenin; this C-chain is formed by the self-glucosylation of the glycogenin, forming a short primer chain. From the C-chain, B-chains grow out, and from B-chains branch out further B- and A-chains. The B-chains have on average 2 branch points, while the A-chains are terminal, and thus unbranched. On average, each chain is 12 residues long, tightly constrained to between 11 and 15. All A-chains reach the spherical surface of the glycogen.
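A minimal sketch (an illustration, not from the source) of how these rules bound particle size: if every inner chain bears two branch points, the number of chains roughly doubles with each layer, and with about 12 residues per chain a 12-layer particle falls within the 2,000–60,000-residue range quoted above:

```python
# Idealized glycogen particle per the structural rules above:
# one C-chain, each inner (B) chain carrying 2 branch points,
# ~12 residues per chain, ~12 layers of chains.
residues_per_chain = 12
branches_per_chain = 2
layers = 12

# 1 chain in layer 0, doubling each layer: 1 + 2 + 4 + ... = 2**layers - 1
chains = sum(branches_per_chain ** layer for layer in range(layers))
total_residues = chains * residues_per_chain

print(f"Chains: {chains}")            # 4095
print(f"Residues: {total_residues}")  # 49140, within the quoted 2,000-60,000 range
```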
Glycogen in muscle, liver, and fat cells is stored in a hydrated form, composed of three or four parts of water per part of glycogen and associated with 0.45 millimoles (18 mg) of potassium per gram of glycogen.
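As a quick consistency check (an illustration, not from the source), 0.45 mmol of potassium does correspond to roughly 18 mg, given potassium's molar mass of about 39.1 g/mol:

```python
# Converting the quoted potassium content of hydrated glycogen to milligrams.
mmol_potassium_per_g_glycogen = 0.45
molar_mass_potassium = 39.1  # g/mol

# mmol * (g/mol) gives mg directly
mg_potassium = mmol_potassium_per_g_glycogen * molar_mass_potassium
print(f"{mg_potassium:.1f} mg potassium per gram of glycogen")  # ~17.6 mg, i.e. ~18 mg
```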
Glucose is an osmotically active molecule and, in high concentrations, can have profound effects on osmotic pressure, possibly leading to cell damage or death if stored in the cell without being modified. Glycogen is osmotically inactive, so it serves as a way to store glucose in the cell without disrupting osmotic pressure.
Functions
Liver
As a meal containing carbohydrates or protein is eaten and digested, blood glucose levels rise, and the pancreas secretes insulin. Blood glucose from the portal vein enters liver cells (hepatocytes). Insulin acts on the hepatocytes to stimulate the action of several enzymes, including glycogen synthase. Glucose molecules are added to the chains of glycogen as long as both insulin and glucose remain plentiful. In this postprandial or "fed" state, the liver takes in more glucose from the blood than it releases.
After a meal has been digested and glucose levels begin to fall, insulin secretion is reduced, and glycogen synthesis stops. When it is needed for energy, glycogen is broken down and converted again to glucose. Glycogen phosphorylase is the primary enzyme of glycogen breakdown. For the next 8–12 hours, glucose derived from liver glycogen is the primary source of blood glucose used by the rest of the body for fuel.
Glucagon, another hormone produced by the pancreas, in many respects serves as a countersignal to insulin. In response to insulin levels being below normal (when blood levels of glucose begin to fall below the normal range), glucagon is secreted in increasing amounts and stimulates both glycogenolysis (the breakdown of glycogen) and gluconeogenesis (the production of glucose from other sources).
Muscle
Muscle glycogen appears to function as a reserve of quickly available phosphorylated glucose, in the form of glucose-1-phosphate, for muscle cells. Glycogen contained within skeletal muscle cells is primarily in the form of β particles. Other cells that contain small amounts use it locally as well. As muscle cells lack glucose-6-phosphatase, which is required to release glucose into the blood, the glycogen they store is available solely for internal use and is not shared with other cells. This is in contrast to liver cells, which, on demand, readily break down their stored glycogen into glucose and send it through the bloodstream as fuel for other organs.
Skeletal muscle needs ATP (which provides energy) for muscle contraction and relaxation, as described by the sliding filament theory. Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. During anaerobic activity, such as weightlifting and isometric exercise, the phosphagen system (ATP-PCr) and muscle glycogen are the only substrates used, as they require neither oxygen nor blood flow.
Different bioenergetic systems produce ATP at different speeds, with ATP produced from muscle glycogen much faster than from fatty acid oxidation. The level of exercise intensity also determines which substrate (fuel) is used for ATP synthesis. Muscle glycogen can supply a much higher rate of substrate for ATP synthesis than blood glucose: during maximum-intensity exercise, muscle glycogen can supply 40 mmol glucose/kg wet weight/minute, whereas blood glucose can supply 4–5 mmol. Due to this high supply rate and quick ATP synthesis, during high-intensity aerobic activity (such as brisk walking, jogging, or running), the higher the exercise intensity, the more the muscle cell produces ATP from muscle glycogen. This reliance on muscle glycogen is not only to provide the muscle with enough ATP during high-intensity exercise, but also to maintain blood glucose homeostasis (that is, to avoid hypoglycaemia caused by the muscles extracting far more glucose from the blood than the liver can provide). A deficit of muscle glycogen leads to the muscle fatigue known as "hitting the wall" or "the bonk" (see below under glycogen depletion).
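Taken at face value (an illustration, not from the source), the quoted supply rates imply that muscle glycogen can deliver glucose for ATP synthesis roughly eight to ten times faster than blood glucose:

```python
# Ratio of glucose supply rates during maximum-intensity exercise,
# using the figures quoted above.
muscle_glycogen_rate = 40.0        # mmol glucose / kg wet weight / minute
blood_glucose_rates = (4.0, 5.0)   # mmol glucose / kg wet weight / minute

low, high = (muscle_glycogen_rate / r for r in reversed(blood_glucose_rates))
print(f"Muscle glycogen supplies glucose {low:.0f}-{high:.0f}x faster than blood glucose")
```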
Structure type
In 1999, Meléndez et al. claimed that the structure of glycogen is optimal under a particular metabolic constraint model, suggesting that the structure is "fractal" in nature. However, research by Besford et al. used small-angle X-ray scattering experiments accompanied by branching theory models to show that glycogen is a randomly hyperbranched polymer nanoparticle, and is not fractal in nature. This has been subsequently verified by others, who have performed Monte Carlo simulations of glycogen particle growth and shown that the molecular density reaches a maximum near the centre of the nanoparticle structure, not at the periphery (contradicting a fractal structure, which would have greater density at the periphery).
History
Glycogen was discovered by Claude Bernard. His experiments showed that the liver contained a substance that could give rise to reducing sugar by the action of a "ferment" in the liver. By 1857, he described the isolation of a substance he called "la matière glycogène", or "sugar-forming substance". Soon after the discovery of glycogen in the liver, M.A. Sanson found that muscular tissue also contains glycogen. The empirical formula for glycogen, (C6H10O5)n, was established by August Kekulé in 1858.
Sanson, M. A. "Note sur la formation physiologique du sucre dans l’economie animale." Comptes rendus des seances de l’Academie des Sciences 44 (1857): 1323-5.
Metabolism
Synthesis
Glycogen synthesis is, unlike its breakdown, endergonic—it requires the input of energy. Energy for glycogen synthesis comes from uridine triphosphate (UTP), which reacts with glucose-1-phosphate, forming UDP-glucose, in a reaction catalysed by UTP—glucose-1-phosphate uridylyltransferase. Glycogen is synthesized from monomers of UDP-glucose initially by the protein glycogenin, which has two tyrosine anchors for the reducing end of glycogen, since glycogenin is a homodimer. After about eight glucose molecules have been added to a tyrosine residue, the enzyme glycogen synthase progressively lengthens the glycogen chain using UDP-glucose, adding α(1→4)-bonded glucose to the nonreducing end of the glycogen chain.
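The activation step described above can be written out explicitly (a standard biochemistry summary, not quoted verbatim from the source); the subsequent hydrolysis of pyrophosphate pulls the reaction forward:

glucose-1-phosphate + UTP → UDP-glucose + PPi (pyrophosphate)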
The glycogen branching enzyme catalyzes the transfer of a terminal fragment of six or seven glucose residues from a nonreducing end to the C-6 hydroxyl group of a glucose residue deeper into the interior of the glycogen molecule. The branching enzyme can act upon only a branch having at least 11 residues, and the enzyme may transfer to the same glucose chain or adjacent glucose chains.
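A minimal sketch (an illustration of the rules above, not the actual enzymology) of elongation and branching: glycogen synthase extends a chain one α(1→4)-linked glucose at a time, and the branching enzyme relocates a block of terminal residues to a new α(1→6) branch once the donor chain is long enough. The seven-residue fragment size, the choice of which chain to elongate, and branching as soon as allowed are simplifying assumptions:

```python
# Toy model of glycogen chain growth following the rules in the text:
# - glycogen synthase adds one glucose to a chain's nonreducing end
# - the branching enzyme transfers a terminal fragment (here 7 residues,
#   within the quoted 6-7) to a new alpha(1->6) branch, acting only on
#   chains with at least 11 residues
BRANCH_FRAGMENT = 7
MIN_DONOR_LENGTH = 11

def grow(chains, additions):
    """chains: list of chain lengths; additions: glucose units to add in total."""
    for _ in range(additions):
        chains[-1] += 1  # simplification: always elongate the newest chain
        if chains[-1] >= MIN_DONOR_LENGTH:
            # branching enzyme moves a terminal fragment, founding a new branch
            chains[-1] -= BRANCH_FRAGMENT
            chains.append(BRANCH_FRAGMENT)
    return chains

# Starting from glycogenin's ~8-residue primer chain:
print(grow([8], 40))  # yields many short chains, one new branch per ~4 additions
```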
Breakdown
Glycogen is cleaved from the nonreducing ends of the chain by the enzyme glycogen phosphorylase to produce monomers of glucose-1-phosphate:
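glycogen(n residues) + Pi ⇌ glycogen(n-1 residues) + glucose-1-phosphate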
In vivo, phosphorolysis proceeds in the direction of glycogen breakdown because the ratio of phosphate to glucose-1-phosphate is usually greater than 100. Glucose-1-phosphate is then converted to glucose-6-phosphate (G6P) by phosphoglucomutase. A special debranching enzyme is needed to remove the α(1→6) branches in branched glycogen and reshape the chain into a linear polymer. The G6P monomers produced have three possible fates:
G6P can continue on the glycolysis pathway and be used as fuel.
G6P can enter the pentose phosphate pathway via the enzyme glucose-6-phosphate dehydrogenase to produce NADPH and 5 carbon sugars.
In the liver and kidney, G6P can be dephosphorylated back to glucose by the enzyme glucose 6-phosphatase. This is the final step in the gluconeogenesis pathway.
Clinical relevance
Disorders of glycogen metabolism
The most common disease in which glycogen metabolism becomes abnormal is diabetes, in which, because of abnormal amounts of insulin, liver glycogen can be abnormally accumulated or depleted. Restoration of normal glucose metabolism usually normalizes glycogen metabolism, as well.
In hypoglycemia caused by excessive insulin, liver glycogen levels are high, but the high insulin levels prevent the glycogenolysis necessary to maintain normal blood sugar levels. Glucagon is a common treatment for this type of hypoglycemia.
Various inborn errors of carbohydrate metabolism are caused by deficiencies of enzymes or transport proteins necessary for glycogen synthesis or breakdown. These are collectively referred to as glycogen storage diseases.
Glycogen depletion and endurance exercise
Long-distance athletes, such as marathon runners, cross-country skiers, and cyclists, often experience glycogen depletion, where almost all of the athlete's glycogen stores are depleted after long periods of exertion without sufficient carbohydrate consumption. This phenomenon is referred to as "hitting the wall" in running and "bonking" in cycling.
Glycogen depletion can be forestalled in three possible ways:
First, during exercise, carbohydrates with the highest possible rate of conversion to blood glucose (high glycemic index) are ingested continuously. The best possible outcome of this strategy replaces about 35% of glucose consumed at heart rates above about 80% of maximum.
Second, through endurance training adaptations and specialized regimens (e.g. fasting, low-intensity endurance training), the body can condition type I muscle fibers to improve both fuel use efficiency and workload capacity to increase the percentage of fatty acids used as fuel, sparing carbohydrate use from all sources.
Third, by consuming large quantities of carbohydrates after depleting glycogen stores as a result of exercise or diet, the body can increase the storage capacity of intramuscular glycogen stores. This process is known as carbohydrate loading. In general, the glycemic index of the carbohydrate source does not matter, since muscular insulin sensitivity is increased as a result of temporary glycogen depletion.
When athletes ingest both carbohydrate and caffeine following exhaustive exercise, their glycogen stores tend to be replenished more rapidly; however, the minimum dose of caffeine at which there is a clinically significant effect on glycogen repletion has not been established.
Nanomedicine
Glycogen nanoparticles have been investigated as potential drug delivery systems.
| Biology and health sciences | Biochemistry and molecular biology | null |
169060 | https://en.wikipedia.org/wiki/Dinofelis | Dinofelis | Dinofelis is an extinct genus of machairodontine (sabre-toothed cat), usually classified in the tribe Metailurini. It was widespread in Europe, Asia, Africa and North America from 5 million to about 1.2 million years ago (early Pliocene to early Pleistocene). Fossils very similar to Dinofelis from Lothagam range back to around 8 million years ago, in the Late Miocene.
Discovery and naming
The genus Dinofelis was originally named by Otto Zdansky in 1924 for the species Dinofelis abeli.
Further fossil species were named, including Felis diastemata and Megantereon barlowi; these were later transferred to the genus Therailurus, which was in turn considered a junior synonym of Dinofelis.
A comprehensive review of the genus was published in 2001 by paleontologists Lars Werdelin and Margaret E. Lewis, including mention of the then-unnamed Langebaanweg and Lothagam species, as well as naming a new species Dinofelis aronoki; the species epithet came from the phrase arono ki which, in the language of the people of eastern Turkana, means "it was terrible". The review also noted six different sets of remains that were referable to Dinofelis but were too fragmentary to assign to any one species.
Another unnamed (again due to fragmentary material) species was described in 2021 based on fossils from a Plio-Pleistocene site in Northern Africa.
In 2023 the Langebaanweg species was described as Dinofelis werdelini; the specific epithet honored Lars Werdelin. It assigned a holotype, paratype, and nine other specimens of fragmentary cranial material (some of which had previously been referred to other species) to the newly-named species.
Description
This genus varied in size, with a similar range of sizes to Panthera; alongside the larger forms, species with dimensions similar to a large lynx or a small puma also existed.
The canine teeth of Dinofelis are longer and more flattened than those of modern cats but less than those of other saber-tooths. While the lower canines are robust, the cheek teeth are not as robust as those of most modern big cats.
Dinofelis werdelini was a medium-sized machairodontine, about the size of a large jaguar, with robust upper canines and relatively small cheek teeth.
Classification
The phylogenetic status of Dinofelis within Machairodontinae has been difficult to ascertain historically, and various positions within Felidae have been proposed for the genus. It has commonly been recovered as belonging to the tribe Metailurini, although one recent analysis contested the monophyly of Metailurini, placing Dinofelis as a sister taxon to Rhizosmilodon.
A number of species are currently accepted in the genus:
Dinofelis aronoki: It lived during the Villafranchian and Biharian stages in Kenya and Ethiopia. Recently split from D. barlowi, it is the largest known species of Dinofelis.
Dinofelis barlowi: It lived from the late Pliocene to the early Pleistocene. Geographically, it was found in Europe, North America, and Asia, but mainly in Africa. It stood about 70 cm high and was probably the smallest species of Dinofelis.
Dinofelis cristata: Known from China, this species is especially convergent with the genus Panthera in its skull and particularly canine morphology, suggesting more pantherine-like hunting behaviour than other machairodonts. (Includes D. abeli.)
Dinofelis darti: It lived in South Africa during the Villafranchian stage.
Dinofelis diastemata: This species is known from the early Pliocene of Europe.
Dinofelis paleoonca: Its type locality is Meade's Quarry 11, which is in a Blancan terrestrial horizon in the Blanco Formation of Texas. It was recombined as Dinofelis palaeoonca by Kurten (1972), Hemmer (1973), Dalquest (1975), Kurten and Anderson (1980), Schultz (1990) and Werdelin and Lewis (2001).
Dinofelis petteri: Known from the Pliocene of East Africa.
Dinofelis piveteaui: The latest known species of Dinofelis, lived in South Africa during the early Pleistocene. This species has the most pronounced sabertoothed adaptations of the genus.
Dinofelis werdelini: Known from Langebaanweg in Africa, from the Pliocene.
Additional fossils from Lothagam (specifically the Nawata Formation and the Apak Member of the Nachukui Formation) are considered to represent another, unnamed species; one smaller and more primitive than other known species.
A major review of Dinofelis by Werdelin et al. in 2001 produced a cladogram of its species.
The 2023 paper that named D. werdelini found it to be sister to a clade formed by D. cristata, D. petteri, D. piveteaui, and D. barlowi, but did not test it against the rest of the genus.
A 2018 phylogenetic analysis also assessed the position of Dinofelis amongst the Metailurini.
Paleobiology
Morphometric analysis of Dinofelis specimens from Olduvai Gorge suggests that the felid was best suited for mixed habitats rather than open grasslands or closed woodlands.
Analysis of carbon isotope ratios in specimens from Swartkrans indicates that Dinofelis preferentially hunted grazing animals. The main predators of hominids in the environment at that time were most likely leopards and fellow machairodont Megantereon, whose carbon isotope ratios showed more indication of preying on hominids.
Several sites from South Africa seem to show Dinofelis may have hunted and killed Australopithecus africanus, since the finds mingle fossilized remains of Dinofelis, hominids, and other large contemporary animals. In South Africa, Dinofelis remains have been found near Paranthropus fossil skulls, a few with precisely spaced canine holes in their crania, so it is possible Dinofelis preyed on robust hominids as well. This may have been rare, however, as carbon isotope ratios contradict this.
It is thought that the gradual disappearance of its forest environment may have contributed to Dinofelis's extinction at the start of the ice age.
| Biology and health sciences | Other carnivora | Animals |
169071 | https://en.wikipedia.org/wiki/Smilodon | Smilodon | Smilodon is an extinct genus of felids. It is one of the best known saber-toothed predators and prehistoric mammals. Although commonly known as the saber-toothed tiger, it was not closely related to the tiger or other modern cats, belonging to the extinct subfamily Machairodontinae, with an estimated date of divergence from the ancestor of living cats around 20 million years ago. Smilodon was one of the last surviving machairodonts alongside Homotherium. Smilodon lived in the Americas during the Pleistocene epoch (2.5 mya – 10,000 years ago). The genus was named in 1842 based on fossils from Brazil; the generic name means "scalpel" or "two-edged knife" combined with "tooth". Three species are recognized today: S. gracilis, S. fatalis, and S. populator. The two latter species were probably descended from S. gracilis, which itself probably evolved from Megantereon. The hundreds of specimens obtained from the La Brea Tar Pits in Los Angeles constitute the largest collection of Smilodon fossils.
Overall, Smilodon was more robustly built than any extant cat, with particularly well-developed forelimbs and exceptionally long upper canine teeth. Its jaw had a bigger gape than that of modern cats, and its upper canines were slender and fragile, being adapted for precision killing. S. gracilis was the smallest species, and S. fatalis was intermediate in size; both of these species are mainly known from North America, but remains from South America have also been attributed to them (primarily from the northwest of the continent). S. populator from South America was the largest species and was among the largest known felids. The coat pattern of Smilodon is unknown, but it has been artistically restored with plain or spotted patterns.
In North America, Smilodon hunted large herbivores such as bison and camels, and it remained successful even when encountering new prey taxa in South America such as Macrauchenia and ground sloths. Smilodon is thought to have killed its prey by holding it still with its forelimbs and biting it, but it is unclear in what manner the bite itself was delivered. Scientists debate whether Smilodon had a social or a solitary lifestyle; analysis of modern predator behavior, as well as of Smilodon's fossil remains, could be construed to lend support to either view. Smilodon probably lived in closed habitats such as forests and bush, which would have provided cover for ambushing prey. Smilodon died out as part of the end-Pleistocene extinction event around 13,000–10,000 years ago, along with most other large animals across the Americas. Its reliance on large animals has been proposed as a cause of its extinction; Smilodon may also have been impacted by habitat turnover, the loss of prey it specialized on due to possible climatic impacts, the effects of recently arrived humans on prey populations, and other factors.
Taxonomy
During the 1830s, Danish naturalist Peter Wilhelm Lund and his assistants collected fossils in the calcareous caves near the small town of Lagoa Santa, Minas Gerais, Brazil. Among the thousands of fossils found, he recognized a few isolated cheek teeth as belonging to a hyena, which he named Hyaena neogaea in 1839. After more material was found (including incisor teeth and foot bones), Lund concluded the fossils instead belonged to a distinct genus of felids, though transitional to the hyenas. He stated it would have matched the largest modern predators in size, and was more robust than any modern cat. Lund originally wanted to call the new genus Hyaenodon, but realizing this name had recently been applied to another prehistoric predator, he instead named it Smilodon populator in 1842. He explained the Ancient Greek meaning of Smilodon as deriving from σμίλη (smilē), "scalpel" or "two-edged knife", and οδόντος (odóntos), "tooth". This has also been translated as "tooth shaped like double-edged knife". He explained the species name populator as "the destroyer", which has also been translated as "he who brings devastation". Lund based the name on the shape of the incisors, and the large canine teeth were not known until 1846. By 1846, Lund had acquired nearly every part of the skeleton (from different individuals), and more specimens were found in neighboring countries by other collectors in the following years. Though some later authors used Lund's original species name neogaea instead of populator, it is now considered an invalid nomen nudum, as it was not accompanied with a proper description and no type specimens were designated. Some South American specimens have been referred to other genera, subgenera, species, and subspecies, such as Smilodontidion riggii, Smilodon (Prosmilodon) ensenadensis, and S. bonaeriensis, but these are now thought to be junior synonyms of S. populator.
Fossils of Smilodon were discovered in North America from the second half of the 19th century onwards. In 1869, American paleontologist Joseph Leidy described a maxilla fragment with a molar, which had been discovered in a petroleum bed in Hardin County, Texas. He referred the specimen to the genus Felis (which was then used for most cats, extant as well as extinct) but found it distinct enough to be part of its own subgenus, as F. (Trucifelis) fatalis. The species name means "deadly". In an 1880 article about extinct American cats, American paleontologist Edward Drinker Cope pointed out that the F. fatalis molar was identical to that of Smilodon, and he proposed the new combination S. fatalis. Most North American finds were scanty until excavations began in the La Brea Tar Pits in Los Angeles, where hundreds of individuals of S. fatalis have been found since 1875. S. fatalis has junior synonyms such as S. mercerii, S. floridanus, and S. californicus. American paleontologist Annalisa Berta considered the holotype of S. fatalis too incomplete to be an adequate type specimen, and the species has at times been proposed to be a junior synonym of S. populator. Nordic paleontologists Björn Kurtén and Lars Werdelin supported the distinctness of the two species in an article published in 1990. A 2018 article by the American paleontologist John P. Babiarz and colleagues concluded that S. californicus, represented by the specimens from the La Brea Tar Pits, was a distinct species from S. fatalis after all and that more research is needed to clarify the taxonomy of the lineage.
In his 1880 article about extinct cats, Cope also named a third species of Smilodon, S. gracilis. The species was based on a partial canine, which had been obtained in the Port Kennedy Cave near the Schuylkill River in Pennsylvania. Cope found the canine to be distinct from that of the other Smilodon species due to its smaller size and more compressed base. Its specific name refers to the species' lighter build. This species is known from fewer and less complete remains than the other members of the genus. S. gracilis has at times been considered part of genera such as Megantereon and Ischyrosmilus. S. populator, S. fatalis and S. gracilis are currently considered the only valid species of Smilodon, and features used to define most of their junior synonyms have been dismissed as variation between individuals of the same species (intraspecific variation). One of the most famous of prehistoric mammals, Smilodon has often been featured in popular media and is the state fossil of California.
Evolution
Long the most completely known saber-toothed cat, Smilodon is still one of the best-known members of the group, to the point where the two concepts have been confused. The term "saber-tooth" itself refers to an ecomorph consisting of various groups of extinct predatory synapsids (mammals and close relatives), which convergently evolved extremely long maxillary canines, as well as adaptations to the skull and skeleton related to their use. This includes members of Gorgonopsia, Thylacosmilidae, Machaeroidinae, Nimravidae, Barbourofelidae, and Machairodontinae. Within the family Felidae (true cats), members of the subfamily Machairodontinae are referred to as saber-toothed cats, and this group is itself divided into three tribes: Metailurini (false saber-tooths); Homotherini (scimitar-toothed cats); and Smilodontini (dirk-toothed cats), to which Smilodon belongs.
Members of Smilodontini are defined by their long slender canines with fine to no serrations, whereas Homotherini are typified by shorter, broad, and more flattened canines, with coarser serrations. Members of Metailurini were less specialized and had shorter, less flattened canines, and are not recognized as members of Machairodontinae by some researchers.
Despite the colloquial name "saber-toothed tiger", Smilodon is not closely related to the modern tiger (which belongs in the subfamily Pantherinae), or any other extant felid. A 1992 ancient DNA analysis suggested that Smilodon should be grouped with modern cats (subfamilies Felinae and Pantherinae). A 2005 study found that Smilodon belonged to a separate lineage. A study published in 2006 confirmed this, showing that the Machairodontinae diverged early from the ancestors of living cats and were not closely related to any living species. The ancestors of living cats and Machairodontinae are estimated to have diverged around 20 million years ago. A cladogram based on fossils and DNA analysis, after Rincón and colleagues (2011), illustrated the placement of Smilodon among extinct and extant felids.
The earliest felids are known from the Oligocene of Europe, such as Proailurus, and the earliest one with saber-tooth features is the Miocene genus Pseudaelurus. The skull and mandible morphology of the earliest saber-toothed cats was similar to that of the modern clouded leopards (Neofelis). The lineage further adapted to the precision killing of large animals by developing elongated canine teeth and wider gapes, in the process sacrificing high bite force. As their canines became longer, the bodies of the cats became more robust for immobilizing prey. In derived smilodontins and homotherins, the lumbar region of the spine and the tail became shortened, as did the hind limbs. Machairodonts once represented a dominant group of felids distributed across Africa, Eurasia, and North America during the Miocene and Pliocene epochs, but progressively declined over the course of the Pleistocene; by the Late Pleistocene, only two genera of machairodonts remained, Smilodon and the distantly related Homotherium, both largely confined to the Americas. Based on mitochondrial DNA sequences extracted from ancient bones, the lineages of Homotherium and Smilodon are estimated to have diverged about 18 million years ago.
The earliest species of Smilodon is S. gracilis, which existed from 2.5 million to 500,000 years ago (early Blancan to Irvingtonian ages) and was the successor in North America of Megantereon, from which it probably evolved. Megantereon itself had entered North America from Eurasia during the Pliocene, along with Homotherium. S. gracilis reached the northern regions of South America in the Early Pleistocene as part of the Great American Interchange. S. fatalis existed 1.6 million–10,000 years ago (late Irvingtonian to Rancholabrean ages), and replaced S. gracilis in North America. S. populator existed 1 million–10,000 years ago (Ensenadan to Lujanian ages); it occurred in the eastern parts of South America.
Description
Skeleton
Smilodon was around the size of modern big cats, but was more robustly built. It had a reduced lumbar region, high scapula, short tail, and broad limbs with relatively short feet. Smilodon is most famous for its relatively long canine teeth, which are the longest found in the saber-toothed cats and reached their greatest length in the largest species, S. populator. The canines were slender and had fine serrations on the front and back sides. The skull was robustly proportioned and the muzzle was short and broad. The cheek bones (zygomata) were deep and widely arched, the sagittal crest was prominent, and the frontal region was slightly convex. The mandible had a flange on each side of the front. The upper incisors were large, sharp, and slanted forwards. There was a diastema (gap) between the incisors and molars of the mandible. The lower incisors were broad, recurved, and placed in a straight line across. The p3 premolar tooth of the mandible was present in most early specimens but lost in later specimens; it was present in only 6% of the La Brea sample. There is some dispute over whether Smilodon was sexually dimorphic. Some studies of S. fatalis fossils have found little difference between the sexes. Conversely, a 2012 study found that, while fossils of S. fatalis show less variation in size among individuals than modern Panthera, they do appear to show the same difference between the sexes in some traits.
S. gracilis was the smallest species, about the size of a jaguar. It was similar to its predecessor Megantereon of the same size, but its dentition and skull were more advanced, approaching S. fatalis. S. fatalis was intermediate in size between S. gracilis and S. populator; it was similar to a lion in linear dimensions, but was more robust and muscular, and therefore had a larger body mass. Its skull was also similar to that of Megantereon, though more massive and with larger canines. S. populator was among the largest known felids, and a particularly large S. populator skull from Uruguay indicates that this individual may have been exceptionally heavy for the species. Compared to S. fatalis, S. populator was more robust and had a more elongated and narrow skull with a straighter upper profile, higher-positioned nasal bones, a more vertical occiput, more massive metapodials, and slightly longer forelimbs relative to hindlimbs. Large fossil tracks from Argentina (for which the ichnotaxon name Smilodonichium has been proposed) have been attributed to S. populator; they are larger than tracks of the Bengal tiger, to which the footprints have been compared.
External features
Smilodon and other saber-toothed cats have been reconstructed with both plain-colored coats and with spotted patterns (which appears to be the ancestral condition for feliforms), both of which are considered possible. Studies of modern cat species have found that species that live in the open tend to have uniform coats while those that live in more vegetated habitats have more markings, with some exceptions. Some coat features, such as the manes of male lions or the stripes of the tiger, are too unusual to predict from fossils.
Traditionally, saber-toothed cats have been artistically restored with external features similar to those of extant felids, by artists such as Charles R. Knight in collaboration with various paleontologists in the early 20th century. In 1969, paleontologist G. J. Miller instead proposed that Smilodon would have looked very different from a typical cat and similar to a bulldog, with a lower lip line (to allow its mouth to open wide without tearing the facial tissues), a more retracted nose and lower-placed ears. Paleoartist Mauricio Antón and coauthors disputed this in 1998 and maintained that the facial features of Smilodon were overall not very different from those of other cats. Antón noted that modern animals like the hippopotamus are able to open their mouths extremely wide without tearing tissue due to a folded orbicularis oris muscle, and such a muscle arrangement exists in modern large felids. Antón stated that extant phylogenetic bracketing (where the features of the closest extant relatives of a fossil taxon are used as reference) is the most reliable way of restoring the life-appearance of prehistoric animals, and the cat-like Smilodon restorations by Knight are therefore still accurate. A 2022 study by Antón and colleagues concluded that the upper canines of Smilodon would have been visible when the mouth was closed, while those of Homotherium would have not, after examining fossils and extant big cats.
Paleobiology
Diet
An apex predator, Smilodon primarily hunted large mammals. Isotopes preserved in the bones of S. fatalis in the La Brea Tar Pits reveal that ruminants like bison (Bison antiquus, which was much larger than the modern American bison) and camels (Camelops) were most commonly taken by the cats there.
Smilodon fatalis may have also occasionally preyed upon Glyptotherium, based on a skull from a juvenile Glyptotherium texanum recovered from Pleistocene deposits in Arizona that bears distinctive elliptical puncture marks best matching those of Smilodon, indicating that the predator successfully bit through the glyptodont's armored cephalic shield. In addition, isotopes preserved in the tooth enamel of S. gracilis specimens from Florida show that this species fed on the peccary Platygonus and the llama-like Hemiauchenia. Stable carbon isotope measurements of S. gracilis remains in Florida varied significantly between different sites and show that the species was flexible in its feeding habits. Isotopic studies of dire wolf (Aenocyon dirus) and American lion (Panthera atrox) bones show an overlap with S. fatalis in prey, which suggests that they were competitors. More detailed isotope analysis, however, indicates that Smilodon fatalis preferred forest-dwelling prey such as tapirs, deer, and forest-dwelling bison, as opposed to the dire wolves' preference for prey inhabiting open areas such as grassland. The availability of prey in the Rancho La Brea area was likely comparable to modern East Africa.
As Smilodon migrated to South America, its diet changed: bison were absent, the horses and proboscideans were different, and native ungulates such as toxodonts and litopterns were completely unfamiliar, yet S. populator thrived as well there as its relatives in North America. Isotopic analysis for S. populator suggests that its main prey species included the camel-like litoptern ungulate Macrauchenia, the rhinoceros-like ungulate Toxodon platensis, the large armadillo relatives Pachyarmatherium and Holmesina, species of the glyptodont genus Panochthus, the llama Palaeolama, the ground sloth Catonyx, the equine Equus neogeus, and the crocodilian Caiman latirostris. This analysis of its diet also indicates that S. populator hunted in both open and forested habitats. The differences between the North and South American species may be due to the difference in prey between the two continents. Smilodon may have avoided eating bone and would have left enough food for scavengers. Coprolites assigned to S. populator recovered from Argentina preserve osteoderms from the ground sloth Mylodon and a Lama scaphoid bone. In addition to this unambiguous evidence of bone consumption, the coprolites suggest that Smilodon had a more generalist diet than previously thought. Examinations of dental microwear from La Brea further suggest that Smilodon consumed both flesh and bone. Smilodon itself may have scavenged dire wolf kills. It has been suggested that Smilodon was a pure scavenger that used its canines for display to assert dominance over carcasses, but this theory is not supported today, as no modern terrestrial mammals are pure scavengers.
Predatory behavior
The brain of Smilodon had sulcal patterns similar to those of modern cats, suggesting an increased complexity of the regions that control hearing, sight, and coordination of the limbs. Felid saber-tooths in general had relatively small eyes that were not as forward-facing as those of modern cats, which have good binocular vision to help them move in trees. Smilodon was likely an ambush predator that concealed itself in dense vegetation, as its limb proportions were similar to those of modern forest-dwelling cats, and its short tail would not have helped it balance while running. Unlike its ancestor Megantereon, which was at least partially scansorial and therefore able to climb trees, Smilodon was probably completely terrestrial due to its greater weight and lack of climbing adaptations. Tracks from Argentina, named Felipeda miramarensis in 2019, may have been produced by Smilodon. If correctly identified, the tracks indicate that the animal had fully retractable claws and plantigrade feet, lacked strong supination capabilities in its paws, had notably robust forelimbs compared to its hindlimbs, and was probably an ambush predator.
The heel bone of Smilodon was fairly long, which suggests it was a good jumper. Its well-developed flexor and extensor muscles in its forearms probably enabled it to pull down, and securely hold down, large prey. Analysis of the cross-sections of S. fatalis humeri indicated that they were strengthened by cortical thickening to such an extent that they would have been able to sustain greater loading than those of extant big cats, or of the extinct American lion: the humerus cortical wall in S. fatalis was about 15% thicker than expected for a modern big cat of similar size, while the thickening of S. fatalis femurs was within the range of extant felids. Its canines were fragile from the sides due to their flattened shape and could not have bitten into bone; because of the risk of breakage, these cats had to subdue and restrain their prey with their powerful forelimbs before using their canine teeth, and likely used quick slashing or stabbing bites rather than the slow, suffocating bites typically used by modern cats. On rare occasions, as evidenced by fossils, Smilodon was willing to risk biting into bone with its canines; this may have been directed more at competition, such as other Smilodon or rival carnivores, than at prey. The bending force applied from back to front required to break an S. fatalis upper canine has been estimated at about 7,000 newtons; by comparison, breaking the canines of lions and tigers, two predators of similar size, would require bending forces of about 8,243 and 7,440 newtons, respectively.
Debate continues as to how Smilodon killed its prey. Traditionally, the most popular theory has been that the cat delivered a deep stabbing bite or open-jawed stabbing thrust to the throat, killing the prey very quickly. Another hypothesis suggests that Smilodon targeted the belly of its prey, but this is disputed, as the curvature of the prey's belly would likely have prevented the cat from getting a good bite or stab. In regard to how Smilodon delivered its bite, the "canine shear-bite" hypothesis has been favored, in which flexion of the neck and rotation of the skull assisted in biting the prey, though this has been argued to be mechanically impossible. However, evidence from comparisons with Homotherium suggests that Smilodon was fully capable of the canine shear-bite and used it as its primary means of killing prey: Smilodon had a thick skull and relatively little trabecular bone, while Homotherium had both more trabecular bone and a more lion-like clamping bite as its primary means of attacking prey. This comparison, by Figueirido, Lautenschlager, and colleagues, published in 2018, suggests extremely different ecological adaptations in the two machairodonts. The mandibular flanges may have helped resist bending forces when the mandible was pulled against the hide of a prey animal. Experiments with a machine that recreates the teeth and simulates the jaw and neck movements of Smilodon fatalis (the "Robocat"), applied to bison and elk carcasses, have demonstrated that a stabbing bite to the throat is a much more plausible and practical killing technique than a stabbing bite to the belly.
The protruding incisors were arranged in an arch, and were used to hold the prey still and stabilize it while the canine bite was delivered. The contact surface between the canine crown and the gum was enlarged, which helped stabilize the tooth and helped the cat sense when the tooth had penetrated to its maximum extent. Since saber-toothed cats generally had a relatively large infraorbital foramen (opening) in the skull, which housed nerves associated with the whiskers, it has been suggested that the improved senses would have helped the cats' precision when biting outside their field of vision, thereby preventing breakage of the canines. The blade-like carnassial teeth were used to cut skin to access the meat, and the reduced molars suggest that they were less adapted for crushing bones than modern cats. As the food of modern cats enters the mouth through the side while cutting with the carnassials, not through the front incisors between the canines, the animals do not need to gape widely, so the canines of Smilodon would likewise not have been a hindrance when feeding. A study published in 2022 of how machairodonts fed revealed that wear patterns on the teeth of S. fatalis also suggest that it was capable of eating bone to a similar extent as lions. This and comparisons with bite marks left by the contemporary machairodont Xenosmilus suggest that Smilodon and its relatives could efficiently strip a carcass of flesh when feeding without being hindered by their long canines.
Despite being more powerfully built than other large cats, Smilodon had a weaker bite. Modern big cats have more pronounced zygomatic arches, while these were smaller in Smilodon, which restricted the thickness and therefore the power of the temporalis muscles and thus reduced Smilodon's bite force. Analysis of its narrow jaws indicates that it could produce a bite only a third as strong as that of a lion (the bite force quotient measured for the lion is 112). There seems to be a general rule that the saber-toothed cats with the largest canines had proportionally weaker bites. Analyses of canine bending strength (the ability of the canine teeth to resist bending forces without breaking) and bite forces indicate that the saber-toothed cats' teeth were stronger relative to the bite force than those of modern big cats. In addition, Smilodon's gape could have reached over 110 degrees, while that of the modern lion reaches 65 degrees. This made the gape wide enough to allow Smilodon to grasp large prey despite the long canines. A 2018 study compared the killing behavior of Smilodon fatalis and Homotherium serum, and found that the former had a strong skull with little trabecular bone for a stabbing canine-shear bite, whereas the latter had more trabecular bone and used a clamp-and-hold style more similar to lions. The two would therefore have held distinct ecological niches.
A correlation between relative cribriform plate size and the number of functional olfactory receptor genes indicates that S. fatalis had a slightly smaller repertoire than modern felids, with an estimated 600 olfactory receptor genes compared to 677 in the domestic cat. This suggests that S. fatalis relied less on olfaction in its daily activities than modern felids do.
Natural traps
Many Smilodon specimens have been excavated from asphalt seeps that acted as natural carnivore traps. Animals were accidentally trapped in the seeps and became bait for predators that came to scavenge, but these were then trapped themselves. The best-known of such traps are at La Brea in Los Angeles, which have produced over 166,000 Smilodon fatalis specimens that form the largest collection in the world. The sediments of the pits there were accumulated 40,000 to 10,000 years ago, in the Late Pleistocene. Though the trapped animals were buried quickly, predators often managed to remove limb bones from them, but they were themselves often trapped and then scavenged by other predators; 90% of the excavated bones belonged to predators.
The Talara Tar Seeps in Peru represent a similar scenario, and have also produced fossils of Smilodon. Unlike in La Brea, many of the bones were broken or show signs of weathering. This may have been because the layers were shallower, so the thrashing of trapped animals damaged the bones of previously trapped animals. Many of the carnivores at Talara were juveniles, possibly indicating that inexperienced and less fit animals had a greater chance of being trapped. Though Lund thought accumulations of Smilodon and herbivore fossils in the Lagoa Santa Caves were due to the cats using the caves as dens, these are probably the result of animals dying on the surface, and water currents subsequently dragging their bones to the floor of the cave, but some individuals may also have died after becoming lost in the caves.
Social life
Scientists debate whether Smilodon was social. One study of African predators found that social predators like lions and spotted hyenas respond more to the distress calls of prey than solitary species. Since S. fatalis fossils are common at the La Brea Tar Pits, where the animals were likely attracted by the distress calls of stuck prey, this could mean that this species was social as well. One critical study claims that this reasoning neglects other factors, such as body mass (heavier animals are more likely to get stuck than lighter ones), intelligence (some social animals, like the American lion, may have avoided the tar because they were better able to recognize the hazard), lack of visual and olfactory lures, the type of audio lure, and the length of the distress calls (the actual distress calls of the trapped prey animals would have lasted longer than the calls used in the study). The critic also ponders which predators would have responded if the recordings were played in India, where the otherwise solitary tigers are known to aggregate around a single carcass. The authors of the original study responded that though the effects of the calls in the tar pits and the playback experiments would not be identical, this would not be enough to overturn their conclusions. In addition, they stated that weight and intelligence would not likely affect the results, as lighter carnivores are far more numerous than heavy herbivores in the pits, and the social (and seemingly intelligent) dire wolf is also found there.
Another argument for sociality is based on the healed injuries in several Smilodon fossils, which would suggest that the animals needed others to provide them food. This argument has been questioned, as cats can recover quickly from even severe bone damage and an injured Smilodon could survive if it had access to water. However, a Smilodon that suffered hip dysplasia at a young age yet survived to adulthood suggests that it depended on aid from a social group, as this individual was unable to hunt or defend a territory due to the severity of its congenital condition. The brain of Smilodon was relatively small compared to other cat species. Some researchers have argued that Smilodon's brain would have been too small for it to have been a social animal. An analysis of brain size in living big cats, however, found no correlation between brain size and sociality. Another argument against Smilodon being social is that being an ambush hunter in closed habitat would likely have made group-living unnecessary, as in most modern cats. Yet it has also been proposed that, as the largest predator in an environment comparable to the savanna of Africa, Smilodon may have had a social structure similar to modern lions, which possibly live in groups primarily to defend optimal territory from other lions (lions are the only social big cats today).
Whether Smilodon was sexually dimorphic has implications for its reproductive behavior. Based on their conclusions that Smilodon fatalis had no sexual dimorphism, Van Valkenburgh and Sacco suggested in 2002 that, if the cats were social, they would likely have lived in monogamous pairs (along with offspring) with no intense competition among males for females. Likewise, Meachen-Samuels and Binder concluded in 2010 that aggression between males was less pronounced in S. fatalis than in the American lion. Christiansen and Harris found in 2012 that, as S. fatalis did exhibit some sexual dimorphism, there would have been evolutionary selection for competition between males. Some bones show evidence of having been bitten by other Smilodon, possibly the result of territorial battles, competition for breeding rights, or disputes over prey. Two S. populator skulls from Argentina show seemingly fatal, unhealed wounds which appear to have been caused by the canines of another Smilodon (though it cannot be ruled out they were caused by kicking prey). If caused by intraspecific fighting, they may also indicate social behavior that could lead to death, as seen in some modern felines (as well as indicating that the canines could penetrate bone). It has been suggested that the exaggerated canines of saber-toothed cats evolved for sexual display and competition, but a statistical study of the correlation between canine and body size in S. populator found no difference in scaling between the two, and concluded that the canines more likely evolved solely for a predatory function.
A set of three associated skeletons of S. fatalis found in Ecuador and described in 2021 by Reynolds, Seymour, and Evans suggests that there was prolonged parental care in Smilodon. The two subadult individuals uncovered share a unique inherited trait in their dentaries, suggesting they were siblings; a rare instance of familial relationships being found in the fossil record. The subadult specimens are also hypothesized to have been male and female, respectively, while the adult skeletal remains found at the site are believed to have belonged to their mother. The subadults were estimated to have been around two years of age at the time of their deaths, but were still growing. S. fatalis had proportionally larger hyoid bones than modern felid species and thus likely produced deeper vocalizations. While Smilodon had the same number of hyoid bones as the "roaring" cats, their shape was closer to that of "purring" species.
Development
Smilodon started developing its adult saber-teeth when the animal reached between 12 and 19 months of age, shortly after the completion of the eruption of the cat's baby teeth. Both baby and adult canines would be present side by side in the mouth for an approximately 11-month period, and the muscles used in making the powerful bite were developed at about one-and-a-half years old as well, eight months earlier than in a modern lion. After Smilodon reached 23 to 30 months of age, the infant teeth were shed while the adult canines grew at a steady average monthly rate over a 12-month period. They reached their full size at around 3 years of age, later than in modern species of big cats. Juvenile and adolescent Smilodon specimens are extremely rare at Rancho La Brea, where the study was performed, indicating that they remained hidden or at denning sites during hunts, and depended on parental care while their canines were developing.
A 2024 study found evidence that adolescent Smilodon kept their milk sabers for extended periods (estimated at 30 months) to help reinforce their adult canines as they grew in. The retained milk sabers acted as structural supports, allowing the young cats to begin hunting with minimized risk to their mature set of sabers: retention lessened the bending strain on the emerging adult teeth as the cat bit down, since it was discovered that erupting sabers were much more vulnerable to breakage while growing in than when mature. This would also have made Smilodon "double-fanged" during this growth stage, as corroborated by the discovery of individuals at this ontogenetic stage at Rancho La Brea.
A 2017 study indicates that juveniles were born with a robust build similar to the adults. Comparison of the bones of juvenile S. fatalis specimens from La Brea with those of the contemporaneous American lion revealed that the two cats shared a similar growth curve. Felid forelimb development during ontogeny (changes during growth) has remained tightly constrained. The curve is similar to that for modern cats such as tigers and cougars, but shifts more towards the robust direction of the axes than is seen in modern felids. Examinations by Reynolds, Seymour, and Evans (2021) suggest that Smilodon had a unique and fast growth rate similar to a tiger, but that there was a prolonged period of growth in the genus similar to what is seen in lions, and that the cubs were reliant on their parents until this growth period ended.
Paleopathology
Several Smilodon fossils show signs of ankylosing spondylitis, hyperostosis and trauma. One study of 1,000 Smilodon skulls found that 36% of them had eroded parietal bones, which is where the largest jaw muscles attach. They also showed signs of microfractures, and the weakening and thinning of bones possibly caused by mechanical stress from the constant need to make stabbing motions with the canines. Bony growths where the deltoid muscle inserted on the humerus are a common pathology among La Brea specimens, probably due to repeated strain when Smilodon attempted to pull down prey with its forelimbs. Sternum injuries are also common, probably due to collision with prey.
The frequency of trauma in S. fatalis specimens was 4.3%, compared to 2.8% in the dire wolf, which implies that the ambush predatory behavior of the former led to greater risk of injury than the pursuit predatory behavior of the latter. Smilodon remains exhibit relatively more shoulder and lumbar vertebrae injuries. A 2023 study documented a high degree of subchondral defects in limb-joint surfaces of S. fatalis and dire wolf specimens from the La Brea Tar Pits that resembled osteochondrosis dissecans. As modern dogs with this disease are often inbred, the researchers suggested that the same may have been true of the prehistoric species as they approached extinction, but cautioned that more research was needed to determine whether this also held for specimens from other parts of the Americas.
Osteomyelitis in the left fourth metacarpal bone has been reported in a S. populator specimen dating back to Marine Isotope Stage 5. This pathology resulted in the machairodont individual becoming incapable of flexing its toe and would have severely diminished its ability to hunt prey.
Distribution and habitat
Smilodon lived during the Pleistocene epoch (2.5 mya–10,000 years ago), and was perhaps the most recent of the saber-toothed cats. S. fatalis lived in a variety of habitats, being able to inhabit open grassland and parkland, marginal woodland-grassland settings, and closed forests. Fossils of the genus have been found throughout the Americas. The northernmost remains of the genus are S. fatalis fossils from Alberta, Canada, with the southernmost remains of S. populator being known from the far south of Patagonia, near the Strait of Magellan. The habitat of North America varied from subtropical forests and savannah in the south, to treeless mammoth steppes in the north. The mosaic vegetation of woods, shrubs, and grasses in southwestern North America supported large herbivores such as horses, bison, antelope, deer, camels, mammoths, mastodons, and ground sloths. North America also supported other saber-toothed cats, such as Homotherium and Xenosmilus, as well as other large carnivores including dire wolves, short-faced bear (Arctodus simus) and the American lion. Competition from such carnivores may have prevented North American S. fatalis from attaining the size of South America's S. populator. The similarity in size of S. fatalis and the American lion suggests niche overlap and direct competition between these species, and they appear to have fed on similarly sized prey.
S. gracilis entered South America during the early to middle Pleistocene, where it probably gave rise to S. populator, which lived in the eastern part of the continent. S. fatalis also entered western South America in the late Pleistocene, and the two species were thought to be divided by the Andes mountains. However, in 2018, a skull of S. fatalis found in Uruguay east of the Andes was reported, which puts the idea that the two species were allopatric (geographically separated) into question. The American interchange resulted in a mix of native and invasive species sharing the prairies and woodlands in South America; North American herbivores included proboscideans, horses, camelids and deer, South American herbivores included toxodonts, litopterns, ground sloths, and glyptodonts. Native metatherian predators (including the saber-toothed thylacosmilids, which do not appear to have competed with Smilodon) had gone extinct by the Pliocene, and were replaced by North American carnivores such as canids, bears, and large cats.
S. populator was very successful, while Homotherium never became widespread in South America. The extinction of the thylacosmilids has been attributed to competition with Smilodon, but this is probably incorrect, as they seem to have disappeared before the arrival of the large cats. The phorusrhacid "terror birds" may have dominated the large predator niche in South America until Smilodon arrived. S. populator may have been able to reach larger size than S. fatalis due to a lack of competition in Pleistocene South America; S. populator arrived after the extinction of Arctotherium angustidens, one of the largest carnivores ever, and could therefore assume the niche of mega-carnivore. S. populator preferred large prey from open habitats such as grassland and plains, based on evidence gathered from isotope ratios that determined the animal's diet. In this way, the South American Smilodon species was probably similar to the modern lion. S. populator probably competed with the canid Protocyon there, but not with the jaguar, which fed primarily on smaller prey. On the other hand, morphometry points to S. populator being best adapted for more closed environments.
Extinction
Along with most of the New World Pleistocene megafauna, Smilodon became extinct by 10,000 years ago in the late Pleistocene extinction phases of North and South America. Its extinction has been linked to the decline and extinction of large herbivores. Hence, Smilodon could have been too specialized at hunting large prey and may have been unable to adapt. Indeed, by the Bølling–Allerød warming event and before the Younger Dryas cooling event, S. fatalis showed changes in cranial morphology that hint towards increased specialization in larger prey and/or evolution in response to competition with other carnivores. However, a 2012 study of Smilodon tooth wear found no evidence that they were limited by food resources. Other explanations include climate change and competition with Homo sapiens (who entered the Americas around the time Smilodon disappeared), or a combination of several factors, all of which apply to the general Late Pleistocene extinction event, rather than specifically to the extinction of the saber-toothed cats. One factor often cited here is the cooling in the Younger Dryas, which may have drastically reduced the habitable space for many species. In terms of human influence, there is evidence of a fire-induced regime change in Rancho la Brea that preceded the extirpation of megafauna in the area, with humans most likely responsible for the increase in fire intensity.
Writers of the first half of the twentieth century theorized that the last saber-toothed cats, Smilodon and Homotherium, became extinct through competition with the faster and more generalized felids that replaced them. It was even proposed that the saber-toothed predators were inferior to modern cats, as the ever-growing canines were thought to inhibit their owners from feeding properly. Since then, however, it has been shown that the diet of machairodontines such as Smilodon and Homotherium was diverse. They do not seem to have been limited to giant animals as prey, as suggested before, but fed on whatever was available, including bovines, equines and camelids. Additionally, non-machairodontine felids such as the American lion and Miracinonyx also became extinct during the Late Pleistocene, and saber-toothed and conical toothed felids had formerly coexisted for more than a million years. The fact that saber-teeth evolved many times in unrelated lineages also attests to the success of this feature.
The youngest direct radiocarbon date for S. fatalis differs from that of S. populator by thousands of years, the former falling just before the Younger Dryas cooling event and the latter in the early Holocene. The latest S. fatalis specimen recovered from the Rancho La Brea tar pits has been dated to 13,025 years ago. A specimen of S. fatalis from Iowa dates to 13,605–13,455 years Before Present (BP). The latest Smilodon populator remains, found in the cave of Cueva del Medio, near the town of Soria, northeast Última Esperanza Province, Magallanes Region in southernmost Chile, have been dated to 10,935–11,209 years ago. The most recent credible carbon-14 date for S. fatalis has been given as 11,130 BP. However, such radiocarbon dates are likely uncalibrated, meaning that they have not been converted from radiocarbon years to calendar years; as a result, the dates appear younger than they actually are. Therefore, the S. fatalis specimen from Rancho La Brea is the youngest recorded for the species, suggesting extinction before the Younger Dryas based on its last appearance in California, as opposed to other regions where megafauna declined by the Younger Dryas.
| Biology and health sciences | Carnivora | null |
169180 | https://en.wikipedia.org/wiki/Typesetting | Typesetting | Typesetting is the composition of text for publication, display, or distribution by means of arranging physical type (or sort) in mechanical systems or glyphs in digital systems representing characters (letters and other symbols). Stored types are retrieved and ordered according to a language's orthography for visual display. Typesetting requires one or more fonts (which are widely but erroneously confused with and substituted for typefaces).
One significant effect of typesetting was that authorship of works could be identified more easily, making it difficult for copiers who had not gained permission.
Pre-digital era
Manual typesetting
During much of the letterpress era, movable type was composed by hand for each page by workers called compositors. A tray with many dividers, called a case, contained cast metal sorts, each with a single letter or symbol, but backwards (so they would print correctly). The compositor assembled these sorts into words, then lines, then pages of text, which were then bound tightly together by a frame, making up a form or page. If done correctly, all letters were of the same height, and a flat surface of type was created. The form was placed in a press and inked, and then printed (an impression made) on paper. Metal type read backwards, from right to left, and a key skill of the compositor was their ability to read this backwards text.
Before the advent of computerized (or digital) typesetting, font sizes were changed by replacing the characters with a different size of type. In letterpress printing, individual letters and punctuation marks were cast on small metal blocks, known as "sorts", and then arranged to form the text for a page. The size of the type was determined by the size of the character on the face of the sort. A compositor would need to physically swap out the sorts for a different size to change the font size.
During typesetting, individual sorts are picked from a type case with the right hand, and set from left to right into a composing stick held in the left hand, appearing to the typesetter as upside down. As seen in the photo of the composing stick, a lower case 'q' looks like a 'd', a lower case 'b' looks like a 'p', a lower case 'p' looks like a 'b' and a lower case 'd' looks like a 'q'. This is reputed to be the origin of the expression "mind your p's and q's". It might just as easily have been "mind your b's and d's".
A forgotten but important part of the process took place after the printing: after cleaning with a solvent the expensive sorts had to be redistributed into the typecase - called sorting or dissing - so they would be ready for reuse. Errors in sorting could later produce misprints if, say, a p was put into the b compartment.
The diagram at right illustrates a cast metal sort: a face, b body or shank, c point size, 1 shoulder, 2 nick, 3 groove, 4 foot. Wooden printing sorts were used for centuries in combination with metal type. Not shown, and more the concern of the casterman, is the "set", or width of each sort. Set width, like body size, is measured in points.
In order to extend the working life of type, and to account for the finite sorts in a case of type, copies of forms were cast when anticipating subsequent printings of a text, freeing the costly type for other work. This was particularly prevalent in book and newspaper work where rotary presses required type forms to wrap an impression cylinder rather than set in the bed of a press. In this process, called stereotyping, the entire form is pressed into a fine matrix such as plaster of Paris or papier mâché to create a flong, from which a positive form is cast in type metal.
Advances such as the typewriter and computer would push the state of the art even farther ahead. Still, hand composition and letterpress printing have not fallen completely out of use, and since the introduction of digital typesetting, it has seen a revival as an artisanal pursuit. However, it is a small niche within the larger typesetting market.
Hot metal typesetting
The time and effort required to manually compose the text led to several efforts in the 19th century to produce mechanical typesetting. While some, such as the Paige compositor, met with limited success, by the end of the 19th century, several methods had been devised whereby an operator working a keyboard or other devices could produce the desired text. Most of the successful systems involved the in-house casting of the type to be used, hence are termed "hot metal" typesetting. The Linotype machine, invented in 1884, used a keyboard to assemble the casting matrices, and cast an entire line of type at a time (hence its name). In the Monotype System, a keyboard was used to punch a paper tape, which was then fed to control a casting machine. The Ludlow Typograph involved hand-set matrices, but otherwise used hot metal. By the early 20th century, the various systems were nearly universal in large newspapers and publishing houses.
Phototypesetting
Phototypesetting or "cold type" systems first appeared in the early 1960s and rapidly displaced continuous casting machines. These devices consisted of glass or film disks or strips (one per font) that spun in front of a light source to selectively expose characters onto light-sensitive paper. Originally they were driven by pre-punched paper tapes. Later they were connected to computer front ends.
One of the earliest electronic photocomposition systems was introduced by Fairchild Semiconductor. The typesetter typed a line of text on a Fairchild keyboard that had no display. To verify correct content of the line it was typed a second time. If the two lines were identical a bell rang and the machine produced a punched paper tape corresponding to the text. With the completion of a block of lines the typesetter fed the corresponding paper tapes into a phototypesetting device that mechanically set type outlines printed on glass sheets into place for exposure onto a negative film. Photosensitive paper was exposed to light through the negative film, resulting in a column of black type on white paper, or a galley. The galley was then cut up and used to create a mechanical drawing or paste up of a whole page. A large film negative of the page was then shot and used to make plates for offset printing.
Digital era
The next generation of phototypesetting machines to emerge were those that generated characters on a cathode-ray tube display. Typical of the type were the Alphanumeric APS2 (1963), IBM 2680 (1967), I.I.I. VideoComp (1973?), Autologic APS5 (1975), and Linotron 202 (1978). These machines were the mainstay of phototypesetting for much of the 1970s and 1980s. Such machines could be "driven online" by a computer front-end system or took their data from magnetic tape. Type fonts were stored digitally on conventional magnetic disk drives.
Computers excel at automatically typesetting and correcting documents. Character-by-character computer-aided phototypesetting was, in turn, rapidly rendered obsolete in the 1980s by fully digital systems employing a raster image processor to render an entire page to a single high-resolution digital image, now known as imagesetting.
The first commercially successful laser imagesetter, able to make use of a raster image processor, was the Monotype Lasercomp. ECRM, Compugraphic (later purchased by Agfa) and others rapidly followed suit with machines of their own.
Early minicomputer-based typesetting software introduced in the 1970s and early 1980s, such as Datalogics Pager, Penta, Atex, Miles 33, Xyvision, troff from Bell Labs, and IBM's Script product with CRT terminals, were better able to drive these electromechanical devices, and used text markup languages to describe type and other page formatting information. The descendants of these text markup languages include SGML, XML and HTML.
The minicomputer systems output columns of text on film for paste-up and eventually produced entire pages and signatures of 4, 8, 16 or more pages using imposition software on devices such as the Israeli-made Scitex Dolev. The data stream used by these systems to drive page layout on printers and imagesetters, often proprietary or specific to a manufacturer or device, drove development of generalized printer control languages, such as Adobe Systems' PostScript and Hewlett-Packard's PCL.
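As a sketch of the level these page-description languages operate at, a minimal PostScript program selects a font, moves the current point, and paints a line of text (coordinates are in points, 72 to the inch); the text shown is, of course, just an illustrative placeholder:

```postscript
%!PS
% Use the built-in Times-Roman font at 12 point.
/Times-Roman findfont 12 scalefont setfont
% Start one inch from the left edge and ten inches up the page.
72 720 moveto
(Hello, typeset world.) show
showpage
```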
Computerized typesetting was so rare that BYTE magazine (comparing itself to "the proverbial shoemaker's children who went barefoot") did not use any computers in production until its August 1979 issue used a Compugraphics system for typesetting and page layout. The magazine did not yet accept articles on floppy disks, but hoped to do so "as matters progress". Before the 1980s, practically all typesetting for publishers and advertisers was performed by specialist typesetting companies. These companies performed keyboarding, editing and production of paper or film output, and formed a large component of the graphic arts industry. In the United States, these companies were located in rural Pennsylvania, New England or the Midwest, where labor was cheap and paper was produced nearby, but still within a few hours' travel time of the major publishing centers.
In 1985, with the new concept of WYSIWYG (for What You See Is What You Get) in text editing and word processing on personal computers, desktop publishing became available, starting with the Apple Macintosh, Aldus PageMaker (and later QuarkXPress) and PostScript, and on the PC platform with Xerox Ventura Publisher under DOS as well as PageMaker under Windows. Improvements in software and hardware, and rapidly falling costs, popularized desktop publishing and enabled very fine control of typeset results much less expensively than the dedicated minicomputer systems. At the same time, word processing systems, such as Wang, WordPerfect and Microsoft Word, revolutionized office documents. They did not, however, have the typographic ability or flexibility required for complicated book layout, graphics, mathematics, or advanced hyphenation and justification rules (H and J).
By 2000, this industry segment had shrunk because publishers were now capable of integrating typesetting and graphic design on their own in-house computers. Many found the cost of maintaining high standards of typographic design and technical skill made it more economical to outsource to freelancers and graphic design specialists.
The availability of cheap or free fonts made the conversion to do-it-yourself easier, but also opened up a gap between skilled designers and amateurs. The advent of PostScript, supplemented by the PDF file format, provided a universal method of proofing designs and layouts, readable on major computers and operating systems.
QuarkXPress had enjoyed a market share of 95% in the 1990s, but lost its dominance to Adobe InDesign from the mid-2000s onward.
SCRIPT variants
IBM created and inspired a family of typesetting languages with names that were derivatives of the word "SCRIPT". Later versions of SCRIPT included advanced features, such as automatic generation of a table of contents and index, multicolumn page layout, footnotes, boxes, automatic hyphenation and spelling verification.
NSCRIPT was a port of SCRIPT to OS and TSO from CP-67/CMS SCRIPT.
Waterloo Script was created later at the University of Waterloo (UW). One version of SCRIPT was created at MIT, and the AA/CS at UW took over project development in 1974. The program was first used at UW in 1975. In the 1970s, SCRIPT was the only practical way to word process and format documents using a computer. By the late 1980s, the SCRIPT system had been extended to incorporate various upgrades.
The initial implementation of SCRIPT at UW was documented in the May 1975 issue of the Computing Centre Newsletter, which noted some of the advantages of using SCRIPT.
The article also pointed out SCRIPT had over 100 commands to assist in formatting documents, though 8 to 10 of these commands were sufficient to complete most formatting jobs. Thus, SCRIPT had many of the capabilities computer users generally associate with contemporary word processors.
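The flavor of such control words can be suggested by a short fragment in the period-prefixed style that SCRIPT shared with other RUNOFF-descended formatters. The fragment below is an illustrative assumption rather than verbatim SCRIPT syntax:

```
.* Illustrative SCRIPT-style input (the control words here are
.* assumptions meant to convey the general style, not exact syntax).
.sp 2
.ce
This Heading Is Centered
.sp 1
.in 5
This paragraph is indented five spaces; the formatter fills
and justifies the running text automatically.
.in 0
```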
SCRIPT/VS was a SCRIPT variant developed at IBM in the 1980s.
DWScript is a version of SCRIPT for MS-DOS, named after its author, D. D. Williams, but was never released to the public and only used internally by IBM.
Script is still available from IBM as part of the Document Composition Facility for the z/OS operating system.
SGML and XML systems
The standard generalized markup language (SGML) was based upon IBM Generalized Markup Language (GML). GML was a set of macros on top of IBM Script. DSSSL is an international standard developed to provide stylesheets for SGML documents.
XML is a successor of SGML. XSL-FO is most often used to generate PDF files from XML files.
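As a sketch of the approach, a minimal XSL-FO document (typically produced from source XML by an XSLT stylesheet) declares page geometry and flows blocks of text into it; a formatter such as Apache FOP then renders the result to PDF. The page dimensions and sample text below are arbitrary choices for illustration:

```xml
<?xml version="1.0"?>
<!-- Minimal XSL-FO: one page master, one flowed block. -->
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="page"
        page-height="11in" page-width="8.5in" margin="1in">
      <fo:region-body/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <fo:page-sequence master-reference="page">
    <fo:flow flow-name="xsl-region-body">
      <fo:block font-family="serif" font-size="12pt">
        Hello, typeset world.
      </fo:block>
    </fo:flow>
  </fo:page-sequence>
</fo:root>
```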
The arrival of SGML/XML as the document model made other typesetting engines popular.
Such engines include Datalogics Pager, Penta, Miles 33's OASYS, Xyvision's XML Professional Publisher, FrameMaker, and Arbortext. XSL-FO compatible engines include Apache FOP, Antenna House Formatter, and RenderX's XEP.
These products allow users to program their SGML/XML typesetting process with the help of scripting languages.
Another such engine is YesLogic's Prince, which is based on CSS Paged Media.
Troff and successors
During the mid-1970s, Joe Ossanna, working at Bell Laboratories, wrote the troff typesetting program to drive a Wang C/A/T phototypesetter owned by the Labs; it was later enhanced by Brian Kernighan to support output to different equipment, such as laser printers. While its use has fallen off, it is still included with a number of Unix and Unix-like systems, and has been used to typeset a number of high-profile technical and computer books. Some versions, as well as a GNU work-alike called groff, are now open source.
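As a sketch of what troff input looks like, the following fragment uses only basic requests from the core language (no macro package such as ms or man is assumed):

```
.\" Plain troff input: requests begin with a dot in column one.
.ps 12
.\" set the type size to 12 points
.vs 14
.\" set the vertical line spacing (leading) to 14 points
.ce 1
.\" center the next output line
A Centered Heading
.sp
.ft I
This sentence is set in italic,
.ft R
and this one returns to roman.
```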
TeX and LaTeX
The TeX system, developed by Donald E. Knuth at the end of the 1970s, is another widespread and powerful automated typesetting system that has set high standards, especially for typesetting mathematics. LuaTeX and LuaLaTeX are variants of TeX and of LaTeX scriptable in Lua. TeX is considered fairly difficult to learn on its own, and deals more with appearance than structure. The LaTeX macro package, written by Leslie Lamport at the beginning of the 1980s, offered a simpler interface and an easier way to systematically encode the structure of a document. LaTeX markup is widely used in academic circles for published papers and books. Although standard TeX does not provide an interface of any sort, there are programs that do. These programs include Scientific Workplace and LyX, which are graphical/interactive editors; TeXmacs, while being an independent typesetting system, can also aid the preparation of TeX documents through its export capability.
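For example, a complete LaTeX source file can be as small as the following sketch; the markup declares structure (a document class, a section, a displayed formula) and leaves the visual decisions to the engine:

```latex
% Minimal LaTeX document: structural markup plus mathematics.
\documentclass{article}
\begin{document}
\section{The quadratic formula}
For $a \neq 0$, the equation $ax^2 + bx + c = 0$ has roots
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\]
\end{document}
```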
Other text formatters
GNU TeXmacs (whose name is a combination of TeX and Emacs, although it is independent from both of these programs) is a typesetting system which is at the same time a WYSIWYG word processor.
SILE borrows some algorithms from TeX and relies on other libraries such as HarfBuzz and ICU, with an extensible core engine developed in Lua.
By default, SILE's input documents can be composed in a custom LaTeX-inspired markup (SIL) or in XML. With the addition of third-party modules, composition in Markdown or Djot is also possible.
| Technology | Printing | null |
169191 | https://en.wikipedia.org/wiki/Shape | Shape | A shape is a graphical representation of an object's form or its external boundary, outline, or external surface. It is distinct from other object properties, such as color, texture, or material type.
In geometry, shape excludes information about the object's position, size, orientation and chirality.
A figure is a representation including both shape and size (as in, e.g., figure of the Earth).
A plane shape or plane figure is constrained to lie on a plane, in contrast to solid 3D shapes.
A two-dimensional shape or two-dimensional figure (also: 2D shape or 2D figure) may lie on a more general curved surface (a two-dimensional space).
Classification of simple shapes
Some simple shapes can be put into broad categories. For instance, polygons are classified according to their number of edges as triangles, quadrilaterals, pentagons, etc. Each of these is divided into smaller categories; triangles can be equilateral, isosceles, obtuse, acute, scalene, etc. while quadrilaterals can be rectangles, rhombi, trapezoids, squares, etc.
Other common shapes are points, lines, planes, and conic sections such as ellipses, circles, and parabolas.
Among the most common 3-dimensional shapes are polyhedra, which are shapes with flat faces; ellipsoids, which are egg-shaped or sphere-shaped objects; cylinders; and cones.
If an object falls into one of these categories exactly or even approximately, we can use it to describe the shape of the object. Thus, we say that the shape of a manhole cover is a disk, because it is approximately the same geometric object as an actual geometric disk.
In geometry
A geometric shape consists of the geometric information which remains when location, scale, orientation and reflection are removed from the description of a geometric object. That is, the result of moving a shape around, enlarging it, rotating it, or reflecting it in a mirror is the same shape as the original, and not a distinct shape.
Many two-dimensional geometric shapes can be defined by a set of points or vertices and lines connecting the points in a closed chain, as well as the resulting interior points. Such shapes are called polygons and include triangles, squares, and pentagons. Other shapes may be bounded by curves such as the circle or the ellipse.
Many three-dimensional geometric shapes can be defined by a set of vertices, lines connecting the vertices, and two-dimensional faces enclosed by those lines, as well as the resulting interior points. Such shapes are called polyhedrons and include cubes as well as pyramids such as tetrahedrons. Other three-dimensional shapes may be bounded by curved surfaces, such as the ellipsoid and the sphere.
A shape is said to be convex if all of the points on a line segment between any two of its points are also part of the shape.
Properties
There are multiple ways to compare the shapes of two objects:
Congruence: Two objects are congruent if one can be transformed into the other by a sequence of rotations, translations, and/or reflections.
Similarity: Two objects are similar if one can be transformed into the other by a uniform scaling, together with a sequence of rotations, translations, and/or reflections.
Isotopy: Two objects are isotopic if one can be transformed into the other by a sequence of deformations that do not tear the object or put holes in it.
Sometimes, two similar or congruent objects may be regarded as having a different shape if a reflection is required to transform one into the other. For instance, the letters "b" and "d" are a reflection of each other, and hence they are congruent and similar, but in some contexts they are not regarded as having the same shape. Sometimes, only the outline or external boundary of the object is considered to determine its shape. For instance, a hollow sphere may be considered to have the same shape as a solid sphere. Procrustes analysis is used in many sciences to determine whether or not two objects have the same shape, or to measure the difference between two shapes. In advanced mathematics, quasi-isometry can be used as a criterion to state that two shapes are approximately the same.
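The Procrustes idea can be sketched in a few lines of code. The following Python fragment is a minimal illustration using NumPy (the function name and test shapes are choices made here, not a standard API): it removes location, scale, and rotation from two landmark sets and measures the residual difference.

```python
# Ordinary Procrustes sketch: superimpose two point sets of matching
# landmarks, then measure the shape difference that remains.
import numpy as np

def procrustes_distance(A, B):
    A = A - A.mean(axis=0)          # remove location
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A)       # remove scale
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                      # best orthogonal alignment (may reflect)
    return np.linalg.norm(A @ R - B)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
bigger_rotated = 3.0 * square @ np.array([[0, -1], [1, 0]]) + 5.0
print(procrustes_distance(square, bigger_rotated))  # ~0: same shape
```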
Simple shapes can often be classified into basic geometric objects such as a line, a curve, a plane, a plane figure (e.g. square or circle), or a solid figure (e.g. cube or sphere). However, most shapes occurring in the physical world are complex. Some, such as plant structures and coastlines, may be so complicated as to defy traditional mathematical description – in which case they may be analyzed by differential geometry, or as fractals.
Some common shapes include: Circle, Square, Triangle, Rectangle, Oval, Star (polygon), Rhombus, Semicircle.
Regular polygons, starting with the pentagon, follow the naming convention of a Greek-derived numerical prefix with the suffix '-gon': pentagon, hexagon, heptagon, octagon, nonagon, decagon, and so on; see polygon.
Equivalence of shapes
In geometry, two subsets of a Euclidean space have the same shape if one can be transformed to the other by a combination of translations, rotations (together also called rigid transformations), and uniform scalings. In other words, the shape of a set of points is all the geometrical information that is invariant to translations, rotations, and size changes. Having the same shape is an equivalence relation, and accordingly a precise mathematical definition of the notion of shape can be given as being an equivalence class of subsets of a Euclidean space having the same shape.
Mathematician and statistician David George Kendall writes:
In this paper ‘shape’ is used in the vulgar sense, and means what one would normally expect it to mean. [...] We here define ‘shape’ informally as ‘all the geometrical information that remains when location, scale and rotational effects are filtered out from an object.’
Shapes of physical objects are equal if the subsets of space these objects occupy satisfy the definition above. In particular, the shape does not depend on the size and placement in space of the object. For instance, a "d" and a "p" have the same shape, as they can be perfectly superimposed if the "d" is translated to the right by a given distance, rotated upside down and magnified by a given factor (see Procrustes superimposition for details). However, a mirror image could be called a different shape. For instance, a "b" and a "p" have a different shape, at least when they are constrained to move within a two-dimensional space like the page on which they are written. Even though they have the same size, there's no way to perfectly superimpose them by translating and rotating them along the page. Similarly, within a three-dimensional space, a right hand and a left hand have a different shape, even if they are the mirror images of each other. Shapes may change if the object is scaled non-uniformly. For example, a sphere becomes an ellipsoid when scaled differently in the vertical and horizontal directions. In other words, preserving axes of symmetry (if they exist) is important for preserving shapes. Also, shape is determined by only the outer boundary of an object.
Congruence and similarity
Objects that can be transformed into each other by rigid transformations and mirroring (but not scaling) are congruent. An object is therefore congruent to its mirror image (even if it is not symmetric), but not to a scaled version. Two congruent objects always have either the same shape or mirror image shapes, and have the same size.
Objects that have the same shape or mirror image shapes are called geometrically similar, whether or not they have the same size. Thus, objects that can be transformed into each other by rigid transformations, mirroring, and uniform scaling are similar. Similarity is preserved when one of the objects is uniformly scaled, while congruence is not. Thus, congruent objects are always geometrically similar, but similar objects may not be congruent, as they may have different size.
Homeomorphism
A more flexible definition of shape takes into consideration the fact that realistic shapes are often deformable, e.g. a person in different postures, a tree bending in the wind or a hand with different finger positions.
One way of modeling non-rigid movements is by homeomorphisms. Roughly speaking, a homeomorphism is a continuous stretching and bending of an object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a donut are not. An often-repeated mathematical joke is that topologists cannot tell their coffee cup from their donut, since a sufficiently pliable donut could be reshaped to the form of a coffee cup by creating a dimple and progressively enlarging it, while preserving the donut hole in a cup's handle.
A shape described in this way has a visible outline made up of lines or curves. Plotting coordinates on a coordinate graph does not by itself produce a shape: the points must be joined by lines to form an outline and boundary, rather than remaining isolated dots on the page.
Shape analysis
The above-mentioned mathematical definitions of rigid and non-rigid shape have arisen in the field of statistical shape analysis. In particular, Procrustes analysis is a technique used for comparing shapes of similar objects (e.g. bones of different animals), or measuring the deformation of a deformable object. Other methods are designed to work with non-rigid (bendable) objects, e.g. for posture independent shape retrieval (see for example Spectral shape analysis).
Similarity classes
All similar triangles have the same shape. These shapes can be classified using complex numbers u, v, w for the vertices, in a method advanced by J.A. Lester and Rafael Artzy. For example, an equilateral triangle can be expressed by the complex numbers 0, 1, (1 + i√3)/2 representing its vertices. Lester and Artzy call the ratio

S(u, v, w) = (u − w) / (u − v)

the shape of triangle (u, v, w). Then the shape of the equilateral triangle is

S(0, 1, (1 + i√3)/2) = (0 − (1 + i√3)/2) / (0 − 1) = (1 + i√3)/2 = cos 60° + i sin 60° = exp(iπ/3).
For any affine transformation of the complex plane, z ↦ az + b with a ≠ 0, a triangle is transformed but does not change its shape. Hence shape is an invariant of affine geometry.
The shape p = S(u, v, w) depends on the order of the arguments of the function S, but permutations lead to related values. For instance,

1/p = S(u, w, v).

Also

1 − p = S(v, u, w).

Combining these permutations gives S(w, u, v) = (p − 1)/p. Furthermore,

S(w, v, u) = p/(p − 1) and S(v, w, u) = 1/(1 − p).

These relations are "conversion rules" for the shape of a triangle.
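These rules can be verified numerically. The short Python sketch below (the function S simply transcribes the definition above; the tolerance is an arbitrary choice) checks them for the equilateral triangle used earlier:

```python
# Verify the triangle "shape" function and its permutation rules,
# using Python's built-in complex arithmetic.
def S(a, b, c):
    """Shape of the triangle (a, b, c): (a - c) / (a - b)."""
    return (a - c) / (a - b)

u, v, w = 0, 1, (1 + 1j * 3 ** 0.5) / 2      # equilateral triangle vertices
p = S(u, v, w)                               # equals exp(i*pi/3)

assert abs(S(u, w, v) - 1 / p) < 1e-12       # S(u,w,v) = 1/p
assert abs(S(v, u, w) - (1 - p)) < 1e-12     # S(v,u,w) = 1 - p
assert abs(S(w, u, v) - (p - 1) / p) < 1e-12 # S(w,u,v) = (p-1)/p
assert abs(S(w, v, u) - p / (p - 1)) < 1e-12 # S(w,v,u) = p/(p-1)
assert abs(S(v, w, u) - 1 / (1 - p)) < 1e-12 # S(v,w,u) = 1/(1-p)
print("all conversion rules hold for p =", p)
```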
The shape of a quadrilateral is associated with two complex numbers p and q. If the quadrilateral has vertices u, v, w, x, then p = S(u, v, w) and q = S(u, w, x). Artzy proves these propositions about quadrilateral shapes:

If p = 1/(1 − q), then the quadrilateral is a parallelogram.

If a parallelogram has |arg p| = |arg q|, then it is a rhombus.

When p = 1 + i and q = (1 + i)/2, the quadrilateral is a square.

A further relation between p and q characterizes the trapezoid.
A polygon with vertices z_1, z_2, ..., z_n has a shape defined by the n − 2 complex numbers S(z_j, z_{j+1}, z_{j+2}) for j = 1, ..., n − 2. The polygon bounds a convex set when all these shape components have imaginary parts of the same sign.
Human perception of shapes
Human vision relies on a wide range of shape representations. Some psychologists have theorized that humans mentally break down images into simple geometric shapes (e.g., cones and spheres) called geons. Meanwhile, others have suggested shapes are decomposed into features or dimensions that describe the way shapes tend to vary, like their segmentability, compactness and spikiness. When comparing shape similarity, however, at least 22 independent dimensions are needed to account for the way natural shapes vary.
There is also clear evidence that shapes guide human attention.
| Mathematics | Geometry | null |
169193 | https://en.wikipedia.org/wiki/Twig | Twig | A twig is a thin, often short, branch of a tree or bush.
The buds on the twig are an important diagnostic characteristic, as are the abscission scars where the leaves have fallen away. The color, texture, and patterning of the twig bark are also important, in addition to the thickness and nature of any pith of the twig.
There are two types of twigs: vegetative twigs and fruiting spurs. Fruiting spurs are specialized twigs that generally branch off the sides of branches and are stubby and slow-growing, with many annular ring markings from seasons past. The twig's age and rate of growth can be determined by counting the winter terminal bud scale scars, or annular ring markings, across the diameter of the twig.
Uses
Twigs can be useful in starting a fire. They can be used as kindling wood, bridging the gap between highly flammable tinder (dry grass and leaves) and firewood. This is because they are rich in flammable carbon compounds, built up from the carbon dioxide captured during photosynthesis.
Twigs are a feature of tool use by non-humans. For example, chimpanzees have been observed using twigs to go "fishing" for termites, and elephants have been reported using twigs to scratch parts of their ears and mouths which could not be reached by rubbing against a tree.
| Biology and health sciences | Plant stem | Biology |
169197 | https://en.wikipedia.org/wiki/Acrux | Acrux | Acrux is the brightest star in the southern constellation of Crux. It has the Bayer designation α Crucis, which is Latinised to Alpha Crucis and abbreviated Alpha Cru or α Cru. With a combined visual magnitude of +0.76, it is the 13th-brightest star in the night sky. It is the most southerly star of the asterism known as the Southern Cross and is the southernmost first-magnitude star, 2.3 degrees more southerly than Alpha Centauri. This system is located at a distance of 321 light-years from the Sun.
To the naked eye Acrux appears as a single star, but it is actually a multiple star system containing six components. Through optical telescopes, Acrux appears as a triple star, whose two brightest components are visually separated by about 4 arcseconds and are known as Acrux A and Acrux B, α1 Crucis and α2 Crucis, or α Crucis A and α Crucis B. Both components are B-type stars, and are many times more massive and luminous than the Sun. This system was the second ever to be recognized as a binary, in 1685 by a Jesuit priest. α1 Crucis is itself a spectroscopic binary with components designated α Crucis Aa (officially named Acrux, historically the name of the entire system) and α Crucis Ab. Its two component stars orbit every 76 days at a separation of about 1 astronomical unit (AU). HR 4729, also known as Acrux C, is a more distant companion, forming a triple star through small telescopes. C is also a spectroscopic binary, which brings the total number of stars in the system to at least five.
Nomenclature
α Crucis (Latinised to Alpha Crucis) is the system's Bayer designation; α1 and α2 Crucis are those of its two main component stars. The designations of these two constituents as Acrux A and Acrux B and those of A's components—Acrux Aa and Acrux Ab—derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The historical name Acrux for α1 Crucis is an "Americanism" coined in the 19th century, but entering common use only by the mid 20th century. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN states that in the case of multiple stars the name should be understood to be attributed to the brightest component by visual brightness. The WGSN approved the name Acrux for the star Acrux Aa on 20 July 2016 and it is now so entered in the IAU Catalog of Star Names.
Since Acrux is at −63° declination, making it the southernmost first-magnitude star, it is only visible south of latitude 27° North (a star rises above the horizon only for observers at latitudes within 90° of its declination, and 90° − 63° = 27°). It barely rises from cities such as Miami, United States, or Karachi, Pakistan (both around 25°N) and not at all from New Orleans, United States, or Cairo, Egypt (both about 30°N). Because of Earth's axial precession, the star was visible to ancient Hindu astronomers in India who named it Tri-shanku. It was also visible to the ancient Romans and Greeks, who regarded it as part of the constellation of Centaurus.
In Chinese, 十字架 (Shí Zì Jià, "Cross") refers to an asterism consisting of Acrux, Mimosa, Gamma Crucis and Delta Crucis. Consequently, Acrux itself is known as 十字架二 (Shí Zì Jià èr, "the Second Star of Cross").
This star is known as Estrela de Magalhães ("Star of Magellan") in Portuguese.
Stellar properties
The two components, α1 and α2 Crucis, are separated by 4 arcseconds. α1 is magnitude 1.40 and α2 is magnitude 2.09; both are early B-type stars, and the hotter component, α1, has a surface temperature of about 28,000 K. Their luminosities are 25,000 and 16,000 times that of the Sun. α1 and α2 orbit over such a long period that motion is only barely seen. From their minimum separation of 430 astronomical units, the period is estimated to be around 1,500 years.
α1 is itself a spectroscopic binary star, with its components thought to be around 14 and 10 times the mass of the Sun and orbiting in only 76 days at a separation of about 1 AU. The masses of α2 and the brighter component of α1 suggest that the stars will someday expand into blue and red supergiants (similar to Betelgeuse and Antares) before exploding as supernovae. Component Ab may undergo electron capture in its degenerate O+Ne+Mg core and trigger a supernova explosion; otherwise, it will become a massive white dwarf.
Photometry with the TESS satellite has shown that one of the stars in the α Crucis system is a β Cephei variable, although α1 and α2 Crucis are too close for TESS to resolve and determine which one is the pulsator.
Rizzuto and colleagues determined in 2011 that the α Crucis system was 66% likely to be a member of the Lower Centaurus–Crux sub-group of the Scorpius–Centaurus association. It was not previously seen to be a member of the group. A bow shock is present around α Crucis, and is visible in the infrared spectrum, but is not aligned with α Crucis; the bow shock likely formed from large-scale motions in the interstellar matter.
The cooler, less-luminous B-class star HR 4729 (HD 108250) lies 90 arcseconds away from the triple star system α Crucis and shares its motion through space, suggesting it may be gravitationally bound to it, and it is therefore generally assumed to be physically associated. It is itself a spectroscopic binary system, sometimes catalogued as component C (Acrux C) of the Acrux multiple system. Another, fainter visual companion is listed as component D (Acrux D). A further seven faint stars are also listed as companions out to a distance of about two arc-minutes.
On 2 October 2008, the Cassini–Huygens spacecraft resolved three of the components (A, B and C) of the multiple star system as Saturn's disk occulted it.
In culture
Acrux is represented in the flags of Australia, New Zealand, Samoa, and Papua New Guinea as one of five stars that compose the Southern Cross. It is also featured in the flag of Brazil, along with 26 other stars, each of which represents a state; Acrux represents the state of São Paulo. As of 2015, it is also represented on the cover of the Brazilian passport.
The Brazilian oceanographic research vessel Alpha Crucis is named after the star.
| Physical sciences | Notable stars | Astronomy |
169208 | https://en.wikipedia.org/wiki/Marsh | Marsh | In ecology, a marsh is a wetland that is dominated by herbaceous plants rather than by woody plants. More generally, the word can be used for any low-lying and seasonally waterlogged terrain. In Europe and in agricultural literature, low-lying meadows that require draining and embanked polderlands are also referred to as marshes or marshland.
Marshes can often be found at the edges of lakes and streams, where they form a transition between the aquatic and terrestrial ecosystems. They are often dominated by grasses, rushes or reeds. If woody plants are present they tend to be low-growing shrubs, and the marsh is sometimes called a carr. This form of vegetation is what differentiates marshes from other types of wetland such as swamps, which are dominated by trees, and mires, which are wetlands that have accumulated deposits of acidic peat.
Marshes provide habitats for many kinds of invertebrates, fish, amphibians, waterfowl and aquatic mammals. This biological productivity means that marshes contain 0.1% of global sequestered terrestrial carbon. Moreover, they have an outsized influence on climate resilience of coastal areas and waterways, absorbing high tides and other water changes due to extreme weather. Though some marshes are expected to migrate upland, most natural marshlands will be threatened by sea level rise and associated erosion.
Basic information
Marshes provide a habitat for many species of plants, animals, and insects that have adapted to living in flooded conditions or other environments. The plants must be able to survive in wet mud with low oxygen levels. Many of these plants, therefore, have aerenchyma, channels within the stem that allow air to move from the leaves into the rooting zone. Marsh plants also tend to have rhizomes for underground storage and reproduction. Common examples include cattails, sedges, papyrus and sawgrass. Aquatic animals, from fish to salamanders, are generally able to live with a low amount of oxygen in the water. Some can obtain oxygen from the air instead, while others can live indefinitely in conditions of low oxygen. The pH in marshes tends to be neutral to alkaline, as opposed to bogs, where peat accumulates under more acid conditions.
Values and ecosystem services
Marshes provide habitats for many kinds of invertebrates, fish, amphibians, waterfowl and aquatic mammals. Marshes have extremely high levels of biological production, some of the highest in the world, and therefore are important in supporting fisheries.
Marshes also improve water quality by acting as a sink to filter pollutants and sediment from the water that flows through them. Marshes partake in water purification by providing nutrient and pollution consumption. Marshes (and other wetlands) are able to absorb water during periods of heavy rainfall and slowly release it into waterways and therefore reduce the magnitude of flooding. Marshes also provide the services of tourism, recreation, education, and research.
Types of marshes
Marshes differ depending mainly on their location and salinity. These factors greatly influence the range and scope of animal and plant life that can survive and reproduce in these environments. The three main types of marsh are salt marshes, freshwater tidal marshes, and freshwater marshes. These three can be found worldwide, and each contains a different set of organisms.
Salt marshes
Saltwater marshes are found around the world in mid to high latitudes, wherever there are sections of protected coastline. They are located close enough to the shoreline that the motion of the tides affects them, and, sporadically, they are covered with water. They flourish where the rate of sediment buildup is greater than the rate at which the land level is sinking. Salt marshes are dominated by specially adapted rooted vegetation, primarily salt-tolerant grasses.
Salt marshes are most commonly found in lagoons, estuaries, and on the sheltered side of a shingle or sandspit. The currents there carry the fine particles around to the quiet side of the spit, and sediment begins to build up. These locations allow the marshes to absorb the excess nutrients from the water running through them before they reach the oceans and estuaries. These marshes are slowly declining. Coastal development and urban sprawl have caused significant loss of these essential habitats.
Freshwater tidal marshes
Although they are considered freshwater marshes, ocean tides affect this form of marsh. However, without the stresses of salinity at work in its saltwater counterpart, the diversity of the plants and animals that live in and use freshwater tidal marshes is much higher than in salt marshes. The most severe threats to this form of marsh are the increasing size and pollution of the cities surrounding them.
Freshwater marshes
Ranging greatly in size and geographic location, freshwater marshes make up North America's most common form of wetland. They are also the most diverse of the three types of marsh. Some examples of freshwater marsh types in North America are:
Wet meadows
Wet meadows occur in shallow lake basins, low-lying depressions, and the land between shallow marshes and upland areas. They also occur on the edges of large lakes and rivers. Wet meadows often have very high plant diversity and high densities of buried seeds. They are regularly flooded but are often dry in the summer.
Vernal pools
Vernal pools are a type of marsh found only seasonally in shallow depressions in the land. They can be covered in shallow water, but in the summer and fall, they can be completely dry. In western North America, vernal pools tend to form in open grasslands, whereas in the east, they often occur in forested landscapes. Further south, vernal pools form in pine savannas and flatwoods. Many amphibian species depend upon vernal pools for spring breeding; these ponds provide a habitat free from fish, which eat the eggs and young of amphibians. An example is the endangered gopher frog. Similar temporary ponds occur in other world ecosystems, where they may have local names. However, the term vernal pool can be applied to all such temporary pool ecosystems.
Playa lakes
Playa lakes are a form of shallow freshwater marsh in the southern high plains of the United States. Like vernal pools, they are only present at certain times of the year and generally have a circular shape. As the playa dries during the summer, conspicuous plant zonation develops along the shoreline.
Prairie potholes
Prairie potholes are found in northern North America, such as the Prairie Pothole Region. Glaciers once covered these landscapes, and as a result, shallow depressions were formed in great numbers. These depressions fill with water in the spring. They provide important breeding habitats for many species of waterfowl. Some pools only occur seasonally, while others retain enough water to be present all year.
Riverine wetlands
Many kinds of marsh occur along the fringes of large rivers. The different types are produced by factors such as water level, nutrients, ice scour, and waves.
Embanked marshlands
Large tracts of tidal marsh have been embanked and artificially drained. They are usually known by the Dutch name of polders. In Northern Germany and Scandinavia they are called Marschland, Marsch or marsk; in France marais maritime. In the Netherlands and Belgium, they are designated as marine clay districts. In East Anglia, a region in the East of England, the embanked marshes are also known as Fens.
Restoration
Some areas have already lost 90% of their wetlands, including marshes. They have been drained to create agricultural land or filled to accommodate urban sprawl. Restoration is returning marshes to the landscape to replace those lost in the past. Restoration can be done on a large scale, such as by allowing rivers to flood naturally in the spring, or on a small scale by returning wetlands to urban landscapes.
| Physical sciences | Wetlands | null |
169262 | https://en.wikipedia.org/wiki/Intuitionistic%20logic | Intuitionistic logic | Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.
Formalized intuitionistic logic was originally developed by Arend Heyting to provide a formal basis for L. E. J. Brouwer's programme of intuitionism. From a proof-theoretic perspective, Heyting's calculus is a restriction of classical logic in which the law of excluded middle and double negation elimination have been removed. Excluded middle and double negation elimination can still be proved for some propositions on a case-by-case basis, but they do not hold universally as they do with classical logic. The standard explanation of intuitionistic logic is the BHK interpretation.
Several systems of semantics for intuitionistic logic have been studied. One of these semantics mirrors classical Boolean-valued semantics but uses Heyting algebras in place of Boolean algebras. Another semantics uses Kripke models. These, however, are technical means for studying Heyting's deductive system rather than formalizations of Brouwer's original informal semantic intuitions. Semantical systems that do claim to capture such intuitions, by offering meaningful concepts of "constructive truth" (rather than merely validity or provability), include Kurt Gödel's dialectica interpretation, Stephen Cole Kleene's realizability, Yurii Medvedev's logic of finite problems, and Giorgi Japaridze's computability logic. Yet such semantics persistently induce logics properly stronger than Heyting's logic. Some authors have argued that this might be an indication of inadequacy of Heyting's calculus itself, deeming the latter incomplete as a constructive logic.
Mathematical constructivism
In the semantics of classical logic, propositional formulae are assigned truth values from the two-element set ("true" and "false" respectively), regardless of whether we have direct evidence for either case. This is referred to as the 'law of excluded middle', because it excludes the possibility of any truth value besides 'true' or 'false'. In contrast, propositional formulae in intuitionistic logic are not assigned a definite truth value and are only considered "true" when we have direct evidence, hence proof. We can also say, instead of the propositional formula being "true" due to direct evidence, that it is inhabited by a proof in the Curry–Howard sense. Operations in intuitionistic logic therefore preserve justification, with respect to evidence and provability, rather than truth-valuation.
Intuitionistic logic is a commonly-used tool in developing approaches to constructivism in mathematics. The use of constructivist logics in general has been a controversial topic among mathematicians and philosophers (see, for example, the Brouwer–Hilbert controversy). A common objection to their use is the above-cited lack of two central rules of classical logic, the law of excluded middle and double negation elimination. David Hilbert considered them to be so important to the practice of mathematics that he wrote: "Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists."
Intuitionistic logic has found practical use in mathematics despite the challenges presented by the inability to utilize these rules. One reason for this is that its restrictions produce proofs that have the disjunction and existence properties, making it also suitable for other forms of mathematical constructivism. Informally, this means that if there is a constructive proof that an object exists, that constructive proof may be used as an algorithm for generating an example of that object, a principle known as the Curry–Howard correspondence between proofs and algorithms. One reason that this particular aspect of intuitionistic logic is so valuable is that it enables practitioners to utilize a wide range of computerized tools, known as proof assistants. These tools assist their users in the generation and verification of large-scale proofs, whose size usually precludes the usual human-based checking that goes into publishing and reviewing a mathematical proof. As such, the use of proof assistants (such as Agda or Coq) is enabling modern mathematicians and logicians to develop and prove extremely complex systems, beyond those that are feasible to create and check solely by hand. One example of a proof that was impossible to satisfactorily verify without formal verification is the famous proof of the four color theorem. This theorem stumped mathematicians for more than a hundred years, until a proof was developed that ruled out large classes of possible counterexamples, yet still left open enough possibilities that a computer program was needed to finish the proof. That proof was controversial for some time, but, later, it was verified using Coq.
Syntax
The syntax of formulas of intuitionistic logic is similar to propositional logic or first-order logic. However, intuitionistic connectives are not definable in terms of each other in the same way as in classical logic, hence their choice matters. In intuitionistic propositional logic (IPL) it is customary to use →, ∧, ∨, ⊥ as the basic connectives, treating ¬A as an abbreviation for A → ⊥. In intuitionistic first-order logic both quantifiers ∃, ∀ are needed.
Hilbert-style calculus
Intuitionistic logic can be defined using the following Hilbert-style calculus. This is similar to a way of axiomatizing classical propositional logic.
In propositional logic, the inference rule is modus ponens
MP: from φ and φ → χ infer χ
and the axioms are
THEN-1: φ → (χ → φ)
THEN-2: (φ → (χ → ψ)) → ((φ → χ) → (φ → ψ))
AND-1: (φ ∧ χ) → φ
AND-2: (φ ∧ χ) → χ
AND-3: φ → (χ → (φ ∧ χ))
OR-1: φ → (φ ∨ χ)
OR-2: χ → (φ ∨ χ)
OR-3: (φ → ψ) → ((χ → ψ) → ((φ ∨ χ) → ψ))
FALSE: ⊥ → φ
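Under the Curry–Howard reading mentioned above, each of these axioms is realized by a simple λ-term. The following is an illustrative sketch in Lean 4 (the propositions p, q, r are arbitrary placeholders), where each sample axiom is proved without invoking any classical principle:

```lean
-- THEN-1: a proposition holds under any additional hypothesis
example (p q : Prop) : p → (q → p) := fun hp _ => hp

-- THEN-2: distribution of implication over implication
example (p q r : Prop) : (p → (q → r)) → ((p → q) → (p → r)) :=
  fun h hpq hp => h hp (hpq hp)

-- FALSE: ex falso quodlibet
example (p : Prop) : False → p := fun h => h.elim
```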
To make this a system of first-order predicate logic, the generalization rules
∀-GEN: from ψ → φ infer ψ → (∀x φ), if x is not free in ψ
∃-GEN: from φ → ψ infer (∃x φ) → ψ, if x is not free in ψ
are added, along with the axioms
PRED-1: (∀x φ(x)) → φ(t), if the term t is free for substitution for the variable x in φ (i.e., if no occurrence of any variable in t becomes bound in φ(t))
PRED-2: φ(t) → (∃x φ(x)), with the same restriction as for PRED-1
Negation
If one wishes to include a connective ¬ for negation rather than consider it an abbreviation for φ → ⊥, it is enough to add:
NOT-1': (φ → ⊥) → ¬φ
NOT-2': ¬φ → (φ → ⊥)
There are a number of alternatives available if one wishes to omit the connective (false). For example, one may replace the three axioms FALSE, NOT-1', and NOT-2' with the two axioms
NOT-1: (φ → χ) → ((φ → ¬χ) → ¬φ)
NOT-2: φ → (¬φ → χ)
as in axiomatizations of classical propositional calculus. Alternatives to NOT-1 are (φ → ¬χ) → (χ → ¬φ) or (φ → ¬φ) → ¬φ.
Equivalence
The connective ↔ for equivalence may be treated as an abbreviation, with φ ↔ χ standing for (φ → χ) ∧ (χ → φ). Alternatively, one may add the axioms
IFF-1: (φ ↔ χ) → (φ → χ)
IFF-2: (φ ↔ χ) → (χ → φ)
IFF-3: (φ → χ) → ((χ → φ) → (φ ↔ χ))
IFF-1 and IFF-2 can, if desired, be combined into a single axiom using conjunction.
Sequent calculus
Gerhard Gentzen discovered that a simple restriction of his system LK (his sequent calculus for classical logic) results in a system that is sound and complete with respect to intuitionistic logic. He called this system LJ. In LK any number of formulas is allowed to appear on the conclusion side of a sequent; in contrast LJ allows at most one formula in this position.
Other derivatives of LK are limited to intuitionistic derivations but still allow multiple conclusions in a sequent. LJ' is one example.
Theorems
The theorems of the pure logic are the statements provable from the axioms and inference rules. For example, using THEN-1 in THEN-2 reduces it to (φ → (χ → ψ)) → (χ → (φ → ψ)). A formal proof of the latter using the Hilbert system is given on that page. With ⊥ for ψ, this in turn implies (φ → ¬χ) → (χ → ¬φ). In words: "If φ being the case implies that χ is absurd, then if χ does hold, one has that φ is not the case." Due to the symmetry of the statement, one in fact obtains (φ → ¬χ) ↔ (χ → ¬φ).
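As an illustrative sketch in Lean 4 (p and q are arbitrary propositions), the swapped implication above has a direct constructive proof:

```lean
-- (p → ¬q) → (q → ¬p), provable with no classical axiom
example (p q : Prop) : (p → ¬q) → (q → ¬p) :=
  fun h hq hp => h hp hq
```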
When explaining the theorems of intuitionistic logic in terms of classical logic, it can be understood as a weakening thereof: it is more conservative in what it allows a reasoner to infer, while not permitting any new inferences that could not be made under classical logic. Each theorem of intuitionistic logic is a theorem in classical logic, but not conversely. Many tautologies in classical logic are not theorems in intuitionistic logic; in particular, as said above, one of intuitionistic logic's chief aims is to not affirm the law of the excluded middle so as to vitiate the use of non-constructive proof by contradiction, which can be used to furnish existence claims without providing explicit examples of the objects that it proves exist.
Double negations
A double negation does not affirm the law of the excluded middle (PEM); while it is not necessarily the case that PEM is upheld in any context, no counterexample can be given either. Such a counterexample would be an inference (inferring the negation of the law for a certain proposition) disallowed under classical logic and thus PEM is not allowed in a strict weakening like intuitionistic logic. Formally, it is a simple theorem that ((ψ ∨ (ψ → φ)) → φ) → φ for any two propositions. By considering any φ established to be false, this indeed shows that the double negation of the law, ¬¬(ψ ∨ ¬ψ), is retained as a tautology already in minimal logic. This means any ¬(ψ ∨ ¬ψ) is established to be inconsistent, and the propositional calculus is in turn always compatible with classical logic.
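For instance, the double negation of excluded middle admits the following short constructive proof, sketched here in Lean 4 for illustration:

```lean
-- ¬¬(p ∨ ¬p) holds intuitionistically
example (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr fun hp => h (Or.inl hp))
```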
When the law of excluded middle is assumed to imply a proposition, then by applying contraposition twice and using the double-negated excluded middle, one may prove double-negated variants of various strictly classical tautologies. The situation is more intricate for predicate logic formulas, when some quantified expressions are being negated.
Double negation and implication
Akin to the above, from modus ponens in the form (φ ∧ (φ → χ)) → χ follows (φ ∧ ¬χ) → ¬(φ → χ). The relation between them may always be used to obtain new formulas: a weakened premise makes for a strong implication, and vice versa. For example, note that if ¬¬φ → χ holds, then so does φ → χ, but the schema in the other direction would imply the double-negation elimination principle. Propositions for which double-negation elimination is possible are also called stable. Intuitionistic logic proves stability only for restricted types of propositions. A formula for which excluded middle holds can be proven stable using the disjunctive syllogism, which is discussed more thoroughly below. The converse does however not hold in general, unless the excluded middle statement at hand is stable itself.
An implication φ → ¬χ can be proven to be equivalent to ¬¬φ → ¬χ, whatever the propositions. As a special case, it follows that propositions of negated form (φ = ¬ψ here) are stable, i.e. ¬¬¬ψ → ¬ψ is always valid.
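The stability of negated propositions can likewise be checked constructively; the following Lean 4 snippet is an illustrative sketch under the same conventions as above:

```lean
-- negated propositions are stable: ¬¬¬p ↔ ¬p
example (p : Prop) : ¬¬¬p ↔ ¬p :=
  ⟨fun h hp => h fun hnp => hnp hp, fun hnp hnnp => hnnp hnp⟩
```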
In general, φ ∨ χ is stronger than ¬φ → χ, which is stronger than ¬¬(φ ∨ χ), which itself implies the three equivalent statements ¬(¬φ ∧ ¬χ), ¬φ → ¬¬χ and ¬χ → ¬¬φ. Using the disjunctive syllogism, the previous four are indeed equivalent. This also gives an intuitionistically valid derivation of ¬¬(φ ∨ ¬φ), as it is thus equivalent to an identity.
When φ expresses a claim, then its double-negation ¬¬φ merely expresses the claim that a refutation of φ would be inconsistent. Having proven such a mere double-negation also still aids in negating other statements through negation introduction, as then any χ with χ → ¬φ can be rejected: ¬¬φ together with χ → ¬φ yields ¬χ. A double-negated existential statement does not denote existence of an entity with a property, but rather the absurdity of assumed non-existence of any such entity. Also all the principles in the next section involving quantifiers explain use of implications with hypothetical existence as premise.
Formula translation
Weakening statements by adding two negations before existential quantifiers (and atoms) is also the core step in the double-negation translation. It constitutes an embedding of classical first-order logic into intuitionistic logic: a first-order formula is provable in classical logic if and only if its Gödel–Gentzen translation is provable intuitionistically. For example, any theorem φ of classical propositional logic has a proof consisting of an intuitionistic proof of its double negation ¬¬φ followed by one application of double-negation elimination. Intuitionistic logic can thus be seen as a means of extending classical logic with constructive semantics.
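As a concrete instance of this pattern, the double negation of Peirce's law (a strictly classical tautology) is intuitionistically provable; the following Lean 4 sketch is again illustrative only:

```lean
-- ¬¬ of Peirce's law, proved without classical axioms
example (p q : Prop) : ¬¬(((p → q) → p) → p) :=
  fun k => k fun f => f fun hp => absurd (fun _ => hp : ((p → q) → p) → p) k
```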
Non-interdefinability of operators
Already minimal logic easily proves the following theorems, relating conjunction resp. disjunction to the implication using negation:
(φ ∨ χ) → (¬φ → ¬¬χ), a weakened variant of the disjunctive syllogism
(φ → χ) → ¬(φ ∧ ¬χ) resp. (φ ∧ ¬χ) → ¬(φ → χ)
and similarly (φ ∧ χ) → ¬(φ → ¬χ) resp. (φ → ¬χ) → ¬(φ ∧ χ)
Indeed, stronger variants of these still do hold; for example, the antecedents may be double-negated, as noted, or all φ may be replaced by ¬¬φ on the antecedent sides, as will be discussed. However, neither of these five implications can be reversed without immediately implying excluded middle (consider ¬φ for χ) resp. double-negation elimination (consider true φ). Hence, the left hand sides do not constitute a possible definition of the right hand sides.
In contrast, in classical propositional logic it is possible to take one of those three connectives plus negation as primitive and define the other two in terms of it, in this way. Such is done, for example, in Łukasiewicz's three axioms of propositional logic.
It is even possible to define all in terms of a sole sufficient operator such as the Peirce arrow (NOR) or Sheffer stroke (NAND). Similarly, in classical first-order logic, one of the quantifiers can be defined in terms of the other and negation.
These are fundamentally consequences of the law of bivalence, which makes all such connectives merely Boolean functions.
The law of bivalence is not required to hold in intuitionistic logic. As a result, none of the basic connectives can be dispensed with, and the above axioms are all necessary. So most of the classical identities between connectives and quantifiers are only theorems of intuitionistic logic in one direction. Some of the theorems go in both directions, i.e. are equivalences, as subsequently discussed.
Existential vs. universal quantification
Firstly, when x is not free in the proposition φ, then
(∃x (ψ(x) → φ)) → ((∀x ψ(x)) → φ)
When the domain of discourse is empty, then by the principle of explosion, an existential statement implies anything. When the domain contains at least one term, then assuming excluded middle, the inverse of the above implication becomes provable too, meaning the two sides become equivalent. This inverse direction is equivalent to the drinker's paradox (DP). Moreover, an existential and dual variant of it is given by the independence of premise principle (IP). Classically, the statement above is moreover equivalent to a more disjunctive form discussed further below. Constructively, existence claims are however generally harder to come by.
If the domain of discourse is not empty and φ is moreover independent of x, such principles are equivalent to formulas in the propositional calculus. Here, the formula then just expresses the identity (ψ → φ) → (ψ → φ). This is the curried form of modus ponens ((ψ → φ) ∧ ψ) → φ, which in the special case with φ as a false proposition results in the law of non-contradiction principle ¬(ψ ∧ ¬ψ).
Considering a false proposition φ for the original implication results in the important
(∃x ¬ψ(x)) → ¬(∀x ψ(x))
In words: "If there exists an entity that does not have the property , then the following is refuted: Each entity has the property ."
The quantifier formula with negations also immediately follows from the non-contradiction principle derived above, each instance of which itself already follows from the more particular ¬(ψ(x) ∧ ¬ψ(x)). To derive a contradiction given ¬φ, it suffices to establish ¬¬φ (as opposed to the stronger φ), and this makes proving double-negations valuable also. By the same token, the original quantifier formula in fact still holds with ψ(x) weakened to ¬¬ψ(x). And so, in fact, a stronger theorem holds:
(∃x ¬ψ(x)) → ¬(∀x ¬¬ψ(x))
In words: "If there exists an entity that does not have the property , then the following is refuted: For each entity, one is not able to prove that it does not have the property ".
Secondly,
((∃x ψ(x)) → φ) ↔ (∀x (ψ(x) → φ))
where similar considerations apply. Here the existential part is always a hypothesis and this is an equivalence. Considering the special case again,
¬(∃x ψ(x)) ↔ (∀x ¬ψ(x))
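Both the special-case equivalence and the earlier one-directional quantifier implication can be verified constructively; the following Lean 4 sketch is illustrative (α is an assumed domain, ψ an arbitrary predicate):

```lean
-- ¬∃ ↔ ∀¬, an intuitionistically valid De Morgan law
example (α : Type) (ψ : α → Prop) : ¬(∃ x, ψ x) ↔ ∀ x, ¬ψ x :=
  ⟨fun h x hx => h ⟨x, hx⟩, fun h ⟨x, hx⟩ => h x hx⟩

-- the one-directional ∃¬ → ¬∀
example (α : Type) (ψ : α → Prop) : (∃ x, ¬ψ x) → ¬∀ x, ψ x :=
  fun ⟨x, hnx⟩ h => hnx (h x)
```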
The proven conversion can be used to obtain two further implications:
(∃x ψ(x)) → ¬(∀x ¬ψ(x)), resp. (∀x ψ(x)) → ¬(∃x ¬ψ(x))
Of course, variants of such formulas can also be derived that have the double-negations in the antecedent.
A special case of the first formula here is (∃x ¬ψ(x)) → ¬(∀x ¬¬ψ(x)), and this is indeed stronger than the corresponding direction of the equivalence listed above. For simplicity of the discussion here and below, the formulas are generally presented in weakened forms without all possible insertions of double-negations in the antecedents.
More general variants hold. Incorporating the predicate and currying, the following generalization also entails the relation between implication and conjunction in the predicate calculus, discussed below:
(∀x (ψ(x) → (φ(x) → χ))) ↔ ((∃x (ψ(x) ∧ φ(x))) → χ)
If the predicate ψ is decidedly false for all x, then this equivalence is trivial. If ψ is decidedly true for all x, the schema simply reduces to the previously stated equivalence. In the language of classes, A = {x ∣ ψ(x)} and B = {x ∣ φ(x)}, the special case of this equivalence with false χ equates two characterizations of disjointness A ∩ B = ∅:
(∀x (ψ(x) → ¬φ(x))) ↔ ¬(∃x (ψ(x) ∧ φ(x)))
Disjunction vs. conjunction
There are finite variations of the quantifier formulas, with just two propositions:
(¬φ ∨ ¬χ) → ¬(φ ∧ χ), resp. ¬(φ ∨ χ) ↔ (¬φ ∧ ¬χ)
The first principle cannot be reversed: considering ¬φ for χ would imply the weak excluded middle, i.e. the statement ¬φ ∨ ¬¬φ. But intuitionistic logic alone does not even prove ¬φ ∨ ¬¬φ. So in particular, there is no distributivity principle for negations deriving the claim ¬φ ∨ ¬χ from ¬(φ ∧ χ). For an informal example of the constructive reading, consider the following: from conclusive evidence that it is not the case that both Alice and Bob showed up to their date, one cannot derive conclusive evidence, tied to either of the two persons, that this person did not show up. Negated propositions are comparably weak, in that the classically valid De Morgan's law, granting a disjunction from a single negative hypothetical, does not automatically hold constructively. The intuitionistic propositional calculus and some of its extensions exhibit the disjunction property instead, implying that one of the disjuncts of any provable disjunction would have to be derivable as well.
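The valid De Morgan directions discussed here can be spelled out constructively; the following Lean 4 fragment is an illustrative sketch:

```lean
-- ¬(p ∨ q) ↔ ¬p ∧ ¬q holds in both directions
example (p q : Prop) : ¬(p ∨ q) ↔ ¬p ∧ ¬q :=
  ⟨fun h => ⟨fun hp => h (Or.inl hp), fun hq => h (Or.inr hq)⟩,
   fun ⟨hnp, hnq⟩ h => h.elim hnp hnq⟩

-- the one-directional (¬p ∨ ¬q) → ¬(p ∧ q)
example (p q : Prop) : (¬p ∨ ¬q) → ¬(p ∧ q) :=
  fun h ⟨hp, hq⟩ => h.elim (fun hnp => hnp hp) (fun hnq => hnq hq)
```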
The converse variants of those two, and the equivalent variants with double-negated antecedents, had already been mentioned above. Implications towards the negation of a conjunction can often be proven directly from the non-contradiction principle. In this way one may also obtain the mixed form of the implications, e.g. (φ ∨ χ) → ¬(¬φ ∧ ¬χ). Concatenating the theorems, we also find
(¬¬φ ∨ ¬¬χ) → ¬¬(φ ∨ χ)
The reverse cannot be provable, as that would prove the weak excluded middle.
In predicate logic, the constant domain principle is not valid: ∀x (φ ∨ ψ(x)) does not imply the stronger φ ∨ (∀x ψ(x)). The distributive properties do however hold for any finite number of propositions. For a variant of the De Morgan law concerning two existentially closed decidable predicates, see LLPO.
Conjunction vs. implication
From the general equivalence also follows import-export, expressing incompatibility of two predicates using two different connectives:
¬(φ ∧ χ) ↔ (φ → ¬χ)
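The currying behind this import-export equivalence is directly expressible; a minimal Lean 4 sketch, for illustration only:

```lean
-- currying: (p ∧ q → r) ↔ (p → q → r)
example (p q r : Prop) : (p ∧ q → r) ↔ (p → q → r) :=
  ⟨fun h hp hq => h ⟨hp, hq⟩, fun h ⟨hp, hq⟩ => h hp hq⟩

-- with r := False this specializes to ¬(p ∧ q) ↔ (p → ¬q)
example (p q : Prop) : ¬(p ∧ q) ↔ (p → ¬q) :=
  ⟨fun h hp hq => h ⟨hp, hq⟩, fun h ⟨hp, hq⟩ => h hp hq⟩
```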
Due to the symmetry of the conjunction connective, this again implies the already established (φ → ¬χ) ↔ (χ → ¬φ).
The equivalence formula for the negated conjunction may be understood as a special case of currying and uncurrying. Many more considerations regarding double-negations again apply. And both non-reversible theorems relating conjunction and implication mentioned in the introduction follow from this equivalence. One is a converse, and holds simply because φ ∧ χ is stronger than χ.
Now when using the principle in the next section, the following variant of the latter, with more negations on the left, also holds:
(¬¬φ ∧ ¬χ) → ¬(φ → χ)
A consequence is that
¬¬(φ → χ) → (¬¬φ → ¬¬χ)
Disjunction vs. implication
Already minimal logic proves excluded middle φ ∨ ¬φ equivalent to consequentia mirabilis (¬φ → φ) → φ, an instance of Peirce's law.
Now akin to modus ponens, clearly (φ ∨ χ) → ((φ → χ) → χ) holds already in minimal logic, a theorem that does not even involve negations. In classical logic, this implication is in fact an equivalence. With instances of this, excluded middle together with explosion is seen to entail Peirce's law.
In intuitionistic logic, one obtains variants of the stated theorem involving ⊥, as follows. Firstly, note that two different formulas for ¬(φ ∧ χ) mentioned above can be used to imply (¬φ ∨ ¬χ) → (φ → ¬χ). The latter are forms of the disjunctive syllogism for negated propositions. A strengthened form still holds in intuitionistic logic:
(¬φ ∨ χ) → (φ → χ)
As in previous sections, the positions of φ and ¬φ may be switched, giving a stronger principle than the one mentioned in the introduction. So, for example, intuitionistically "Either φ or χ" is a stronger propositional formula than "If not φ, then χ", whereas these are classically interchangeable. The implication cannot generally be reversed, as that immediately implies excluded middle.
Non-contradiction and explosion together also prove the stronger variant (φ ∨ χ) → (¬χ → φ). And this shows how excluded middle for φ implies double-negation elimination for it. For a fixed φ, this implication cannot generally be reversed. However, as ¬¬(φ ∨ ¬φ) is always constructively valid, it follows that assuming double-negation elimination for all such disjunctions implies classical logic also.
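The disjunctive syllogism itself, which relies on explosion, has the following constructive proof, sketched in Lean 4 for illustration:

```lean
-- (p ∨ q) → ¬p → q, via explosion (absurd)
example (p q : Prop) : p ∨ q → ¬p → q :=
  fun h hnp => h.elim (fun hp => absurd hp hnp) id
```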
Of course the formulas established here may be combined to obtain yet more variations. For example, the disjunctive syllogism as presented generalizes to
((∃x ψ(x)) ∨ φ) → ((∀x ¬ψ(x)) → φ)
If some term exists at all, the antecedent here even implies ∃x (ψ(x) ∨ φ), which in turn itself also implies the conclusion here (this is again the very first formula mentioned in this section).
The bulk of the discussion in these sections applies just as well to just minimal logic. But as for the disjunctive syllogism with general χ, minimal logic can at most prove (φ ∨ χ) → (¬φ → χ′), where χ′ denotes χ ∨ ⊥. The conclusion here can only be simplified to χ using explosion.
Equivalences
The above lists also contain equivalences.
The equivalence involving a conjunction and a disjunction, ((φ ∨ χ) → ψ) ↔ ((φ → ψ) ∧ (χ → ψ)), stems from (φ ∨ χ) → ψ actually being stronger than φ → ψ. Both sides of the equivalence can be understood as conjunctions of independent implications. Above, absurdity ⊥ is used for ψ. In functional interpretations, it corresponds to if-clause constructions.
So e.g. "Not ( or )" is equivalent to "Not , and also not ".
An equivalence itself is generally defined as, and then equivalent to, a conjunction (∧) of implications (→), as follows:
(φ ↔ χ) ↔ ((φ → χ) ∧ (χ → φ))
With it, such connectives become in turn definable from it:
(φ → χ) ↔ ((φ ∨ χ) ↔ χ)
(φ → χ) ↔ ((φ ∧ χ) ↔ φ)
(φ ∧ χ) ↔ ((φ → χ) ↔ φ)
In turn, {∨, ↔, ⊥} and {∨, ↔, ¬} are complete bases of intuitionistic connectives, for example.
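One of the defining equivalences above can be verified constructively; the following Lean 4 sketch (illustrative, with arbitrary propositions p and q) shows conjunction expressed through implication and equivalence:

```lean
-- p ∧ q ↔ ((p → q) ↔ p)
example (p q : Prop) : p ∧ q ↔ ((p → q) ↔ p) :=
  ⟨fun ⟨hp, hq⟩ => ⟨fun _ => hp, fun _ _ => hq⟩,
   fun h =>
     have hpq : p → q := fun hp => h.mpr hp hp
     have hp : p := h.mp hpq
     ⟨hp, hpq hp⟩⟩
```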
Functionally complete connectives
As shown by Alexander V. Kuznetsov, certain connectives – one ternary, one quinary – are by themselves functionally complete: either one can serve the role of a sole sufficient operator for intuitionistic propositional logic, thus forming an analog of the Sheffer stroke from classical propositional logic.
Semantics
The semantics are rather more complicated than for the classical case. A model theory can be given by Heyting algebras or, equivalently, by Kripke semantics. In 2014, a Tarski-like model theory was proved complete by Bob Constable, but with a different notion of completeness than classically.
Unproved statements in intuitionistic logic are not given an intermediate truth value (as is sometimes mistakenly asserted). One can prove that such statements have no third truth value, a result dating back to Glivenko in 1928. Instead they remain of unknown truth value, until they are either proved or disproved. Statements are disproved by deducing a contradiction from them.
A consequence of this point of view is that intuitionistic logic has no interpretation as a two-valued logic, nor even as a finite-valued logic, in the familiar sense. Although intuitionistic logic retains the trivial propositions from classical logic, each proof of a propositional formula is considered a valid propositional value, thus by Heyting's notion of propositions-as-sets, propositional formulae are (potentially non-finite) sets of their proofs.
Heyting algebra semantics
In classical logic, we often discuss the truth values that a formula can take. The values are usually chosen as the members of a Boolean algebra. The meet and join operations in the Boolean algebra are identified with the ∧ and ∨ logical connectives, so that the value of a formula of the form A ∧ B is the meet of the value of A and the value of B in the Boolean algebra. Then we have the useful theorem that a formula is a valid proposition of classical logic if and only if its value is 1 for every valuation—that is, for any assignment of values to its variables.
A corresponding theorem is true for intuitionistic logic, but instead of assigning each formula a value from a Boolean algebra, one uses values from a Heyting algebra, of which Boolean algebras are a special case. A formula is valid in intuitionistic logic if and only if it receives the value of the top element for any valuation on any Heyting algebra.
It can be shown that to recognize valid formulas, it is sufficient to consider a single Heyting algebra whose elements are the open subsets of the real line R. In this algebra we have:
Value(⊥) = ∅
Value(⊤) = R
Value(A ∧ B) = Value(A) ∩ Value(B)
Value(A ∨ B) = Value(A) ∪ Value(B)
Value(A → B) = int(Value(A)∁ ∪ Value(B))
where int(X) is the interior of X and X∁ its complement.
The last identity concerning A → B allows us to calculate the value of ¬A:
Value(¬A) = Value(A → ⊥) = int(Value(A)∁ ∪ Value(⊥)) = int(Value(A)∁)
With these assignments, intuitionistically valid formulas are precisely those that are assigned the value of the entire line. For example, the formula ¬(A ∧ ¬A) is valid, because no matter what set X is chosen as the value of the formula A, the value of ¬(A ∧ ¬A) can be shown to be the entire line:
Value(¬(A ∧ ¬A)) = int((X ∩ int(X∁))∁) = int(∅∁) = int(R) = R,
since X ∩ int(X∁) ⊆ X ∩ X∁ = ∅ for any set X.
So the valuation of this formula is true, and indeed the formula is valid. But the law of the excluded middle, A ∨ ¬A, can be shown to be invalid by using a specific value of the set of positive real numbers for A:
Value(A ∨ ¬A) = (0, ∞) ∪ int((0, ∞)∁) = (0, ∞) ∪ int((−∞, 0]) = (0, ∞) ∪ (−∞, 0) = R ∖ {0} ≠ R
The interpretation of any intuitionistically valid formula in the infinite Heyting algebra described above results in the top element, representing true, as the valuation of the formula, regardless of what values from the algebra are assigned to the variables of the formula. Conversely, for every invalid formula, there is an assignment of values to the variables that yields a valuation that differs from the top element. No finite Heyting algebra has the second of these two properties.
Kripke semantics
Building upon his work on semantics of modal logic, Saul Kripke created another semantics for intuitionistic logic, known as Kripke semantics or relational semantics.
Tarski-like semantics
It was discovered that Tarski-like semantics for intuitionistic logic were not possible to prove complete. However, Robert Constable has shown that a weaker notion of completeness still holds for intuitionistic logic under a Tarski-like model. In this notion of completeness we are concerned not with all of the statements that are true of every model, but with the statements that are true in the same way in every model. That is, a single proof that the model judges a formula to be true must be valid for every model. In this case, there is not only a proof of completeness, but one that is valid according to intuitionistic logic.
Metalogic
Admissible rules
In intuitionistic logic or a fixed theory using the logic, the situation can occur that an implication always holds metatheoretically, but not in the language. For example, in the pure propositional calculus, if ¬φ → (χ ∨ ψ) is provable, then so is (¬φ → χ) ∨ (¬φ → ψ). Another example is that (φ → χ) → (φ ∨ ψ) being provable always also means that so is ((φ → χ) → φ) ∨ ((φ → χ) → ψ). One says the system is closed under these implications as rules and they may be adopted.
Theories' features
Theories over constructive logics can exhibit the disjunction property. The pure intuitionistic propositional calculus does so as well.
In particular, it means that the excluded middle disjunction φ ∨ ¬φ for an un-rejectable statement φ (one for which ¬φ is not provable) is provable exactly when φ itself is provable.
This also means, for example, that excluded middle disjunctions for statements that are neither provable nor rejectable are not provable either.
Relation to other logics
Paraconsistent logic
Intuitionistic logic is related by duality to a paraconsistent logic known as Brazilian, anti-intuitionistic or dual-intuitionistic logic.
The subsystem of intuitionistic logic with the FALSE (resp. NOT-2) axiom removed is known as minimal logic and some differences have been elaborated on above.
Intermediate logics
In 1932, Kurt Gödel defined a system of logics intermediate between classical and intuitionistic logic. Indeed, any finite Heyting algebra that is not equivalent to a Boolean algebra defines (semantically) an intermediate logic. On the other hand, validity of formulae in pure intuitionistic logic is not tied to any individual Heyting algebra but relates to any and all Heyting algebras at the same time.
So for example, for a schema not involving negations, consider the classically valid (φ → χ) ∨ (χ → φ). Adopting this over intuitionistic logic gives the intermediate logic called Gödel-Dummett logic.
Relation to classical logic
The system of classical logic is obtained by adding any one of the following axioms:
φ ∨ ¬φ (Law of the excluded middle)
¬¬φ → φ (Double negation elimination)
(¬φ → φ) → φ (Consequentia mirabilis, see also Peirce's law)
Various reformulations, or formulations as schemata in two variables (e.g. Peirce's law), also exist. One notable one is the (reverse) law of contraposition
(¬χ → ¬φ) → (φ → χ)
Such are detailed on the intermediate logics article.
In general, one may take as the extra axiom any classical tautology that is not valid in the two-element Kripke frame (in other words, that is not included in Smetanich's logic).
Many-valued logic
Kurt Gödel's work involving many-valued logic showed in 1932 that intuitionistic logic is not a finite-valued logic. (See the section titled Heyting algebra semantics above for an infinite-valued logic interpretation of intuitionistic logic.)
Modal logic
Any formula of the intuitionistic propositional logic (IPC) may be translated into the language of the normal modal logic S4 as follows:
⊥* = ⊥
A* = □A, if A is atomic
(A ∧ B)* = A* ∧ B*
(A ∨ B)* = A* ∨ B*
(A → B)* = □(A* → B*)
(¬A)* = □¬(A*)
and it has been demonstrated that the translated formula is valid in the propositional modal logic S4 if and only if the original formula is valid in IPC. The above set of formulae is called the Gödel–McKinsey–Tarski translation.
There is also an intuitionistic version of modal logic S4 called Constructive Modal Logic CS4.
Lambda calculus
There is an extended Curry–Howard isomorphism between IPC and simply-typed lambda calculus.
| Mathematics | Mathematical logic | null |
169270 | https://en.wikipedia.org/wiki/Macrophage | Macrophage | Macrophages (; abbreviated Mφ, MΦ or MP) are a type of white blood cell of the innate immune system that engulf and digest pathogens, such as cancer cells, microbes, cellular debris and foreign substances, which do not have proteins that are specific to healthy body cells on their surface. This process is called phagocytosis, which acts to defend the host against infection and injury.
Macrophages are found in essentially all tissues, where they patrol for potential pathogens by amoeboid movement. They take various forms (with various names) throughout the body (e.g., histiocytes, Kupffer cells, alveolar macrophages, microglia, and others), but all are part of the mononuclear phagocyte system. Besides phagocytosis, they play a critical role in nonspecific defense (innate immunity) and also help initiate specific defense mechanisms (adaptive immunity) by recruiting other immune cells such as lymphocytes. For example, they are important as antigen presenters to T cells. In humans, dysfunctional macrophages cause severe diseases such as chronic granulomatous disease that result in frequent infections.
Beyond increasing inflammation and stimulating the immune system, macrophages also play an important anti-inflammatory role and can decrease immune reactions through the release of cytokines. Macrophages that encourage inflammation are called M1 macrophages, whereas those that decrease inflammation and encourage tissue repair are called M2 macrophages. This difference is reflected in their metabolism; M1 macrophages have the unique ability to metabolize arginine to the "killer" molecule nitric oxide, whereas M2 macrophages have the unique ability to metabolize arginine to the "repair" molecule ornithine. However, this dichotomy has been recently questioned as further complexity has been discovered.
Human macrophages are about 21 micrometres in diameter and are produced by the differentiation of monocytes in tissues. They can be identified using flow cytometry or immunohistochemical staining by their specific expression of proteins such as CD14, CD40, CD11b, CD64, F4/80 (mice)/EMR1 (human), lysozyme M, MAC-1/MAC-3 and CD68.
Macrophages were first discovered and named by Élie Metchnikoff, a Russian Empire zoologist, in 1884.
Structure
Types
A majority of macrophages are stationed at strategic points where microbial invasion or accumulation of foreign particles is likely to occur. These cells together as a group are known as the mononuclear phagocyte system and were previously known as the reticuloendothelial system. Each type of macrophage, determined by its location, has a specific name:
Investigations concerning Kupffer cells are hampered because in humans, Kupffer cells are only accessible for immunohistochemical analysis from biopsies or autopsies. From rats and mice, they are difficult to isolate, and after purification, only approximately 5 million cells can be obtained from one mouse.
Macrophages can express paracrine functions within organs that are specific to the function of that organ. In the testis, for example, macrophages have been shown to be able to interact with Leydig cells by secreting 25-hydroxycholesterol, an oxysterol that can be converted to testosterone by neighbouring Leydig cells. Also, testicular macrophages may participate in creating an immune privileged environment in the testis, and in mediating infertility during inflammation of the testis.
Cardiac resident macrophages participate in electrical conduction via gap junction communication with cardiac myocytes.
Macrophages can be classified on basis of the fundamental function and activation. According to this grouping, there are classically activated (M1) macrophages, wound-healing macrophages (also known as alternatively-activated (M2) macrophages), and regulatory macrophages (Mregs).
Development
Macrophages that reside in adult healthy tissues either derive from circulating monocytes or are established before birth and then maintained during adult life independently of monocytes. By contrast, most of the macrophages that accumulate at diseased sites typically derive from circulating monocytes. Leukocyte extravasation describes monocyte entry into damaged tissue through the endothelium of blood vessels as they become macrophages. Monocytes are attracted to a damaged site by chemical substances through chemotaxis, triggered by a range of stimuli including damaged cells, pathogens and cytokines released by macrophages already at the site. At some sites such as the testis, macrophages have been shown to populate the organ through proliferation. Unlike short-lived neutrophils, macrophages survive longer in the body, up to several months.
Function
Phagocytosis
Macrophages are professional phagocytes and are highly specialized in removal of dying or dead cells and cellular debris. This role is important in chronic inflammation, as the early stages of inflammation are dominated by neutrophils, which are ingested by macrophages if they come of age (see CD31 for a description of this process).
The neutrophils are at first attracted to a site, where they perform their function and die, before they or their neutrophil extracellular traps are phagocytized by the macrophages. Once at the site, the first wave of neutrophils ages and, after the first 48 hours, stimulates the appearance of macrophages, which then ingest the aged neutrophils.
The removal of dying cells is, to a greater extent, handled by fixed macrophages, which will stay at strategic locations such as the lungs, liver, neural tissue, bone, spleen and connective tissue, ingesting foreign materials such as pathogens and recruiting additional macrophages if needed.
When a macrophage ingests a pathogen, the pathogen becomes trapped in a phagosome, which then fuses with a lysosome. Within the phagolysosome, enzymes and toxic peroxides digest the pathogen. However, some bacteria, such as Mycobacterium tuberculosis, have become resistant to these methods of digestion. Typhoidal Salmonellae induce their own phagocytosis by host macrophages in vivo, and inhibit digestion by lysosomal action, thereby using macrophages for their own replication and causing macrophage apoptosis. Macrophages can digest more than 100 bacteria before they finally die due to their own digestive compounds.
Role in innate immune response
When a pathogen invades, tissue resident macrophages are among the first cells to respond. Two of the main roles of the tissue resident macrophages are to phagocytose incoming antigen and to secrete proinflammatory cytokines that induce inflammation and recruit other immune cells to the site.
Phagocytosis of pathogens
Macrophages can internalize antigens through receptor-mediated phagocytosis. Macrophages have a wide variety of pattern recognition receptors (PRRs) that can recognize microbe-associated molecular patterns (MAMPs) from pathogens. Many PRRs, such as toll-like receptors (TLRs), scavenger receptors (SRs), C-type lectin receptors, among others, recognize pathogens for phagocytosis. Macrophages can also recognize pathogens for phagocytosis indirectly through opsonins, which are molecules that attach to pathogens and mark them for phagocytosis. Opsonins can cause a stronger adhesion between the macrophage and pathogen during phagocytosis, hence opsonins tend to enhance macrophages’ phagocytic activity. Both complement proteins and antibodies can bind to antigens and opsonize them. Macrophages have complement receptor 1 (CR1) and 3 (CR3) that recognize pathogen-bound complement proteins C3b and iC3b, respectively, as well as fragment crystallizable γ receptors (FcγRs) that recognize the fragment crystallizable (Fc) region of antigen-bound immunoglobulin G (IgG) antibodies. When phagocytosing and digesting pathogens, macrophages go through a respiratory burst where more oxygen is consumed to supply the energy required for producing reactive oxygen species (ROS) and other antimicrobial molecules that digest the consumed pathogens.
Chemical secretion
Recognition of MAMPs by PRRs can activate tissue resident macrophages to secrete proinflammatory cytokines that recruit other immune cells. Among the PRRs, TLRs play a major role in signal transduction leading to cytokine production. The binding of MAMPs to TLR triggers a series of downstream events that eventually activates transcription factor NF-κB and results in transcription of the genes for several proinflammatory cytokines, including IL-1β, IL-6, TNF-α, IL-12B, and type I interferons such as IFN-α and IFN-β. Systemically, IL-1β, IL-6, and TNF-α induce fever and initiate the acute phase response in which the liver secretes acute phase proteins. Locally, IL-1β and TNF-α cause vasodilation, where the gaps between blood vessel epithelial cells widen, and upregulation of cell surface adhesion molecules on epithelial cells to induce leukocyte extravasation. Additionally, activated macrophages have been found to have delayed synthesis of prostaglandins (PGs) which are important mediators of inflammation and pain. Among the PGs, anti-inflammatory PGE2 and pro-inflammatory PGD2 increase the most after activation, with PGE2 increasing expression of IL-10 and inhibiting production of TNFs via the COX-2 pathway.
Neutrophils are among the first immune cells recruited by macrophages to exit the blood via extravasation and arrive at the infection site. Macrophages secrete many chemokines such as CXCL1, CXCL2, and CXCL8 (IL-8) that attract neutrophils to the site of infection. After neutrophils have finished phagocytosing and clearing the antigen at the end of the immune response, they undergo apoptosis, and macrophages are recruited from blood monocytes to help clear apoptotic debris.
Macrophages also recruit other immune cells such as monocytes, dendritic cells, natural killer cells, basophils, eosinophils, and T cells through chemokines such as CCL2, CCL4, CCL5, CXCL8, CXCL9, CXCL10, and CXCL11. Along with dendritic cells, macrophages help activate natural killer (NK) cells through secretion of type I interferons (IFN-α and IFN-β) and IL-12. IL-12 acts with IL-18 to stimulate the production of proinflammatory cytokine interferon gamma (IFN-γ) by NK cells, which serves as an important source of IFN-γ before the adaptive immune system is activated. IFN-γ enhances the innate immune response by inducing a more aggressive phenotype in macrophages, allowing macrophages to more efficiently kill pathogens.
Some of the T cell chemoattractants secreted by macrophages include CCL5, CXCL9, CXCL10, and CXCL11.
Role in adaptive immunity
Interactions with CD4+ T Helper Cells
Macrophages are professional antigen presenting cells (APC), meaning they can present peptides from phagocytosed antigens on major histocompatibility complex (MHC) II molecules on their cell surface for T helper cells. Macrophages are not primary activators of naïve T helper cells that have never been previously activated since tissue resident macrophages do not travel to the lymph nodes where naïve T helper cells reside. Although macrophages are also found in secondary lymphoid organs like the lymph nodes, they do not reside in T cell zones and are not effective at activating naïve T helper cells. The macrophages in lymphoid tissues are more involved in ingesting antigens and preventing them from entering the blood, as well as taking up debris from apoptotic lymphocytes. Therefore, macrophages interact mostly with previously activated T helper cells that have left the lymph node and arrived at the site of infection or with tissue resident memory T cells.
Macrophages supply both signals required for T helper cell activation: 1) Macrophages present antigen peptide-bound MHC class II molecule to be recognized by the corresponding T cell receptor (TCR), and 2) recognition of pathogens by PRRs induce macrophages to upregulate the co-stimulatory molecules CD80 and CD86 (also known as B7) that binds to CD28 on T helper cells to supply the co-stimulatory signal. These interactions allow T helper cells to achieve full effector function and provide T helper cells with continued survival and differentiation signals preventing them from undergoing apoptosis due to lack of TCR signaling. For example, IL-2 signaling in T cells upregulates the expression of anti-apoptotic protein Bcl-2, but T cell production of IL-2 and the high-affinity IL-2 receptor IL-2RA both require continued signal from TCR recognition of MHC-bound antigen.
Activation
Macrophages can achieve different activation phenotypes through interactions with different subsets of T helper cells, such as TH1 and TH2. Although there is a broad spectrum of macrophage activation phenotypes, there are two major phenotypes that are commonly acknowledged. They are the classically activated macrophages, or M1 macrophages, and the alternatively activated macrophages, or M2 macrophages. M1 macrophages are proinflammatory, while M2 macrophages are mostly anti-inflammatory.
Classical
TH1 cells play an important role in classical macrophage activation as part of type 1 immune response against intracellular pathogens (such as intracellular bacteria) that can survive and replicate inside host cells, especially those pathogens that replicate even after being phagocytosed by macrophages. After the TCR of TH1 cells recognize specific antigen peptide-bound MHC class II molecules on macrophages, TH1 cells 1) secrete IFN-γ and 2) upregulate the expression of CD40 ligand (CD40L), which binds to CD40 on macrophages. These 2 signals activate the macrophages and enhance their ability to kill intracellular pathogens through increased production of antimicrobial molecules such as nitric oxide (NO) and superoxide (O2-). This enhancement of macrophages' antimicrobial ability by TH1 cells is known as classical macrophage activation, and the activated macrophages are known as classically activated macrophages, or M1 macrophages. The M1 macrophages in turn upregulate B7 molecules and antigen presentation through MHC class II molecules to provide signals that sustain T cell help. The activation of TH1 and M1 macrophage is a positive feedback loop, with IFN-γ from TH1 cells upregulating CD40 expression on macrophages; the interaction between CD40 on the macrophages and CD40L on T cells activate macrophages to secrete IL-12; and IL-12 promotes more IFN-γ secretion from TH1 cells. The initial contact between macrophage antigen-bound MHC II and TCR serves as the contact point between the two cells where most of the IFN-γ secretion and CD-40L on T cells concentrate to, so only macrophages directly interacting with TH1 cells are likely to be activated.
In addition to activating M1 macrophages, TH1 cells express Fas ligand (FasL) and lymphotoxin beta (LT-β) to help kill chronically infected macrophages that can no longer kill pathogens. The killing of chronically infected macrophages releases pathogens into the extracellular space, where they can then be killed by other activated macrophages. TH1 cells also help recruit more monocytes, the precursors of macrophages, to the infection site: they secrete TNF-α and LT-α to make blood vessels easier for monocytes to bind to and exit, and they secrete CCL2 as a chemoattractant for monocytes. IL-3 and GM-CSF released by TH1 cells stimulate more monocyte production in the bone marrow.
When intracellular pathogens cannot be eliminated, such as in the case of Mycobacterium tuberculosis, the pathogen is contained through the formation of granuloma, an aggregation of infected macrophages surrounded by activated T cells. The macrophages bordering the activated lymphocytes often fuse to form multinucleated giant cells that appear to have increased antimicrobial ability due to their proximity to TH1 cells, but over time, the cells in the center start to die and form necrotic tissue.
Alternative
TH2 cells play an important role in alternative macrophage activation as part of the type 2 immune response against large extracellular pathogens like helminths. TH2 cells secrete IL-4 and IL-13, which activate macrophages to become M2 macrophages, also known as alternatively activated macrophages. M2 macrophages express arginase-1, an enzyme that converts arginine to ornithine and urea. Ornithine helps increase smooth muscle contraction to expel the worm and also participates in tissue and wound repair. Ornithine can be further metabolized to proline, which is essential for synthesizing collagen. M2 macrophages can also decrease inflammation by producing IL-1 receptor antagonist (IL-1RA) and IL-1 receptors that do not lead to downstream inflammatory signaling (IL-1RII).
Interactions with CD8+ cytotoxic T cells
Another part of adaptive immunity activation involves stimulating CD8+ T cells via cross-presentation of antigen peptides on MHC class I molecules. Studies have shown that proinflammatory macrophages are capable of cross-presenting antigens on MHC class I molecules, but whether macrophage cross-presentation plays a role in naïve or memory CD8+ T cell activation is still unclear.
Interactions with B cells
Macrophages have been shown to secrete cytokines BAFF and APRIL, which are important for plasma cell isotype switching. APRIL and IL-6 secreted by macrophage precursors in the bone marrow help maintain survival of plasma cells homed to the bone marrow.
Subtypes
There are several activated forms of macrophages. In spite of a spectrum of ways to activate macrophages, there are two main groups designated M1 and M2. M1 macrophages: as mentioned earlier (previously referred to as classically activated macrophages), M1 "killer" macrophages are activated by LPS and IFN-gamma, and secrete high levels of IL-12 and low levels of IL-10. M1 macrophages have pro-inflammatory, bactericidal, and phagocytic functions. In contrast, the M2 "repair" designation (also referred to as alternatively activated macrophages) broadly refers to macrophages that function in constructive processes like wound healing and tissue repair, and those that turn off damaging immune system activation by producing anti-inflammatory cytokines like IL-10. M2 is the phenotype of resident tissue macrophages, and can be further elevated by IL-4. M2 macrophages produce high levels of IL-10, TGF-beta and low levels of IL-12. Tumor-associated macrophages are mainly of the M2 phenotype, and seem to actively promote tumor growth.
Macrophages exist in a variety of phenotypes which are determined by the role they play in wound maturation. Phenotypes can be predominantly separated into two major categories: M1 and M2. M1 macrophages are the dominating phenotype observed in the early stages of inflammation and are activated by key mediators such as interferon-γ (IFN-γ), tumor necrosis factor (TNF), and damage-associated molecular patterns (DAMPs). These mediator molecules create a pro-inflammatory response that in return produces pro-inflammatory cytokines like interleukin-6 and TNF. Unlike M1 macrophages, M2 macrophages secrete an anti-inflammatory response via the addition of interleukin-4 or interleukin-13. They also play a role in wound healing and are needed for revascularization and reepithelialization. M2 macrophages are divided into four major types based on their roles: M2a, M2b, M2c, and M2d. How M2 phenotypes are determined is still up for discussion, but studies have shown that their environment allows them to adjust to whichever phenotype is most appropriate to efficiently heal the wound.
M2 macrophages are needed for vascular stability. They produce vascular endothelial growth factor-A and TGF-β1. There is a phenotype shift from M1 to M2 macrophages in acute wounds; however, this shift is impaired in chronic wounds. This dysregulation results in insufficient M2 macrophages and their corresponding growth factors that aid in wound repair. With a lack of these growth factors and anti-inflammatory cytokines, and an overabundance of pro-inflammatory cytokines from M1 macrophages, chronic wounds are unable to heal in a timely manner. Normally, after neutrophils eat debris/pathogens they undergo apoptosis and are removed. At this point, inflammation is no longer needed and M1 macrophages switch to the M2 (anti-inflammatory) phenotype. However, dysregulation occurs when M1 macrophages are unable to, or do not, phagocytose neutrophils that have undergone apoptosis, leading to increased macrophage migration and inflammation.
Both M1 and M2 macrophages play a role in promotion of atherosclerosis. M1 macrophages promote atherosclerosis by inflammation. M2 macrophages can remove cholesterol from blood vessels, but when the cholesterol is oxidized, the M2 macrophages become apoptotic foam cells contributing to the atheromatous plaque of atherosclerosis.
Role in muscle regeneration
The first step to understanding the importance of macrophages in muscle repair, growth, and regeneration is that there are two "waves" of macrophages that appear with muscle use intense enough to cause damage: subpopulations that do and do not directly influence muscle repair. The initial wave is a phagocytic population that appears during periods of increased muscle use sufficient to cause muscle membrane lysis and membrane inflammation; these macrophages can enter and degrade the contents of injured muscle fibers. These early-invading, phagocytic macrophages reach their highest concentration about 24 hours following the onset of some form of muscle cell injury or reloading. Their concentration rapidly declines after 48 hours. The second group is the non-phagocytic types that are distributed near regenerative fibers. These peak between two and four days and remain elevated for several days while muscle tissue is rebuilding. The first subpopulation has no direct benefit to repairing muscle, while the second non-phagocytic group does.
It is thought that macrophages release soluble substances that influence the proliferation, differentiation, growth, repair, and regeneration of muscle, but at this time the factor that is produced to mediate these effects is unknown. It is known that macrophages' involvement in promoting tissue repair is not muscle specific; they accumulate in numerous tissues during the healing process phase following injury.
Role in wound healing
Macrophages are essential for wound healing. They replace polymorphonuclear neutrophils as the predominant cells in the wound by day two after injury. Attracted to the wound site by growth factors released by platelets and other cells, monocytes from the bloodstream enter the area through blood vessel walls. Numbers of monocytes in the wound peak one to one and a half days after the injury occurs. Once they are in the wound site, monocytes mature into macrophages. The spleen contains half the body's monocytes in reserve ready to be deployed to injured tissue.
The macrophage's main role is to phagocytize bacteria and damaged tissue, and they also debride damaged tissue by releasing proteases. Macrophages also secrete a number of factors such as growth factors and other cytokines, especially during the third and fourth post-wound days. These factors attract cells involved in the proliferation stage of healing to the area. Macrophages may also restrain the contraction phase. Macrophages are stimulated by the low oxygen content of their surroundings to produce factors that induce and speed angiogenesis and they also stimulate cells that re-epithelialize the wound, create granulation tissue, and lay down a new extracellular matrix. By secreting these factors, macrophages contribute to pushing the wound healing process into the next phase.
Role in limb regeneration
Scientists have shown that, as well as clearing material debris, macrophages are involved in typical limb regeneration in the salamander. Removing macrophages from a salamander resulted in failure of limb regeneration and a scarring response.
Role in iron homeostasis
As described above, macrophages play a key role in removing dying or dead cells and cellular debris. Erythrocytes have an average lifespan of 120 days and so are constantly being destroyed by macrophages in the spleen and liver. Macrophages will also engulf macromolecules, and so play a key role in the pharmacokinetics of parenteral irons.
The iron that is released from the haemoglobin is either stored internally in ferritin or is released into the circulation via ferroportin. In cases where systemic iron levels are raised, or where inflammation is present, raised levels of hepcidin act on macrophage ferroportin channels, leading to iron remaining within the macrophages.
Role in pigment retainment
Melanophages are a subset of tissue-resident macrophages able to absorb pigment, either native to the organism or exogenous (such as tattoos), from extracellular space. In contrast to dendritic junctional melanocytes, which synthesize melanosomes and contain various stages of their development, melanophages only accumulate phagocytosed melanin in lysosome-like phagosomes. This occurs repeatedly as the pigment from dead dermal macrophages is phagocytosed by their successors, preserving the tattoo in the same place.
Role in tissue homeostasis
Every tissue harbors its own specialized population of resident macrophages, which maintain reciprocal interconnections with the stroma and functional tissue. These resident macrophages are sessile (non-migratory), provide essential growth factors to support the physiological function of the tissue (e.g. macrophage-neuronal crosstalk in the gut), and can actively protect the tissue from inflammatory damage.
Nerve-associated macrophages
Nerve-associated macrophages, or NAMs, are tissue-resident macrophages that are associated with nerves. Some of them are known to have an elongated morphology of up to 200 μm.
Clinical significance
Due to their role in phagocytosis, macrophages are involved in many diseases of the immune system. For example, they participate in the formation of granulomas, inflammatory lesions that may be caused by a large number of diseases. Some disorders, mostly rare, of ineffective phagocytosis and macrophage function have been described.
As a host for intracellular pathogens
In their role as a phagocytic immune cell macrophages are responsible for engulfing pathogens to destroy them. Some pathogens subvert this process and instead live inside the macrophage. This provides an environment in which the pathogen is hidden from the immune system and allows it to replicate.
Diseases with this type of behaviour include tuberculosis (caused by Mycobacterium tuberculosis) and leishmaniasis (caused by Leishmania species).
In order to minimize the possibility of becoming the host of intracellular bacteria, macrophages have evolved defense mechanisms such as induction of nitric oxide and reactive oxygen intermediates, which are toxic to microbes. Macrophages have also evolved the ability to restrict the microbe's nutrient supply and induce autophagy.
Tuberculosis
Once engulfed by a macrophage, the causative agent of tuberculosis, Mycobacterium tuberculosis, avoids cellular defenses and uses the cell to replicate. Recent evidence suggests that in response to pulmonary infection with Mycobacterium tuberculosis, peripheral macrophages mature into the M1 phenotype. The macrophage M1 phenotype is characterized by increased secretion of pro-inflammatory cytokines (IL-1β, TNF-α, and IL-6) and increased glycolytic activity, essential for clearance of the infection.
Leishmaniasis
Upon phagocytosis by a macrophage, the Leishmania parasite finds itself in a phagocytic vacuole. Under normal circumstances, this phagocytic vacuole would develop into a lysosome and its contents would be digested. Leishmania alter this process and avoid being destroyed; instead, they make a home inside the vacuole.
Chikungunya
Infection of macrophages in joints is associated with local inflammation during and after the acute phase of Chikungunya (caused by CHIKV or Chikungunya virus).
Others
Adenovirus (most common cause of pink eye) can remain latent in a host macrophage, with continued viral shedding 6–18 months after initial infection.
Brucella spp. can remain latent in a macrophage via inhibition of phagosome–lysosome fusion; causes brucellosis (undulant fever).
Legionella pneumophila, the causative agent of Legionnaires' disease, also establishes residence within macrophages.
Heart disease
Macrophages are the predominant cells involved in creating the progressive plaque lesions of atherosclerosis.
Focal recruitment of macrophages occurs after the onset of acute myocardial infarction. These macrophages function to remove debris and apoptotic cells and to prepare for tissue regeneration. Macrophages protect against ischemia-induced ventricular tachycardia in hypokalemic mice.
HIV infection
Macrophages also play a role in human immunodeficiency virus (HIV) infection. Like T cells, macrophages can be infected with HIV, and even become a reservoir of ongoing virus replication throughout the body. HIV can enter the macrophage through binding of gp120 to CD4 and a second membrane receptor, CCR5 (a chemokine receptor). Both circulating monocytes and macrophages serve as a reservoir for the virus. Macrophages are better able to resist infection by HIV-1 than CD4+ T cells, although susceptibility to HIV infection differs among macrophage subtypes.
Cancer
Macrophages can contribute to tumor growth and progression by promoting tumor cell proliferation and invasion, fostering tumor angiogenesis and suppressing antitumor immune cells. Inflammatory compounds, such as tumor necrosis factor (TNF)-alpha released by the macrophages, activate the gene switch nuclear factor-kappa B. NF-κB then enters the nucleus of a tumor cell and turns on production of proteins that stop apoptosis and promote cell proliferation and inflammation. Moreover, macrophages serve as a source for many pro-angiogenic factors including vascular endothelial growth factor (VEGF), tumor necrosis factor-alpha (TNF-alpha), macrophage colony-stimulating factor (M-CSF/CSF1) and IL-1 and IL-6, contributing further to tumor growth.
Macrophages have been shown to infiltrate a number of tumors. Their number correlates with poor prognosis in certain cancers, including cancers of breast, cervix, bladder, brain and prostate. Some tumors can also produce factors, including M-CSF/CSF1, MCP-1/CCL2 and Angiotensin II, that trigger the amplification and mobilization of macrophages in tumors. Additionally, subcapsular sinus macrophages in tumor-draining lymph nodes can suppress cancer progression by containing the spread of tumor-derived materials.
Cancer therapy
Experimental studies indicate that macrophages can affect all therapeutic modalities, including surgery, chemotherapy, radiotherapy, immunotherapy and targeted therapy. Macrophages can influence treatment outcomes both positively and negatively. Macrophages can be protective in different ways: they can remove dead tumor cells (in a process called phagocytosis) following treatments that kill these cells; they can serve as drug depots for some anticancer drugs; and they can be activated by some therapies to promote antitumor immunity. Macrophages can also be deleterious in several ways: for example, they can suppress the efficacy of various chemotherapies, radiotherapies and immunotherapies. Because macrophages can regulate tumor progression, therapeutic strategies to reduce the number of these cells, or to manipulate their phenotypes, are currently being tested in cancer patients. However, macrophages are also involved in antibody-dependent cell-mediated cytotoxicity (ADCC), and this mechanism has been proposed to be important for certain cancer immunotherapy antibodies. Similarly, studies have identified macrophages genetically engineered to express chimeric antigen receptors as a promising therapeutic approach to lowering tumor burden.
Obesity
It has been observed that an increased number of pro-inflammatory macrophages within obese adipose tissue contributes to obesity complications, including insulin resistance and type 2 diabetes.
The modulation of the inflammatory state of adipose tissue macrophages has therefore been considered a possible therapeutic target to treat obesity-related diseases. Although adipose tissue macrophages are subject to anti-inflammatory homeostatic control by sympathetic innervation, experiments using ADRB2 gene knockout mice indicate that this effect is indirectly exerted through the modulation of adipocyte function, and not through direct Beta-2 adrenergic receptor activation, suggesting that adrenergic stimulation of macrophages may be insufficient to impact adipose tissue inflammation or function in obesity.
Within the fat (adipose) tissue of CCR2-deficient mice, there is an increased number of eosinophils, greater alternative macrophage activation, and a propensity towards type 2 cytokine expression. Furthermore, this effect was exaggerated when the mice became obese on a high-fat diet. This is partially caused by a phenotype switch of macrophages induced by necrosis of fat cells (adipocytes). In an obese individual some adipocytes burst and undergo necrotic death, which causes the resident M2 macrophages to switch to the M1 phenotype. This is one of the causes of a low-grade systemic chronic inflammatory state associated with obesity.
Intestinal macrophages
Though very similar in structure to tissue macrophages, intestinal macrophages have evolved specific characteristics and functions given their natural environment, the digestive tract. Like other macrophages, intestinal macrophages are highly plastic, and their phenotype is altered by their environment. Like macrophages elsewhere, intestinal macrophages are differentiated monocytes, though intestinal macrophages have to coexist with the microbiome in the intestines. This is a challenge considering the bacteria found in the gut are not recognized as "self" and could be potential targets for phagocytosis by the macrophage.
To prevent the destruction of the gut bacteria, intestinal macrophages have developed key differences compared with other macrophages. Primarily, intestinal macrophages do not induce inflammatory responses. Whereas tissue macrophages release various inflammatory cytokines, such as IL-1, IL-6 and TNF-α, intestinal macrophages do not produce or secrete inflammatory cytokines. This change is directly caused by the intestinal macrophages' environment. Surrounding intestinal epithelial cells release TGF-β, which induces the change from proinflammatory macrophage to noninflammatory macrophage.
Even though the inflammatory response is downregulated in intestinal macrophages, phagocytosis is still carried out. There is no drop-off in phagocytosis efficiency, as intestinal macrophages are able to effectively phagocytize the bacteria S. typhimurium and E. coli, but intestinal macrophages still do not release cytokines, even after phagocytosis. Also, intestinal macrophages do not express receptors for lipopolysaccharide (LPS), IgA, or IgG. The lack of LPS receptors is important for the gut, as the intestinal macrophages do not detect the microbe-associated molecular patterns (MAMPs/PAMPs) of the intestinal microbiome. Nor do they express IL-2 and IL-3 growth factor receptors.
Role in disease
Intestinal macrophages have been shown to play a role in inflammatory bowel disease (IBD), such as Crohn's disease (CD) and ulcerative colitis (UC). In a healthy gut, intestinal macrophages limit the inflammatory response in the gut, but in a disease state, intestinal macrophage numbers and diversity are altered. This leads to inflammation of the gut and disease symptoms of IBD. Intestinal macrophages are critical in maintaining gut homeostasis. The presence of inflammation or a pathogen alters this homeostasis, and concurrently alters the intestinal macrophages. It has yet to be determined whether the alteration of intestinal macrophages occurs by recruitment of new monocytes or by changes in the intestinal macrophages already present.
Additionally, a recent study suggests that macrophages limit iron access to bacteria by releasing extracellular vesicles, improving sepsis outcomes.
History
Macrophages were first discovered late in the 19th century by the zoologist Élie Metchnikoff. Metchnikoff revolutionized the study of macrophages by combining philosophical insights with the evolutionary study of life. Later, in the 1960s, van Furth proposed that circulating blood monocytes give rise to all tissue macrophages in adults. In recent years, publications on macrophages have led researchers to believe that multiple resident tissue macrophage populations are independent of blood monocytes, being formed during the embryonic stage of development. In the 21st century, the ideas concerning the origin of tissue macrophages were compiled to suggest that, in physiologically complex organisms, tissue macrophages can form independently by mechanisms that do not depend on blood monocytes.
| Biology and health sciences | Immune system | Biology |
169312 | https://en.wikipedia.org/wiki/Monoceros | Monoceros | Monoceros (Greek: , "unicorn") is a faint constellation on the celestial equator. Its definition is attributed to the 17th-century cartographer Petrus Plancius. It is bordered by Orion to the west, Gemini to the north, Canis Major to the south, and Hydra to the east. Other bordering constellations include Canis Minor, Lepus, and Puppis.
Features
Stars
Monoceros contains only a few fourth magnitude stars, making it difficult to see with the naked eye. Alpha Monocerotis has a visual magnitude of 3.93, while for Gamma Monocerotis it is 3.98.
Beta Monocerotis is a triple star system; the three stars form a fixed triangle. The visual magnitudes of the stars are 4.7, 5.2, and 6.1. William Herschel discovered it in 1781 and called it "one of the most beautiful sights in the heavens".
Epsilon Monocerotis is a fixed binary, with visual magnitudes of 4.5 and 6.5.
S Monocerotis, or 15 Monocerotis, is a bluish white variable star and is located at the center of NGC 2264. The variation in its magnitude is slight (4.2–4.6). It has a companion star of visual magnitude 8.
V838 Monocerotis, a variable red supergiant star, had an outburst starting on January 6, 2002; in February of that year, its brightness increased by a factor of 10,000 in one day. After the outburst was over, the Hubble Space Telescope was able to observe a light echo, which illuminated the dust surrounding the star.
Monoceros also contains Plaskett's Star, a massive binary system whose combined mass is estimated, per 2008 calculations, to be almost 100 solar masses.
Monoceros is the location of the binary system Scholz's Star, host to a red dwarf primary and brown dwarf secondary; the system performed a close flypast of the Solar System approximately 70,000 years ago, travelling within 120,000 astronomical units of the Sun within the Oort cloud.
One of the nearest known black holes to the Solar System is in this constellation. The binary star system A0620-00 in Monoceros is roughly 3,300 light-years (about 1,000 parsecs) away. The black hole is estimated to be 6.6 solar masses.
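As a quick unit check (assuming only the standard conversion 1 pc ≈ 3.26 ly, which is not stated in the article):

$$\frac{3300\ \text{ly}}{3.26\ \text{ly/pc}} \approx 1.0 \times 10^{3}\ \text{pc}$$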
Planets
Monoceros contains two super-Earth exoplanets in one planetary system: CoRoT-7b was detected by the CoRoT satellite and CoRoT-7c was detected by the High Accuracy Radial Velocity Planet Searcher from ground-based telescopes. Until the announcement of Kepler-10b in January 2011, CoRoT-7b was the smallest exoplanet to have its diameter measured, at 1.58 times that of the Earth (which would give it a volume 3.95 times Earth's). Both planets in this system were discovered in 2009.
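The parenthetical volume figure is simply the cube of the radius ratio, since volume scales with the cube of linear size:

$$\frac{V}{V_\oplus} = \left(\frac{R}{R_\oplus}\right)^{3} = 1.58^{3} \approx 3.95$$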
Deep-sky objects
Part of the galactic plane goes through Monoceros, so background galaxies are concealed by interstellar dust. Monoceros contains many clusters and nebulae; most notable among them are:
Messier 50, an open cluster
The Rosette Nebula (NGC 2237, 2238, 2239, and 2246) is a diffuse nebula in Monoceros. It has an overall magnitude of 6.0 and is 4900 light-years from Earth. The Rosette Nebula, over 100 light-years in diameter, has an associated star cluster and possesses many Bok globules in its dark areas. It was independently discovered in the 1880s by Lewis Swift (early 1880s) and Edward Emerson Barnard (1883) as they hunted for comets.
The Christmas Tree Cluster (NGC 2264) is another open cluster in Monoceros. Named for its resemblance to a Christmas tree, it is fairly bright at an overall magnitude of 3.9; it is 2400 light-years from Earth. The variable star S Monocerotis represents the tree's trunk, while the variable star V429 Monocerotis represents its top.
The Cone Nebula (NGC 2264), associated with the Christmas Tree Cluster, is a very dim nebula that contains a dark conic structure. It appears clearly in photographs, but is very elusive in a telescope. The nebula contains several Herbig–Haro objects, which are small irregularly variable nebulae. They are associated with protostars.
NGC 2254 is an open cluster with an overall magnitude of 9.7, 7100 light-years from Earth. It is a Shapley class f and Trumpler class I 2 p cluster, meaning that it appears to be a fairly rich cluster overall, though it has fewer than 50 stars. It appears distinct from the background star field and is very concentrated at its center; its stars range moderately in brightness.
Hubble's Variable Nebula (NGC 2261) is a nebula with an approximate magnitude of 10, 2500 light-years from Earth. It is named for Edwin Hubble, and was discovered in 1783 by Herschel. Hubble's Variable Nebula is illuminated by R Monocerotis, a young variable star embedded in the nebula; the star's unique interaction with the material in the nebula makes it both an emission nebula and a reflection nebula. One hypothesis regarding their interaction is that the nebula and its illuminating star are a very early stage planetary system.
IC 447, a reflection nebula.
History
In Western astronomy, Monoceros is a relatively modern constellation, not one of Ptolemy's 48 in the Almagest. Its first certain appearance was on a globe created by the cartographer Petrus Plancius in 1612 or 1613 and it was later charted by German astronomer Jakob Bartsch as Unicornu on his star chart of 1624.
German astronomers Heinrich Wilhelm Olbers and Ludwig Ideler indicate (according to Richard Hinckley Allen's allegations) that the constellation may be older, quoting an astrological work from 1564 that mentioned "the second horse between the Twins and the Crab has many stars, but not very bright"; these references may ultimately be due to the 13th-century Scotsman Michael Scot, but refer to a horse and not a unicorn, and its position does not quite match. Joseph Scaliger (died 1609) is reported to have found Monoceros on an ancient Persian sphere. Astronomer Camille Flammarion (died 1925) believed that a former constellation, Neper (the "Auger"), occupied the part of the sky now deemed Monoceros and Microscopium, but this is disputed.
Chinese asterisms Sze Fūh, the Four Great Canals; Kwan Kew; and Wae Choo, the Outer Kitchen, all lay within the boundaries of Monoceros.
| Physical sciences | Other | Astronomy |
3051520 | https://en.wikipedia.org/wiki/Footballfish | Footballfish | The footballfish form a family, Himantolophidae, of globose, deep-sea anglerfishes found in tropical and subtropical waters of the Atlantic, Indian, and Pacific Ocean. The family contains 23 species, all of which are classified in a single genus, Himantolophus.
Taxonomy
The footballfish genus, Himantolophus, was first proposed as a monospecific genus in 1837 by the Norwegian zoologist Johan Reinhardt when he described Himantolophus groenlandicus. Reinhardt gave the type locality of H. groenlandicus as being near Godthaab in Greenland, where it had been washed ashore. In 1861 Theodore Gill placed Himantolophus in the new monotypic family Himantolophidae. The 5th edition of Fishes of the World classifies this family in the suborder Ceratioidei of the anglerfish order Lophiiformes.
Etymology
The footballfish family and genus names are derived from a combination of himantos, which means a "leather strap", "thong" or "leash", with lophus, meaning "crest" or "tuft". Reinhardt did not explain this name, but it is thought to refer to the thick, leathery illicium of the type species, H. groenlandicus.
Species and species groups
There are currently 23 recognized species in this genus and these are divided into species groups as set out below.
groenlandicus group
Himantolophus crinitus Bertelsen & G. Krefft, 1988
Himantolophus danae Regan & Trewavas, 1932
Himantolophus groenlandicus Reinhardt, 1837 (Atlantic footballfish)
Himantolophus paucifilosus Bertelsen & G. Krefft, 1988
Himantolophus sagamius S. Tanaka (I), 1918 (Pacific footballfish)
appelii group
Himantolophus appelii F. E. Clarke, 1878 (Prickly anglerfish)
Himantolophus stewarti Pietsch & Kenaley, 2011
nigricornis group
Himantolophus melanolophus Bertelsen & G. Krefft, 1988
Himantolophus nigricornis Bertelsen & G. Krefft, 1988
albinares group
Himantolophus albinares Maul, 1961
Himantolophus borealis Kharin, 1984
Himantolophus kalami Rajeeshjumar, Pietsch & Saravanane, 2022
Himantolophus mauli Bertelsen & G. Krefft, 1988
Himantolophus multifurcatus Bertelsen & G. Krefft, 1988
Himantolophus pseudalbinares Bertelsen & G. Krefft, 1988
cornifer group
Himantolophus azurlucens Beebe & Crane, 1947
Himantolophus compressus Osório, 1912
Himantolophus cornifer Bertelsen & G. Krefft, 1988
Himantolophus litoceras A. L. Stewart & Pietsch, 2010
Himantolophus macroceras Bertelsen & G. Krefft, 1988
Himantolophus macroceratoides Bertelsen & G. Krefft, 1988
brevirostris group
Himantolophus brevirostris Regan, 1925
rostratus group
Himantolophus rostratus Regan, 1925
These groups were determined from the morphology of the metamorphosed females, except for brevirostris and rostratus which were determined from males only.
Characteristics
Footballfish are sexually dimorphic, with the metamorphosed females and males being very different in appearance. The metamorphosed females are distinguished from other anglerfishes of the suborder Ceratioidei by having a well-developed lower jaw which protrudes beyond the snout. They also have a wide, toothless vomer, well-developed spines on the sphenotic bone, a covering of low, rounded papillae on the snout and chin and, at least in larger individuals, conical spines in the skin scattered over the head and body. The escas of footballfishes vary in size and morphology to a greater extent than those of other deep-sea anglerfishes. The metamorphosed males have a line of large spines above and behind the upper denticular bone; their eyes are directed to the sides and are moderately sized, and they have a large olfactory system with sideways-pointing nostrils. They have between 16 and 31 denticular teeth on the snout and between 20 and 50 on the chin; these teeth merge at their bases to form the upper and lower denticular bones. Their skin has a dense covering of dermal spinules. The larvae are round, with a swollen appearance to the skin, and have pectoral fins that do not extend beyond the dorsal and anal fins; the females have a small club-shaped rudimentary illicium. The males are considerably smaller than the females; for example, in H. groenlandicus the maximum published standard length for a male is , while that of a female is .
Distribution and habitat
Footballfishes are found in the mesopelagic and bathypelagic zones in the Atlantic, Pacific and Indian Oceans, as well as the Southern Ocean.
Biology
Footballfishes are one of the ceratioid groups in which the males are free-living and non-parasitic on the females. The males use their highly developed olfactory organs to detect females; once they find a female, they attach themselves to her but do not fuse with her to become parasitic. The eggs and larvae are pelagic. The specialised teeth on the denticular bones are used to temporarily attach the male to the female. There is a record of a female with a scar on her skin that was probably caused by a male that became detached.
At the depths at which these fishes live it is dark and food is sparse and rarely encountered. The female footballfish have bioluminescent bacteria in their escas and this is used to attract prey to within striking distance of the mouth. The prey is whatever they can fit into their mouths, and the backward curving teeth ensure that prey are unable to escape. Recorded prey includes fishes, squid and crustaceans.
| Biology and health sciences | Acanthomorpha | Animals |
3051962 | https://en.wikipedia.org/wiki/Homography | Homography | In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. It is a bijection that maps lines to lines, and thus a collineation. In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity, projective transformation, and projective collineation.
Historically, homographies (and projective spaces) have been introduced to study perspective and projections in Euclidean geometry, and the term homography, which, etymologically, roughly means "similar drawing", dates from this time. At the end of the 19th century, formal definitions of projective spaces were introduced, which extended Euclidean and affine spaces by the addition of new points called points at infinity. The term "projective transformation" originated in these abstract constructions. These constructions divide into two classes that have been shown to be equivalent. A projective space may be constructed as the set of the lines of a vector space over a given field (the above definition is based on this version); this construction facilitates the definition of projective coordinates and allows using the tools of linear algebra for the study of homographies. The alternative approach consists in defining the projective space through a set of axioms, which do not involve explicitly any field (incidence geometry, see also synthetic geometry); in this context, collineations are easier to define than homographies, and homographies are defined as specific collineations, thus called "projective collineations".
For the sake of simplicity, unless otherwise stated, the projective spaces considered in this article are supposed to be defined over a (commutative) field. Equivalently, Pappus's hexagon theorem and Desargues's theorem are supposed to be true. A large part of the results remain true, or may be generalized, to projective geometries for which these theorems do not hold.
Geometric motivation
Historically, the concept of homography had been introduced to understand, explain and study visual perspective, and, specifically, the difference in appearance of two plane objects viewed from different points of view.
In three-dimensional Euclidean space, a central projection from a point O (the center) onto a plane P that does not contain O is the mapping that sends a point A to the intersection (if it exists) of the line OA and the plane P. The projection is not defined if the point A belongs to the plane passing through O and parallel to P. The notion of projective space was originally introduced by extending the Euclidean space, that is, by adding points at infinity to it, in order to define the projection for every point except O.
Given another plane Q, which does not contain O, the restriction to Q of the above projection is called a perspectivity.
With these definitions, a perspectivity is only a partial function, but it becomes a bijection if extended to projective spaces. Therefore, this notion is normally defined for projective spaces. The notion is also easily generalized to projective spaces of any dimension, over any field, in the following way:
If f is a perspectivity from P to Q, and g a perspectivity from Q to P, with a different center, then the composition g ∘ f is a homography from P to itself, which is called a central collineation when the dimension of P is at least two. (See below.)
Originally, a homography was defined as the composition of a finite number of perspectivities. It is a part of the fundamental theorem of projective geometry (see below) that this definition coincides with the more algebraic definition sketched in the introduction and detailed below.
Definition and expression in homogeneous coordinates
A projective space P(V) of dimension n over a field K may be defined as the set of the lines through the origin in a K-vector space V of dimension n + 1. If a basis of V has been fixed, a point of V may be represented by a point of K^(n+1). A point of P(V), being a line in V, may thus be represented by the coordinates of any nonzero point of this line, which are thus called homogeneous coordinates of the projective point.
Given two projective spaces P(V) and P(W) of the same dimension, a homography is a mapping from P(V) to P(W), which is induced by an isomorphism of vector spaces f : V → W. Such an isomorphism induces a bijection from P(V) to P(W), because of the linearity of f. Two such isomorphisms, f and g, define the same homography if and only if there is a nonzero element a of K such that g = af.
This may be written in terms of homogeneous coordinates in the following way: A homography φ may be defined by a nonsingular (n + 1) × (n + 1) matrix [a_{i,j}], called the matrix of the homography. This matrix is defined up to multiplication by a nonzero element of K. The homogeneous coordinates [x_1 : ... : x_{n+1}] of a point and the coordinates [y_1 : ... : y_{n+1}] of its image by φ are related by

y_i = a_{i,1} x_1 + ... + a_{i,n+1} x_{n+1},    for i = 1, ..., n + 1.
When the projective spaces are defined by adding points at infinity to affine spaces (projective completion), the preceding formulas become, in affine coordinates,

y_i = (a_{i,1} x_1 + ... + a_{i,n} x_n + a_{i,n+1}) / (a_{n+1,1} x_1 + ... + a_{n+1,n} x_n + a_{n+1,n+1}),    for i = 1, ..., n,
which generalizes the expression of the homographic function of the next section. This defines only a partial function between affine spaces, which is defined only outside the hyperplane where the denominator is zero.
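As an illustrative sketch (not from the article), the following Python code applies a homography of the projective plane to affine points via homogeneous coordinates; the matrix entries and function name are arbitrary choices for the example.

```python
import numpy as np

# An arbitrary nonsingular 3x3 matrix defining a homography of the
# projective plane; it is only determined up to a nonzero scale factor.
H = np.array([[1.0, 0.2, 3.0],
              [0.1, 0.9, -1.0],
              [0.01, 0.02, 1.0]])

def apply_homography(H, point):
    """Map an affine 2D point through the homography H.

    The point (x, y) is lifted to homogeneous coordinates (x, y, 1),
    multiplied by H, then de-homogenized by dividing by the last
    coordinate (undefined on the hyperplane where that denominator is 0).
    """
    x = np.append(point, 1.0)   # homogeneous lift
    y = H @ x                   # linear action on K^3
    if np.isclose(y[-1], 0.0):
        raise ValueError("image is a point at infinity")
    return y[:-1] / y[-1]       # back to affine coordinates

p = np.array([2.0, 5.0])
print(apply_homography(H, p))
# Scaling H by any nonzero constant defines the same homography:
print(apply_homography(7.0 * H, p))
```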
Homographies of a projective line
The projective line over a field K may be identified with the union of K and a point, called the "point at infinity" and denoted by ∞ (see Projective line). With this representation of the projective line, the homographies are the mappings

x ↦ (ax + b) / (cx + d),    with a, b, c, d in K and ad − bc ≠ 0,

extended by the usual conventions at ∞ (for c ≠ 0, the image of ∞ is a/c and the image of −d/c is ∞),
which are called homographic functions or linear fractional transformations.
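A minimal runnable sketch of such a homographic function over the rationals, with the point at infinity handled explicitly (the INF marker and function names are illustrative choices, not a standard API):

```python
from fractions import Fraction

INF = "inf"  # marker for the point at infinity of the projective line

def homography(a, b, c, d):
    """Return the map x -> (a*x + b)/(c*x + d) on Q ∪ {INF}.

    Requires a*d - b*c != 0, so that the map is a bijection.
    """
    if a * d - b * c == 0:
        raise ValueError("need ad - bc != 0")

    def f(x):
        if x == INF:  # conventional image of the point at infinity
            return Fraction(a, c) if c != 0 else INF
        num, den = a * x + b, c * x + d
        return INF if den == 0 else Fraction(num, den)

    return f

f = homography(2, 1, 1, 3)  # x -> (2x + 1)/(x + 3)
print(f(Fraction(1)))   # 3/4
print(f(INF))           # 2 (= a/c)
print(f(Fraction(-3)))  # inf (denominator vanishes)
```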
In the case of the complex projective line, which can be identified with the Riemann sphere, the homographies are called Möbius transformations.
These correspond precisely with those bijections of the Riemann sphere that preserve orientation and are conformal.
In the study of collineations, the case of projective lines is special due to the small dimension. When the line is viewed as a projective space in isolation, any permutation of the points of a projective line is a collineation, since every set of points is collinear. However, if the projective line is embedded in a higher-dimensional projective space, the geometric structure of that space can be used to impose a geometric structure on the line. Thus, in synthetic geometry, the homographies and the collineations of the projective line that are considered are those obtained by restrictions to the line of collineations and homographies of spaces of higher dimension. This means that the fundamental theorem of projective geometry (see below) remains valid in the one-dimensional setting. A homography of a projective line may also be properly defined by insisting that the mapping preserves cross-ratios.
Projective frame and coordinates
A projective frame or projective basis of a projective space of dimension n is an ordered set of n + 2 points such that no hyperplane contains n + 1 of them. A projective frame is sometimes called a simplex, although a simplex in a space of dimension n has at most n + 1 vertices.
Projective spaces over a commutative field are considered in this section, although most results may be generalized to projective spaces over a division ring.
Let P(V) be a projective space of dimension n, where V is a K-vector space of dimension n + 1, and let p : V ∖ {0} → P(V) be the canonical projection that maps a nonzero vector to the vector line that contains it.
For every frame of P(V), there exists a basis e_0, ..., e_n of V such that the frame is (p(e_0), ..., p(e_n), p(e_0 + ... + e_n)), and this basis is unique up to the multiplication of all its elements by the same nonzero element of K. Conversely, if e_0, ..., e_n is a basis of V, then (p(e_0), ..., p(e_n), p(e_0 + ... + e_n)) is a frame of P(V).
It follows that, given two frames, there is exactly one homography mapping the first one onto the second one. In particular, the only homography fixing the points of a frame is the identity map. This result is much more difficult in synthetic geometry (where projective spaces are defined through axioms). It is sometimes called the first fundamental theorem of projective geometry.
Every frame allows one to define projective coordinates, also known as homogeneous coordinates: every point may be written as p(v); the projective coordinates of p(v) on this frame are the coordinates of v on the basis (e_0, ..., e_n). It is not difficult to verify that changing the e_i and v, without changing the frame nor p(v), results in multiplying the projective coordinates by the same nonzero element of K.
The projective space P(K^(n+1)) has a canonical frame consisting of the images by p of the elements of the canonical basis of K^(n+1) (the tuples having only one nonzero entry, which is equal to 1), together with p((1, 1, ..., 1)). On this basis, the homogeneous coordinates of p(v) are simply the entries (coefficients) of the tuple v. Given another projective space P(V) of the same dimension, and a frame F of it, there is one and only one homography h mapping F onto the canonical frame of P(K^(n+1)). The projective coordinates of a point a on the frame F are the homogeneous coordinates of h(a) on the canonical frame of P(K^(n+1)).
Central collineations
In the above sections, homographies have been defined through linear algebra. In synthetic geometry, they are traditionally defined as the composition of one or several special homographies called central collineations. It is a part of the fundamental theorem of projective geometry that the two definitions are equivalent.
In a projective space, P, of dimension , a collineation of P is a bijection from P onto P that maps lines onto lines. A central collineation (traditionally these were called perspectivities, but this term may be confusing, having another meaning; see Perspectivity) is a bijection α from P to P, such that there exists a hyperplane H (called the axis of α), which is fixed pointwise by α (that is, for all points X in H) and a point O (called the center of α), which is fixed linewise by α (any line through O is mapped to itself by α, but not necessarily pointwise). There are two types of central collineations. Elations are the central collineations in which the center is incident with the axis and homologies are those in which the center is not incident with the axis. A central collineation is uniquely defined by its center, its axis, and the image α(P) of any given point P that differs from the center O and does not belong to the axis. (The image α(Q) of any other point Q is the intersection of the line defined by O and Q and the line passing through α(P) and the intersection with the axis of the line defined by P and Q.)
A central collineation is a homography defined by a (n+1) × (n+1) matrix that has an eigenspace of dimension n. It is a homology, if the matrix has another eigenvalue and is therefore diagonalizable. It is an elation, if all the eigenvalues are equal and the matrix is not diagonalizable.
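As a numerical illustration (our own sketch, assuming the input matrix is already known to have an eigenspace of dimension n), the eigenvalue criterion can be checked directly:

```python
import numpy as np

def classify_central_collineation(M):
    """Distinguish homology from elation for a matrix assumed to have
    an n-dimensional eigenspace (the axis of the central collineation).

    Homology: a second, distinct eigenvalue exists (diagonalizable).
    Elation: all eigenvalues are equal (matrix not diagonalizable).
    """
    eigvals = np.linalg.eigvals(M)
    n_distinct = len(set(np.round(eigvals, decimals=6)))
    return "homology" if n_distinct > 1 else "elation"

# Homology of the projective plane: axis z = 0 (eigenvalue 1, twice),
# center (0 : 0 : 1) with eigenvalue 2.
homology = np.diag([1.0, 1.0, 2.0])

# Elation: all eigenvalues 1, but a Jordan block prevents
# diagonalization; the 2-dimensional eigenspace is the axis.
elation = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

print(classify_central_collineation(homology))  # homology
print(classify_central_collineation(elation))   # elation
```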
The geometric view of a central collineation is easiest to see in a projective plane. Given a central collineation α, consider a line ℓ that does not pass through the center O, and its image ℓ′ = α(ℓ). Setting R = ℓ ∩ ℓ′, the axis of α is some line M through R. The image of any point A of ℓ under α is the intersection of OA with ℓ′. The image of a point B that does not belong to ℓ may be constructed in the following way: let S = AB ∩ M; since S lies on the axis, it is fixed by α, so the image of the line AB is the line Sα(A), and thus α(B) = Sα(A) ∩ OB.
The composition of two central collineations, while still a homography in general, is not a central collineation. In fact, every homography is the composition of a finite number of central collineations. In synthetic geometry, this property, which is a part of the fundamental theorem of projective geometry, is taken as the definition of homographies.
Fundamental theorem of projective geometry
There are collineations besides the homographies. In particular, any field automorphism σ of a field F induces a collineation of every projective space over F by applying σ to all homogeneous coordinates (over a projective frame) of a point. These collineations are called automorphic collineations.
The fundamental theorem of projective geometry consists of the three following theorems.
Given two projective frames of a projective space P, there is exactly one homography of P that maps the first frame onto the second one.
If the dimension of a projective space P is at least two, every collineation of P is the composition of an automorphic collineation and a homography. In particular, over the reals, every collineation of a projective space of dimension at least two is a homography.
Every homography is the composition of a finite number of perspectivities. In particular, if the dimension of the implied projective space is at least two, every homography is the composition of a finite number of central collineations.
If projective spaces are defined by means of axioms (synthetic geometry), the third part is simply a definition. On the other hand, if projective spaces are defined by means of linear algebra, the first part is an easy corollary of the definitions. Therefore, the proof of the first part in synthetic geometry, and the proof of the third part in terms of linear algebra both are fundamental steps of the proof of the equivalence of the two ways of defining projective spaces.
Homography groups
As every homography has an inverse mapping and the composition of two homographies is another homography, the homographies of a given projective space form a group. For example, the Möbius group is the homography group of the complex projective line.
As all the projective spaces of the same dimension over the same field are isomorphic, the same is true for their homography groups. They are therefore considered as a single group acting on several spaces, and only the dimension and the field appear in the notation, not the specific projective space.
Homography groups, also called projective linear groups, are denoted PGL(n + 1, F) when acting on a projective space of dimension n over a field F. The above definition of homographies shows that PGL(n + 1, F) may be identified with the quotient group GL(n + 1, F) / F×I, where GL(n + 1, F) is the general linear group of the invertible (n + 1) × (n + 1) matrices, and F×I is the group of the products of the identity matrix of size n + 1 by the nonzero elements of F.
When F is a Galois field GF(q), the homography group is written PGL(n + 1, q). For example, PGL(2, 7) acts on the eight points of the projective line over the finite field GF(7), while PGL(2, 4), which is isomorphic to the alternating group A5, is the homography group of the projective line with five points.
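These counts follow from standard formulas: the projective line over GF(q) has q + 1 points, and |PGL(2, q)| = q³ − q. A small illustrative check (helper names are ours):

```python
def projective_line_points(q):
    """Number of points of the projective line over GF(q): q + 1."""
    return q + 1

def pgl2_order(q):
    """Order of PGL(2, q): |GL(2, q)| / (q - 1) = q^3 - q."""
    return (q**2 - 1) * (q**2 - q) // (q - 1)

print(projective_line_points(7), pgl2_order(7))  # 8 points, order 336
print(projective_line_points(4), pgl2_order(4))  # 5 points, order 60 = |A5|
```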
The homography group is a subgroup of the collineation group of the collineations of a projective space of dimension n. When the points and lines of the projective space are viewed as a block design, whose blocks are the sets of points contained in a line, it is common to call the collineation group the automorphism group of the design.
Cross-ratio
The cross-ratio of four collinear points is an invariant under homographies that is fundamental for the study of the homographies of the lines.
Three distinct points a, b and c on a projective line over a field F form a projective frame of this line. There is therefore a unique homography h of this line onto F ∪ {∞} that maps a to ∞, b to 0, and c to 1. Given a fourth point d on the same line, the cross-ratio of the four points a, b, c and d, denoted (a, b; c, d), is the element h(d) of F ∪ {∞}. In other words, if d has homogeneous coordinates [k : 1] over the projective frame (a, b, c), then (a, b; c, d) = k.
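A short sketch over the rationals, using the classical formula that realizes the homography above and checking invariance under a sample homography (function names and sample points are our own illustration):

```python
from fractions import Fraction

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d): image of d under the homography
    sending a -> infinity, b -> 0, c -> 1 (affine inputs assumed,
    all four points distinct)."""
    return ((d - b) * (c - a)) / ((d - a) * (c - b))

# Invariance under the homography x -> (2x + 1)/(x + 3):
f = lambda x: (2 * x + 1) / (x + 3)
pts = [Fraction(0), Fraction(1), Fraction(2), Fraction(5)]
print(cross_ratio(*pts))          # 8/5
print(cross_ratio(*map(f, pts)))  # 8/5 again: the cross-ratio is preserved
```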
Over a ring
Suppose A is a ring and U is its group of units. Homographies act on a projective line over A, written P(A), consisting of points U[a, b] with projective coordinates. The homographies on P(A) are described by matrix mappings

U[z, 1] (a c / b d) = U[za + b, zc + d].

When A is a commutative ring, the homography may be written

z ↦ (za + b) / (zc + d),

but otherwise the linear fractional transformation is seen as an equivalence:

U[za + b, zc + d] ~ U[(zc + d)^(−1)(za + b), 1].
The homography group of the ring of integers Z is the modular group PSL(2, Z). Ring homographies have been used in quaternion analysis, and with dual quaternions to facilitate screw theory. The conformal group of spacetime can be represented with homographies where A is the composition algebra of biquaternions.
Periodic homographies
A homography can be periodic when the ring is Z/nZ (the integers modulo n): for instance, the homography z ↦ z + 1 has period n, since iterating it n times gives z ↦ z + n ≡ z (mod n).
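This can be checked at the matrix level: with the row-vector convention of the previous section, z ↦ z + 1 corresponds to a unipotent matrix whose n-th power is congruent to the identity modulo n. A naive illustrative check:

```python
import numpy as np

def matrix_power_mod(M, k, n):
    """Compute M^k with entries reduced modulo n (naive iteration)."""
    R = np.eye(2, dtype=np.int64)
    for _ in range(k):
        R = (R @ M) % n
    return R

n = 12
M = np.array([[1, 0],
              [1, 1]], dtype=np.int64)  # represents z -> z + 1
print(matrix_power_mod(M, n, n))        # identity mod 12: period divides 12
```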
Arthur Cayley was interested in periodicity when he calculated iterates in 1879.
In his review of a brute force approach to periodicity of homographies, H. S. M. Coxeter gave this analysis:
A real homography is involutory (of period 2) if and only if a + d = 0. If it is periodic with period m > 2, then it is elliptic, and no loss of generality occurs by assuming that ad − bc = 1. Since the characteristic roots are exp(±hπi/m), where gcd(h, m) = 1, the trace is a + d = 2 cos(hπ/m).
| Mathematics | Non-Euclidean geometry | null |
3053322 | https://en.wikipedia.org/wiki/Capsule%20%28pharmacy%29 | Capsule (pharmacy) | In the manufacture of pharmaceuticals, encapsulation refers to a range of dosage forms—techniques used to enclose medicines—in a relatively stable shell known as a capsule, allowing them to, for example, be taken orally or be used as suppositories. The two main types of capsules are:
Hard-shelled capsules, which contain dry, powdered ingredients or miniature pellets made by e.g. processes of extrusion or spheronization. These are made in two halves: a smaller-diameter "body" that is filled and then sealed using a larger-diameter "cap".
Soft-shelled capsules, primarily used for oils and for active ingredients that are dissolved or suspended in oil.
Both of these classes of capsules are made from aqueous solutions of gelling agents, such as animal protein (mainly gelatin) or plant polysaccharides or their derivatives (such as carrageenans and modified forms of starch and cellulose). Other ingredients can be added to the gelling agent solution including plasticizers such as glycerin or sorbitol to decrease the capsule's hardness, coloring agents, preservatives, disintegrants, lubricants and surface treatment.
Since their inception, capsules have been viewed by consumers as the most efficient method of taking medication. For this reason, producers of drugs such as OTC analgesics wanting to emphasize the strength of their product developed the "caplet", a portmanteau of "capsule-shaped tablet", to tie this positive association to more efficiently produced tablet pills, as well as being an easier-to-swallow shape than the usual disk-shaped tablet medication.
Single-piece gel encapsulation ("soft capsules")
In 1833, Mothes and Dublanc were granted a patent for a method to produce a single-piece gelatin capsule that was sealed with a drop of gelatin solution. They used individual iron molds for their process, filling the capsules individually with a medicine dropper. Later on, methods were developed that used sets of plates with pockets to form the capsules. Although some companies still use this method, the equipment is no longer produced commercially. All modern soft-gel encapsulation uses variations of a process developed by R. P. Scherer in 1933. His innovation used a rotary die to produce the capsules. They were then filled by blow molding. This method was high-yield, consistent, and reduced waste.
Softgels can be an effective delivery system for oral drugs, especially poorly soluble drugs. This is because the fill can contain liquid ingredients that help increase the solubility or permeability of the drug across the membranes in the body. Liquid ingredients are difficult to include in any other solid dosage form, such as a tablet. Softgels are also highly suited to potent drugs (for example, where the dose is <100 μg), where the highly reproducible filling process helps ensure each softgel has the same drug content, and because the operators are not exposed to any drug dust during the manufacturing process.
In 1949, the Lederle Laboratories division of the American Cyanamid Company developed the "Accogel" process, allowing powders to be accurately filled into soft gelatin capsules.
Two-piece gel encapsulation ("hard capsules")
James Murdoch of London patented the two-piece telescoping gelatin capsule in 1847. The capsules are made in two parts by dipping metal pins in the gelling agent solution. The capsules are supplied as closed units to the pharmaceutical manufacturer. Before use, the two halves are separated, and the capsule is filled with powder or more normally pellets made by the process of extrusion and spheronization (either by placing a compressed slug of powder into one half of the capsule or by filling one half of the capsule with loose powder) and the other half of the capsule is pressed on. With the compressed slug method, weight varies less between capsules. However, the machinery required to manufacture them is more complex.
The powder or spheroids inside the capsule contains the active ingredients and any excipients, such as binders, disintegrants, fillers, glidant, and preservatives.
Manufacturing materials
Gelatin capsules, informally called gel caps or gelcaps, are composed of gelatin manufactured from the collagen of animal skin or bone.
Vegetable capsules, introduced in 1989, are made from cellulose, a structural component in plants. The main ingredient of vegetarian capsules is hydroxypropyl methyl cellulose. In the 21st century, gelatin capsules are more broadly used than vegetarian capsules because the cost of production is lower.
Manufacturing equipment
The process of encapsulation of hard gelatin capsules can be done on manual, semi-automatic, and automatic capsule filling machines. Hard gelatin capsule shells are manufactured by the dipping method, which comprises dipping, rotation, drying, stripping, trimming, and joining. Softgels are filled at the same time as they are produced and sealed on the rotary die of a fully automatic machine. Capsule fill weight is a critical attribute in encapsulation, and various real-time fill weight monitoring techniques, such as near-infrared spectroscopy (NIR) and vibrational spectroscopy, are used, as well as in-line weight checks, to ensure product quality.
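As a purely hypothetical illustration of an in-line weight check (the tolerance, data, and helper names below are invented for the example and do not reflect any pharmacopoeial limit):

```python
import statistics

def check_fill_weights(weights_mg, target_mg, tol_pct=7.5):
    """Flag capsules whose fill weight deviates from the target by
    more than tol_pct percent (hypothetical acceptance rule)."""
    limit = target_mg * tol_pct / 100.0
    outliers = [w for w in weights_mg if abs(w - target_mg) > limit]
    return {
        "mean": statistics.mean(weights_mg),
        "stdev": statistics.stdev(weights_mg),
        "outliers": outliers,
    }

sample = [298.1, 301.5, 299.8, 310.2, 275.0, 300.4]  # mg, made-up data
print(check_fill_weights(sample, target_mg=300.0))
```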
Volume is measured to the fill line, which is customarily at the top of the smaller-diameter body half. After capping, some ullage volume (airspace) remains in the finished capsule.
Standard sizes of two-piece capsules
| Biology and health sciences | General concepts_2 | Health |
3053507 | https://en.wikipedia.org/wiki/Crystal%20growth | Crystal growth | A crystal is a solid material whose constituent atoms, molecules, or ions are arranged in an orderly repeating pattern extending in all three spatial dimensions. Crystal growth is a major stage of a crystallization process, and consists of the addition of new atoms, ions, or polymer strings into the characteristic arrangement of the crystalline lattice. The growth typically follows an initial stage of either homogeneous or heterogeneous (surface catalyzed) nucleation, unless a "seed" crystal, purposely added to start the growth, was already present.
The action of crystal growth yields a crystalline solid whose atoms or molecules are close packed, with fixed positions in space relative to each other.
The crystalline state of matter is characterized by a distinct structural rigidity and very high resistance to deformation (i.e. changes of shape and/or volume). Most crystalline solids have high values both of Young's modulus and of the shear modulus of elasticity. This contrasts with most liquids or fluids, which have a low shear modulus, and typically exhibit the capacity for macroscopic viscous flow.
Overview
After successful formation of a stable nucleus, a growth stage ensues in which free particles (atoms or molecules) adsorb onto the nucleus and propagate its crystalline structure outwards from the nucleating site. This process is significantly faster than nucleation. The reason for such rapid growth is that real crystals contain dislocations and other defects, which act as a catalyst for the addition of particles to the existing crystalline structure. By contrast, perfect crystals (lacking defects) would grow exceedingly slowly. On the other hand, impurities can act as crystal growth inhibitors and can also modify crystal habit.
Nucleation
Nucleation can be either homogeneous, without the influence of foreign particles, or heterogeneous, with the influence of foreign particles. Generally, heterogeneous nucleation takes place more quickly since the foreign particles act as a scaffold for the crystal to grow on, thus eliminating the necessity of creating a new surface and the incipient surface energy requirements.
Heterogeneous nucleation can take place by several methods. Some of the most typical are small inclusions, or cuts, in the container the crystal is being grown on. This includes scratches on the sides and bottom of glassware. A common practice in crystal growing is to add a foreign substance, such as a string or a rock, to the solution, thereby providing nucleation sites for facilitating crystal growth and reducing the time to fully crystallize.
The number of nucleating sites can also be controlled in this manner. If a brand-new piece of glassware or a plastic container is used, crystals may not form because the container surface is too smooth to allow heterogeneous nucleation. On the other hand, a badly scratched container will result in many lines of small crystals. To achieve a moderate number of medium-sized crystals, a container which has a few scratches works best. Likewise, adding small previously made crystals, or seed crystals, to a crystal growing project will provide nucleating sites to the solution. The addition of only one seed crystal should result in a larger single crystal.
Mechanisms of growth
The interface between a crystal and its vapor can be molecularly sharp at temperatures well below the melting point. An ideal crystalline surface grows by the spreading of single layers, or equivalently, by the lateral advance of the growth steps bounding the layers. For perceptible growth rates, this mechanism requires a finite driving force (or degree of supercooling) in order to lower the nucleation barrier sufficiently for nucleation to occur by means of thermal fluctuations. In the theory of crystal growth from the melt, Burton and Cabrera have distinguished between two major mechanisms:
Non-uniform lateral growth
The surface advances by the lateral motion of steps which are one interplanar spacing in height (or some integral multiple thereof). An element of surface undergoes no change and does not advance normal to itself except during the passage of a step, and then it advances by the step height. It is useful to consider the step as the transition between two adjacent regions of a surface which are parallel to each other and thus identical in configuration—displaced from each other by an integral number of lattice planes. Note here the distinct possibility of a step in a diffuse surface, even though the step height would be much smaller than the thickness of the diffuse surface.
Uniform normal growth
The surface advances normal to itself without the necessity of a stepwise growth mechanism. This means that in the presence of a sufficient thermodynamic driving force, every element of surface is capable of a continuous change contributing to the advancement of the interface. For a sharp or discontinuous surface, this continuous change may be more or less uniform over large areas for each successive new layer. For a more diffuse surface, a continuous growth mechanism may require changes over several successive layers simultaneously.
Non-uniform lateral growth is a geometrical motion of steps, as opposed to motion of the entire surface normal to itself. Alternatively, uniform normal growth is based on the time sequence of an element of surface: in the lateral mode, an element of surface undergoes no motion or change except during the passage of a step, whereas in the normal mode it changes continually. The prediction of which mechanism will be operative under any set of given conditions is fundamental to the understanding of crystal growth. Two criteria have been used to make this prediction:
Whether or not the surface is diffuse: a diffuse surface is one in which the change from one phase to another is continuous, occurring over several atomic planes. This is in contrast to a sharp surface for which the major change in property (e.g. density or composition) is discontinuous, and is generally confined to a depth of one interplanar distance.
Whether or not the surface is singular: a singular surface is one in which the surface tension as a function of orientation has a pointed minimum. Growth of singular surfaces is known to require steps, whereas it is generally held that non-singular surfaces can continuously advance normal to themselves.
Driving force
Consider next the necessary requirements for the appearance of lateral growth. It is evident that the lateral growth mechanism will be found when any area in the surface can reach a metastable equilibrium in the presence of a driving force. It will then tend to remain in such an equilibrium configuration until the passage of a step. Afterward, the configuration will be identical except that each part of the step will have advanced by the step height. If the surface cannot reach equilibrium in the presence of a driving force, then it will continue to advance without waiting for the lateral motion of steps.
Thus, Cahn concluded that the distinguishing feature is the ability of the surface to reach an equilibrium state in the presence of the driving force. He also concluded that for every surface or interface in a crystalline medium, there exists a critical driving force, which, if exceeded, will enable the surface or interface to advance normal to itself, and, if not exceeded, will require the lateral growth mechanism.
Thus, for sufficiently large driving forces, the interface can move uniformly without the benefit of either a heterogeneous nucleation or screw dislocation mechanism. What constitutes a sufficiently large driving force depends upon the diffuseness of the interface, so that for extremely diffuse interfaces, this critical driving force will be so small that any measurable driving force will exceed it. Alternatively, for sharp interfaces, the critical driving force will be very large, and most growth will occur by the lateral step mechanism.
Note that in a typical solidification or crystallization process, the thermodynamic driving force is dictated by the degree of supercooling.
Morphology
It is generally believed that the mechanical and other properties of the crystal are also pertinent to the subject matter, and that crystal morphology provides the missing link between growth kinetics and physical properties. The necessary thermodynamic apparatus was provided by Josiah Willard Gibbs' study of heterogeneous equilibrium. He provided a clear definition of surface energy, by which the concept of surface tension is made applicable to solids as well as liquids. He also appreciated that an anisotropic surface free energy implied a non-spherical equilibrium shape, which should be thermodynamically defined as the shape which minimizes the total surface free energy.
It may be instructional to note that whisker growth provides the link between the mechanical phenomenon of high strength in whiskers and the various growth mechanisms which are responsible for their fibrous morphologies. (Prior to the discovery of carbon nanotubes, single-crystal whiskers had the highest tensile strength of any materials known). Some mechanisms produce defect-free whiskers, while others may have single screw dislocations along the main axis of growth—producing high strength whiskers.
The mechanism behind whisker growth is not well understood, but seems to be encouraged by compressive mechanical stresses including mechanically induced stresses, stresses induced by diffusion of different elements, and thermally induced stresses. Metal whiskers differ from metallic dendrites in several respects. Dendrites are fern-shaped like the branches of a tree, and grow across the surface of the metal. In contrast, whiskers are fibrous and project at a right angle to the surface of growth, or substrate.
Diffusion-control
Very commonly when the supersaturation (or degree of supercooling) is high, and sometimes even when it is not high, growth kinetics may be diffusion-controlled, which means the transport of atoms or molecules to the growing nucleus is limiting the velocity of crystal growth. Assuming the nucleus in such a diffusion-controlled system is a perfect sphere, the growth velocity, corresponding to the change of the radius $r$ with time $t$, can be determined with Fick's laws.
1. Fick's law: $J = -D \frac{dc}{dx}$,
where $J$ is the flux of atoms in the dimension of $\mathrm{atoms}\,\mathrm{m}^{-2}\,\mathrm{s}^{-1}$, $D$ is the diffusion coefficient and $\frac{dc}{dx}$ is the concentration gradient.
2. Fick's law: $\frac{\partial c}{\partial t} = D \frac{\partial^2 c}{\partial x^2}$,
where $\frac{\partial c}{\partial t}$ is the change of the concentration with time.
The first law can be adjusted to the flux of matter onto a specific surface, in this case the surface of the spherical nucleus:
$\frac{dN}{dt} = 4\pi r^2 D \frac{dc}{dx}$,
where $\frac{dN}{dt}$ is now the flux onto the spherical surface in the dimension of $\mathrm{atoms}\,\mathrm{s}^{-1}$ and $4\pi r^2$ is the area of the spherical nucleus. $\frac{dN}{dt}$ can also be expressed as the change of the number of atoms in the nucleus over time, with the number of atoms $N$ in the nucleus being:
$N = \frac{4\pi r^3}{3\Omega}$,
where $\frac{4}{3}\pi r^3$ is the volume of the spherical nucleus and $\Omega$ is the atomic volume. Therefore, the change of the number of atoms in the nucleus over time will be:
$\frac{dN}{dt} = \frac{4\pi r^2}{\Omega} \frac{dr}{dt}.$
Combining both equations for $\frac{dN}{dt}$, the following expression for the growth velocity is obtained:
$\frac{dr}{dt} = D\,\Omega\,\frac{dc}{dx}.$
From Fick's second law for spheres the equation below can be obtained:
$\frac{\partial c}{\partial t} = D \left( \frac{\partial^2 c}{\partial x^2} + \frac{2}{x} \frac{\partial c}{\partial x} \right).$
Assuming that the diffusion profile does not change over time but is only shifted with the growing radius, it can be said that $\frac{\partial c}{\partial t} = 0$, which leads to $x^2 \frac{\partial c}{\partial x}$ being constant. This constant can be indicated with the letter $B$, and integrating will result in the following equation:
$c(x) = c_r + B \left( \frac{1}{r} - \frac{1}{x} \right), \qquad r \le x \le \delta,$
where $r$ is the radius of the nucleus, $\delta$ is the distance from the nucleus where the equilibrium concentration $c_0$ is recovered and $c_r$ is the concentration right at the surface of the nucleus. Now the expression for $B$ can be found from the boundary condition $c(\delta) = c_0$:
$B = \frac{c_0 - c_r}{\frac{1}{r} - \frac{1}{\delta}}.$
Therefore, the growth velocity for a diffusion-controlled system can be described as:
$\frac{dr}{dt} = D\,\Omega \left. \frac{dc}{dx} \right|_{x=r} = \frac{D\,\Omega}{r^2} \cdot \frac{c_0 - c_r}{\frac{1}{r} - \frac{1}{\delta}} \;\approx\; \frac{D\,\Omega\,(c_0 - c_r)}{r} \quad \text{for } \delta \gg r.$
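To make the scaling behavior concrete, the following sketch (a minimal illustration, not from the source; all parameter values are arbitrary placeholders, not data for any particular material) numerically integrates the simplified growth law $dr/dt = D\Omega\,\Delta c / r$, valid when $\delta \gg r$, and checks it against the analytic square-root-of-time solution.
```python
import numpy as np

# Simplified diffusion-controlled growth law: dr/dt = D * Omega * dc / r
# (the delta >> r limit of the expression derived above).
# All values below are arbitrary placeholders for illustration only.
D = 1e-9        # diffusion coefficient, m^2/s
Omega = 1e-29   # atomic volume, m^3
dc = 1e25       # concentration excess c0 - cr, atoms/m^3
r0 = 1e-8       # initial nucleus radius, m

dt = 1e-6       # time step, s
steps = 100_000

r = r0
radii = []
for _ in range(steps):
    r += D * Omega * dc / r * dt   # explicit Euler update
    radii.append(r)

times = dt * np.arange(1, steps + 1)
# The analytic solution of dr/dt = k/r is r(t) = sqrt(r0^2 + 2*k*t),
# so the radius should scale as sqrt(t) at long times.
k = D * Omega * dc
analytic = np.sqrt(r0**2 + 2 * k * times)
print("max relative error vs sqrt-law:",
      np.max(np.abs(np.array(radii) - analytic) / analytic))
```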
Under such diffusion-controlled conditions, the polyhedral crystal form will be unstable: it will sprout protrusions at its corners and edges, where the degree of supersaturation is at its highest level. The tips of these protrusions will clearly be the points of highest supersaturation. It is generally believed that the protrusion will become longer (and thinner at the tip) until the effect of interfacial free energy in raising the chemical potential slows the tip growth and maintains a constant value for the tip thickness.
In the subsequent tip-thickening process, there should be a corresponding instability of shape. Minor bumps or "bulges" should be exaggerated—and develop into rapidly growing side branches. In such an unstable (or metastable) situation, minor degrees of anisotropy should be sufficient to determine directions of significant branching and growth. The most appealing aspect of this argument, of course, is that it yields the primary morphological features of dendritic growth.
| Physical sciences | Crystallography | Physics |
3054853 | https://en.wikipedia.org/wiki/Three-dimensional%20space | Three-dimensional space | In geometry, a three-dimensional space (3D space, 3-space or, rarely, tri-dimensional space) is a mathematical space in which three values (coordinates) are required to determine the position of a point. Most commonly, it is the three-dimensional Euclidean space, that is, the Euclidean space of dimension three, which models physical space. More general three-dimensional spaces are called 3-manifolds.
The term may also refer colloquially to a subset of space, a three-dimensional region (or 3D domain), a solid figure.
Technically, a tuple of $n$ numbers can be understood as the Cartesian coordinates of a location in an $n$-dimensional Euclidean space. The set of these $n$-tuples is commonly denoted $\mathbb{R}^n$ and can be identified with the pair formed by an $n$-dimensional Euclidean space and a Cartesian coordinate system.
When $n = 3$, this space is called the three-dimensional Euclidean space (or simply "Euclidean space" when the context is clear). In classical physics, it serves as a model of the physical universe, in which all known matter exists. When relativity theory is considered, it can be considered a local subspace of space-time. While this space remains the most compelling and useful way to model the world as it is experienced, it is only one example of a 3-manifold. In this classical example, when the three values refer to measurements in different directions (coordinates), any three directions can be chosen, provided that these directions do not lie in the same plane. Furthermore, if these directions are pairwise perpendicular, the three values are often labeled by the terms width/breadth, height/depth, and length.
History
Books XI to XIII of Euclid's Elements dealt with three-dimensional geometry. Book XI develops notions of orthogonality and parallelism of lines and planes, and defines solids including parallelepipeds, pyramids, prisms, spheres, octahedra, icosahedra and dodecahedra. Book XII develops notions of similarity of solids. Book XIII describes the construction of the five regular Platonic solids in a sphere.
In the 17th century, three-dimensional space was described with Cartesian coordinates, with the advent of analytic geometry developed by René Descartes in his work La Géométrie and Pierre de Fermat in the manuscript Ad locos planos et solidos isagoge (Introduction to Plane and Solid Loci), which was unpublished during Fermat's lifetime. However, only Fermat's work dealt with three-dimensional space.
In the 19th century, developments of the geometry of three-dimensional space came with William Rowan Hamilton's development of the quaternions. In fact, it was Hamilton who coined the terms scalar and vector, and they were first defined within his geometric framework for quaternions. Three-dimensional space could then be described by quaternions $q = a + ui + vj + wk$ which had vanishing scalar component, that is, $a = 0$. While not explicitly studied by Hamilton, this indirectly introduced notions of basis, here given by the quaternion elements $i, j, k$, as well as the dot product and cross product, which correspond to (the negative of) the scalar part and the vector part of the product of two vector quaternions.
It was not until Josiah Willard Gibbs that these two products were identified in their own right, and the modern notation for the dot and cross product were introduced in his classroom teaching notes, found also in the 1901 textbook Vector Analysis written by Edwin Bidwell Wilson based on Gibbs' lectures.
Also during the 19th century came developments in the abstract formalism of vector spaces, with the work of Hermann Grassmann and Giuseppe Peano, the latter of whom first gave the modern definition of vector spaces as an algebraic structure.
In Euclidean geometry
Coordinate systems
In mathematics, analytic geometry (also called Cartesian geometry) describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross. They are usually labeled $x$, $y$, and $z$. Relative to these axes, the position of any point in three-dimensional space is given by an ordered triple of real numbers, each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the plane determined by the other two axes.
Other popular methods of describing the location of a point in three-dimensional space include cylindrical coordinates and spherical coordinates, though there are an infinite number of possible methods. For more, see Euclidean space.
Below are images of the above-mentioned systems.
Lines and planes
Two distinct points always determine a (straight) line. Three distinct points are either collinear or determine a unique plane. On the other hand, four distinct points can either be collinear, coplanar, or determine the entire space.
Two distinct lines can either intersect, be parallel or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane.
Two distinct planes can either meet in a common line or are parallel (i.e., do not meet). Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common. In the last case, the three lines of intersection of each pair of planes are mutually parallel.
A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane. In the last case, there will be lines in the plane that are parallel to the given line.
A hyperplane is a subspace of one dimension less than the dimension of the full space. The hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes. In terms of Cartesian coordinates, the points of a hyperplane satisfy a single linear equation, so planes in this 3-space are described by linear equations. A line can be described by a pair of independent linear equations—each representing a plane having this line as a common intersection.
Varignon's theorem states that the midpoints of any quadrilateral in $\mathbb{R}^3$ form a parallelogram, and hence are coplanar.
Spheres and balls
A sphere in 3-space (also called a 2-sphere because it is a 2-dimensional object) consists of the set of all points in 3-space at a fixed distance $r$ from a central point $P$. The solid enclosed by the sphere is called a ball (or, more precisely, a 3-ball).
The volume of the ball is given by
$V = \frac{4}{3}\pi r^3,$
and the surface area of the sphere is
$A = 4\pi r^2.$
Another type of sphere arises from a 4-ball, whose three-dimensional surface is the 3-sphere: the set of points equidistant from the origin of the Euclidean space $\mathbb{R}^4$. If a point has coordinates $P(x, y, z, w)$, then $x^2 + y^2 + z^2 + w^2 = 1$ characterizes those points on the unit 3-sphere centered at the origin.
This 3-sphere is an example of a 3-manifold: a space which 'looks locally' like 3-D space. In precise topological terms, each point of the 3-sphere has a neighborhood which is homeomorphic to an open subset of 3-D space.
Polytopes
In three dimensions, there are nine regular polytopes: the five convex Platonic solids and the four nonconvex Kepler-Poinsot polyhedra.
Surfaces of revolution
A surface generated by revolving a plane curve about a fixed line in its plane as an axis is called a surface of revolution. The plane curve is called the generatrix of the surface. A section of the surface, made by intersecting the surface with a plane that is perpendicular (orthogonal) to the axis, is a circle.
Simple examples occur when the generatrix is a line. If the generatrix line intersects the axis line, the surface of revolution is a right circular cone with vertex (apex) the point of intersection. However, if the generatrix and axis are parallel, then the surface of revolution is a circular cylinder.
Quadric surfaces
In analogy with the conic sections, the set of points whose Cartesian coordinates satisfy the general equation of the second degree, namely,
$Ax^2 + By^2 + Cz^2 + Fxy + Gyz + Hxz + Jx + Ky + Lz + M = 0,$
where $A, B, \ldots, M$ are real numbers and not all of $A, B, C, F, G, H$ are zero, is called a quadric surface.
There are six types of non-degenerate quadric surfaces:
Ellipsoid
Hyperboloid of one sheet
Hyperboloid of two sheets
Elliptic cone
Elliptic paraboloid
Hyperbolic paraboloid
The degenerate quadric surfaces are the empty set, a single point, a single line, a single plane, a pair of planes or a quadratic cylinder (a surface consisting of a non-degenerate conic section in a plane $\pi$ and all the lines of $\mathbb{R}^3$ through that conic that are normal to $\pi$). Elliptic cones are sometimes considered to be degenerate quadric surfaces as well.
Both the hyperboloid of one sheet and the hyperbolic paraboloid are ruled surfaces, meaning that they can be made up from a family of straight lines. In fact, each has two families of generating lines; the members of each family are disjoint, and each member of one family intersects, with just one exception, every member of the other family. Each family is called a regulus.
In linear algebra
Another way of viewing three-dimensional space is found in linear algebra, where the idea of independence is crucial. Space has three dimensions because the length of a box is independent of its width or breadth. In the technical language of linear algebra, space is three-dimensional because every point in space can be described by a linear combination of three independent vectors.
Dot product, angle, and length
A vector can be pictured as an arrow. The vector's magnitude is its length, and its direction is the direction the arrow points. A vector in can be represented by an ordered triple of real numbers. These numbers are called the components of the vector.
The dot product of two vectors $\mathbf{A} = [A_1, A_2, A_3]$ and $\mathbf{B} = [B_1, B_2, B_3]$ is defined as:
$\mathbf{A} \cdot \mathbf{B} = A_1 B_1 + A_2 B_2 + A_3 B_3.$
The magnitude of a vector $\mathbf{A}$ is denoted by $\|\mathbf{A}\|$. The dot product of a vector $\mathbf{A}$ with itself is
$\mathbf{A} \cdot \mathbf{A} = \|\mathbf{A}\|^2,$
which gives
$\|\mathbf{A}\| = \sqrt{A_1^2 + A_2^2 + A_3^2},$
the formula for the Euclidean length of the vector.
Without reference to the components of the vectors, the dot product of two non-zero Euclidean vectors $\mathbf{A}$ and $\mathbf{B}$ is given by
$\mathbf{A} \cdot \mathbf{B} = \|\mathbf{A}\|\,\|\mathbf{B}\| \cos\theta,$
where $\theta$ is the angle between $\mathbf{A}$ and $\mathbf{B}$.
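These relations are easy to check numerically; the sketch below (a minimal illustration using NumPy, with made-up example vectors) recovers a vector's length from the dot product with itself and the angle between two vectors from the cosine formula.
```python
import numpy as np

A = np.array([1.0, 2.0, 2.0])   # arbitrary example vectors
B = np.array([3.0, 0.0, 4.0])

# Euclidean length from the dot product of a vector with itself.
length_A = np.sqrt(A @ A)       # equals np.linalg.norm(A) -> 3.0

# Angle between A and B from A.B = |A| |B| cos(theta).
cos_theta = (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # clip guards rounding error

print(length_A, np.degrees(theta))
```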
Cross product
The cross product or vector product is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. The cross product A × B of the vectors A and B is a vector that is perpendicular to both and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering.
In function language, the cross product is a function $\times : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^3$.
The components of the cross product are $\mathbf{A} \times \mathbf{B} = [A_2 B_3 - A_3 B_2,\; A_3 B_1 - A_1 B_3,\; A_1 B_2 - A_2 B_1]$, and it can also be written in components, using Einstein summation convention, as $(\mathbf{A} \times \mathbf{B})_i = \varepsilon_{ijk} A_j B_k$, where $\varepsilon_{ijk}$ is the Levi-Civita symbol. It has the property that $\mathbf{A} \times \mathbf{B} = -\mathbf{B} \times \mathbf{A}$.
Its magnitude is related to the angle $\theta$ between $\mathbf{A}$ and $\mathbf{B}$ by the identity
$\|\mathbf{A} \times \mathbf{B}\| = \|\mathbf{A}\|\,\|\mathbf{B}\|\,|\sin\theta|.$
The space and product form an algebra over a field, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Specifically, the space together with the product is isomorphic to the Lie algebra of three-dimensional rotations, denoted $\mathfrak{so}(3)$. In order to satisfy the axioms of a Lie algebra, instead of associativity the cross product satisfies the Jacobi identity. For any three vectors $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$,
$\mathbf{A} \times (\mathbf{B} \times \mathbf{C}) + \mathbf{B} \times (\mathbf{C} \times \mathbf{A}) + \mathbf{C} \times (\mathbf{A} \times \mathbf{B}) = 0.$
One can in $n$ dimensions take the product of $n - 1$ vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions.
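The defining properties above also lend themselves to a quick numerical check; the following sketch (illustrative only, using random example vectors) verifies perpendicularity, antisymmetry, the sine magnitude identity and the Jacobi identity with NumPy.
```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))  # three random vectors in R^3

AxB = np.cross(A, B)

# Perpendicularity: A x B is orthogonal to both factors.
assert abs(AxB @ A) < 1e-12 and abs(AxB @ B) < 1e-12

# Antisymmetry: A x B = -(B x A).
assert np.allclose(AxB, -np.cross(B, A))

# Magnitude identity: |A x B| = |A| |B| sin(theta).
cos_t = (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
sin_t = np.sqrt(1.0 - cos_t**2)
assert np.isclose(np.linalg.norm(AxB),
                  np.linalg.norm(A) * np.linalg.norm(B) * sin_t)

# Jacobi identity: A x (B x C) + B x (C x A) + C x (A x B) = 0.
jacobi = (np.cross(A, np.cross(B, C))
          + np.cross(B, np.cross(C, A))
          + np.cross(C, np.cross(A, B)))
assert np.allclose(jacobi, 0.0)
print("all cross product identities verified")
```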
Abstract description
It can be useful to describe three-dimensional space as a three-dimensional vector space $V$ over the real numbers. This differs from $\mathbb{R}^3$ in a subtle way. By definition, there exists a basis $\mathcal{B}$ for $V$, and this corresponds to an isomorphism between $V$ and $\mathbb{R}^3$. However, there is no 'preferred' or 'canonical' basis for $V$.
On the other hand, there is a preferred basis for $\mathbb{R}^3$, which is due to its description as a Cartesian product of copies of $\mathbb{R}$, that is, $\mathbb{R}^3 = \mathbb{R} \times \mathbb{R} \times \mathbb{R}$. This allows the definition of canonical projections, $\pi_i : \mathbb{R}^3 \to \mathbb{R}$, where $1 \le i \le 3$. For example, $\pi_1(x_1, x_2, x_3) = x_1$. This then allows the definition of the standard basis $\{E_1, E_2, E_3\}$ defined by
$\pi_j(E_i) = \delta_{ij},$
where $\delta_{ij}$ is the Kronecker delta. Written out in full, the standard basis is
$E_1 = (1, 0, 0), \quad E_2 = (0, 1, 0), \quad E_3 = (0, 0, 1).$
Therefore $\mathbb{R}^3$ can be viewed as the abstract vector space, together with the additional structure of a choice of basis. Conversely, $V$ can be obtained by starting with $\mathbb{R}^3$ and 'forgetting' the Cartesian product structure, or equivalently the standard choice of basis.
As opposed to a general vector space $V$, the space $\mathbb{R}^3$ is sometimes referred to as a coordinate space.
Physically, it is conceptually desirable to use the abstract formalism in order to assume as little structure as possible if it is not given by the parameters of a particular problem. For example, in a problem with rotational symmetry, working with the more concrete description of three-dimensional space assumes a choice of basis, corresponding to a set of axes. But in rotational symmetry, there is no reason why one set of axes is preferred to, say, the same set of axes which has been rotated arbitrarily. Stated another way, a preferred choice of axes breaks the rotational symmetry of physical space.
Computationally, it is necessary to work with the more concrete description in order to do concrete computations.
Affine description
A more abstract description still is to model physical space as a three-dimensional affine space over the real numbers. This is unique up to affine isomorphism. It is sometimes referred to as three-dimensional Euclidean space. Just as the vector space description came from 'forgetting the preferred basis' of , the affine space description comes from 'forgetting the origin' of the vector space. Euclidean spaces are sometimes called Euclidean affine spaces for distinguishing them from Euclidean vector spaces.
This is physically appealing as it makes the translation invariance of physical space manifest. A preferred origin breaks the translational invariance.
Inner product space
The above discussion does not involve the dot product. The dot product is an example of an inner product. Physical space can be modelled as a vector space which additionally has the structure of an inner product. The inner product defines notions of length and angle (and therefore in particular the notion of orthogonality). For any inner product, there exist bases under which the inner product agrees with the dot product, but again, there are many different possible bases, none of which are preferred. They differ from one another by a rotation, an element of the group of rotations SO(3).
In calculus
Gradient, divergence and curl
In a rectangular coordinate system, the gradient of a (differentiable) function $f : \mathbb{R}^3 \to \mathbb{R}$ is given by
$\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j} + \frac{\partial f}{\partial z}\mathbf{k}$
and in index notation is written
$(\nabla f)_i = \partial_i f.$
The divergence of a (differentiable) vector field F = U i + V j + W k, that is, a function $\mathbf{F} : \mathbb{R}^3 \to \mathbb{R}^3$, is equal to the scalar-valued function:
$\nabla \cdot \mathbf{F} = \frac{\partial U}{\partial x} + \frac{\partial V}{\partial y} + \frac{\partial W}{\partial z}.$
In index notation, with Einstein summation convention this is
$\nabla \cdot \mathbf{F} = \partial_i F_i.$
Expanded in Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), the curl ∇ × F is, for F composed of [Fx, Fy, Fz]:
$\nabla \times \mathbf{F} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_x & F_y & F_z \end{vmatrix}$
where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. This expands as follows:
$\nabla \times \mathbf{F} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\mathbf{i} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right)\mathbf{j} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\mathbf{k}.$
In index notation, with Einstein summation convention this is
$(\nabla \times \mathbf{F})_i = \varepsilon_{ijk} \partial_j F_k,$
where $\varepsilon_{ijk}$ is the totally antisymmetric symbol, the Levi-Civita symbol.
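These operators can also be explored symbolically; the sketch below (a minimal example, with an arbitrary made-up scalar field f and vector field F) computes the gradient, divergence and curl with SymPy's vector module.
```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')  # Cartesian frame with unit vectors N.i, N.j, N.k

# Arbitrary example scalar and vector fields, for illustration only.
f = N.x**2 * N.y + N.z
F = N.x * N.y * N.i + N.y * N.z * N.j + N.z * N.x * N.k

print(gradient(f))    # 2*x*y i + x**2 j + 1 k  (in N coordinates)
print(divergence(F))  # x + y + z
print(curl(F))        # -y i - z j - x k
```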
Line, surface, and volume integrals
For some scalar field f : U ⊆ Rn → R, the line integral along a piecewise smooth curve C ⊂ U is defined as
$\int_C f \, ds = \int_a^b f(\mathbf{r}(t)) \, |\mathbf{r}'(t)| \, dt,$
where r: [a, b] → C is an arbitrary bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C and $a < b$.
For a vector field F : U ⊆ Rn → Rn, the line integral along a piecewise smooth curve C ⊂ U, in the direction of r, is defined as
$\int_C \mathbf{F} \cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t) \, dt,$
where · is the dot product and r: [a, b] → C is a bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C.
A surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analog of the line integral. To find an explicit formula for the surface integral, we need to parameterize the surface of interest, S, by considering a system of curvilinear coordinates on S, like the latitude and longitude on a sphere. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. Then, the surface integral is given by
$\iint_S f \, dS = \iint_T f(\mathbf{x}(s, t)) \left\| \frac{\partial \mathbf{x}}{\partial s} \times \frac{\partial \mathbf{x}}{\partial t} \right\| ds \, dt,$
where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of x(s, t), and is known as the surface element. Given a vector field v on S, that is a function that assigns to each x in S a vector v(x), the surface integral can be defined component-wise according to the definition of the surface integral of a scalar field; the result is a vector.
A volume integral is an integral over a three-dimensional domain or region.
When the integrand is trivial (unity), the volume integral is simply the region's volume.
It can also mean a triple integral within a region D in R3 of a function $f(x, y, z)$, and is usually written as:
$\iiint_D f(x, y, z) \, dx \, dy \, dz.$
Fundamental theorem of line integrals
The fundamental theorem of line integrals says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve.
Let $\varphi : U \subseteq \mathbb{R}^n \to \mathbb{R}$ be differentiable. Then, for a curve $\gamma$ in $U$ from $\mathbf{p}$ to $\mathbf{q}$,
$\varphi(\mathbf{q}) - \varphi(\mathbf{p}) = \int_{\gamma} \nabla\varphi(\mathbf{r}) \cdot d\mathbf{r}.$
Stokes' theorem
Stokes' theorem relates the surface integral of the curl of a vector field F over a surface Σ in Euclidean three-space to the line integral of the vector field over its boundary ∂Σ:
$\iint_\Sigma \nabla \times \mathbf{F} \cdot d\boldsymbol{\Sigma} = \oint_{\partial\Sigma} \mathbf{F} \cdot d\mathbf{r}.$
Divergence theorem
Suppose $V$ is a subset of $\mathbb{R}^n$ (in the case of $n = 3$, $V$ represents a volume in 3D space) which is compact and has a piecewise smooth boundary $S$ (also indicated with $\partial V = S$). If $\mathbf{F}$ is a continuously differentiable vector field defined on a neighborhood of $V$, then the divergence theorem says:
$\iiint_V (\nabla \cdot \mathbf{F}) \, dV = \iint_{\partial V} (\mathbf{F} \cdot \mathbf{n}) \, dS.$
The left side is a volume integral over the volume $V$, the right side is the surface integral over the boundary of the volume $V$. The closed manifold $\partial V$ is quite generally the boundary of $V$ oriented by outward-pointing normals, and $\mathbf{n}$ is the outward pointing unit normal field of the boundary $\partial V$. ($d\mathbf{S}$ may be used as a shorthand for $\mathbf{n} \, dS$.)
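As a quick sanity check of the identity, consider the field $\mathbf{F} = (x, y, z)$ on the unit cube, for which $\nabla \cdot \mathbf{F} = 3$ and the volume integral is exactly 3. The sketch below (illustrative only; field and domain are chosen purely for simplicity) approximates the outward flux over the six faces by midpoint quadrature and recovers the same value.
```python
import numpy as np

# F(x, y, z) = (x, y, z); div F = 3, so the volume integral over [0,1]^3 is 3.
n = 200
h = 1.0 / n
c = (np.arange(n) + 0.5) * h          # midpoints of a face grid
U, V = np.meshgrid(c, c, indexing="ij")

def F(x, y, z):
    return np.array([x, y, z])

flux = 0.0
for axis in range(3):
    for side, sign in ((0.0, -1.0), (1.0, +1.0)):
        # On the face where coordinate `axis` equals `side`, the outward
        # normal is +/- e_axis, so F.n is +/- that coordinate's value.
        coords = [U, V]
        coords.insert(axis, np.full_like(U, side))
        Fn = sign * F(*coords)[axis]
        flux += Fn.sum() * h * h       # midpoint-rule surface quadrature

print(flux)  # ~ 3.0, matching the volume integral of div F
```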
In topology
Three-dimensional space has a number of topological properties that distinguish it from spaces of other dimension numbers. For example, at least three dimensions are required to tie a knot in a piece of string.
In differential geometry the generic three-dimensional spaces are 3-manifolds, which locally resemble $\mathbb{R}^3$.
In finite geometry
Many ideas of dimension can be tested with finite geometry. The simplest instance is PG(3,2), which has Fano planes as its 2-dimensional subspaces. It is an instance of Galois geometry, a study of projective geometry using finite fields. Thus, for any Galois field GF(q), there is a projective space PG(3,q) of three dimensions. For example, any three skew lines in PG(3,q) are contained in exactly one regulus.
| Mathematics | Geometry and topology | null |
3054910 | https://en.wikipedia.org/wiki/Juniperus%20scopulorum | Juniperus scopulorum | Juniperus scopulorum, the Rocky Mountain juniper, is a species of juniper native to western North America, from southwest Canada to the Great Plains of the United States and small areas of northern Mexico. They are the most widespread of all the New World junipers. They are relatively small trees, occasionally just a large bush or stunted snag. They tend to be found in isolated groves or even as single trees rather than as the dominant tree of a forest. Though they can survive fires, they are vulnerable to them especially when young and this is one of the factors that can limit their spread into grasslands.
Rocky Mountain junipers provide habitat and food for wildlife. They provide cover to a range of species, from small birds and mammals to deer and bighorn sheep. Their berry-like cones are eaten by many animals and their scaly leaves and small twigs are browsed in small amounts by large herbivores. The primary human use is in landscaping for aesthetic purposes, to shelter habitations, or attract fruit-eating birds. They are also used in small amounts for their insect repellent and rot-resistant wood or as firewood for heating.
Description
Juniperus scopulorum is a small evergreen tree that in favorable conditions may reach as much as in height. However, on sites with little water or intense sun it will only attain shrub height, and even those that reach tree size will more typically be tall in open juniper woodlands. Younger trees have a narrow pyramidal shape, but develop into a rounded, oval, or spreading and irregular crown when older. They may either have a single trunk or multiple stems. Trunks can be large on mature trees, in diameter. When the subsoil is difficult to penetrate and lacks moisture, the roots of Juniperus scopulorum spread out. They are numerous and fibrous in the upper part of the soil. When soils are deep and well drained they will grow to a greater depth.
On twigs between in diameter the bark is smooth. On larger twigs and branches it becomes rough, coming off in thin strips. Juniperus scopulorum has red-brown bark on branches that can weather to grey on the trunk. The texture of the bark is rough and comes away in shreds on the trunk with brown bark showing in the cracks at times. Branches tend to grow outwards a short distance and then curve to growing upwards (ascending). In sheltered to somewhat shady locations the branches may hang downwards and be quite slender. The very ends, the branchlets, can either stand upright or hang down.
Young shoots are very slender. All the leaves are light to dark green, but are often covered in a waxy coating that gives a blue or white cast to the leaves (glaucous), making them appear blue-gray or blue. On immature trees they will be covered in sharp, needle-like "whip leaves" long that stick out from the shoots. The needles will not have the waxy coating on their upper surface. What appear to be green scales on the shoots of adult trees are the mature leaves, which clasp the shoot in opposite pairs with the next pair down or up the shoot rotated a quarter turn (90°) around it (decussate). Occasionally they will be turned one third to make the shoot three sided. The scales either do not overlap or overlap for at most one-fifth of their length.
The seed cones are berry-like, round to somewhat irregular in shape with two lobes (globose to bilobed), and in diameter. They are dark blue-black in color, but will be pale blue-white when covered in natural wax. The berries most often contain two seeds, but may contain one or three; they are mature in about 18 months. The pollen cones are long, and shed their pollen in early spring, generally in April. It is usually dioecious, producing cones of only one sex on each tree, but is occasionally monoecious.
Chemistry
Rocky Mountain juniper is an aromatic plant. Essential oil extracted from the trunk is prominent in cis-thujopsene, α-pinene, cedrol, allo-aromadendrene epoxide, (E)-caryophyllene, and widdrol. Limb essential oil is primarily α-pinene and leaf essential oil is primarily sabinene. Experiments with deer have found that oxygenated monoterpenes, like sabinene, inhibit the gut bacteria of ruminants and deer show the expected preference for foliage lower in these chemicals.
Similar species
Juniperus scopulorum is closely related to eastern redcedar (Juniperus virginiana), and often hybridizes with it where their ranges meet on the Great Plains. It will also form hybrids with alligator juniper (Juniperus deppeana), creeping juniper (Juniperus horizontalis), oneseed juniper (Juniperus monosperma), and Utah juniper (Juniperus osteosperma). The population of juniper trees in Mexico near the former site of Colonia Pacheco, Chihuahua is a hybrid with Juniperus blancoi. There is some disagreement whether hybrids are formed with the oneseed juniper in the wild.
Taxonomy
Juniperus scopulorum was first described and named as a separate species by Charles Sprague Sargent in 1897. Previously trees had been identified as one of its two close relatives, Juniperus virginiana or Juniperus occidentalis. Parts of the species were described as Juniperus virginiana var. montana by George Vasey in 1876 and as Juniperus occidentalis var. pleiosperma by George Engelmann in 1877. Its proper classification has continued to be debated by botanists with Per Axel Rydberg proposing to move it to his new genus as Sabina scopulorum in 1900 and Albert Edward Murray publishing a paper in 1983 that reclassified it as a subspecies under the name Juniperus virginiana subsp. scopulorum.
Isolated populations of junipers occur close to sea level in the Puget Sound area in Washington Park near Anacortes and southwestern British Columbia in a park called Smugglers Cove. In both locales, there are a considerable number of young and old specimens. A 2007 paper showed that they are genetically distinct, and proposed that it be recognized as a new species Juniperus maritima. If valid, it is a cryptic species barely distinguishable on morphology, though it does differ in phenology, with the cones maturing in 14–16 months, and often has the tips of the seeds exposed at the cone apex. However, as of 2024 it is listed as a synonym by both Plants of the World Online (POWO) and World Flora Online (WFO).
As of 2024 Juniperus scopulorum is listed as an accepted species with no subspecies by POWO, WFO, and World Plants.
Names
The genus name Juniperus is classical Latin, rather than botanical Latin, and was the name used in antiquity for this type of tree. The species name (specific epithet), scopulorum, derives from Latin with the meaning "of rocky cliffs", a reference to its frequent occurrence in rocky areas. The most common of its English names, "Rocky Mountain juniper", was at first applied to Juniperus occidentalis in 1841. Because J. scopulorum was at first largely considered the same as the Eastern red cedar, no unique common name was required for it, and when it was recognized as a species it was most often called "Rocky Mountain red cedar", a common name now applied to Thuja plicata. Other common names used in the United States include "river juniper", "mountain red cedar", "Colorado red cedar", "weeping juniper", and "Rocky Mountain redcedar". In one unusual locality in Spring Valley, Nevada they are known as "swamp cedar" for growing in a relatively wet canyon bottom. In Canada it is also known as the "western red-cedar" and similar variations in English and "genévrier des Montagnes Rocheuses" (literally juniper of the Rocky Mountains) and genévrier des montagnes du Colorado (juniper of the mountains of Colorado) in French. In casual conversation the trees will usually simply be called "cedars" or "junipers" without qualification by residents of the western United States.
Distribution and habitat
Rocky Mountain junipers are found across a wider range than any other New World juniper species, though it is almost nowhere a common species. More often they are scattered widely across the landscape in isolated groups, groves, or stands. The species is native to western North America, in Canada in south British Columbia and southwest Alberta, in the United States sporadically from Washington east to North Dakota, south to Arizona and also locally western Texas, and northernmost Mexico from Sonora east to Coahuila. It grows at elevations of on dry soils, often together with other juniper species. It requires at least of annual precipitation, though the average for its range is and it survives on Vancouver Island with as much as of precipitation. Though it grows in very dry environments in western North America and has great drought endurance, it is not as adapted to dry conditions as other western juniper species.
The trees are very numerous in the lower mountains and foothills where grasslands or scrublands transition to low forests. In the Southern Rocky Mountains, the Colorado Plateau, and parts of Nevada Juniperus scopulorum is associated with the various species of piñon pine as a key species of the piñon-juniper woodland. At edges and lower elevations the junipers are more numerous, with a gradual transition to all piñons at higher elevations. It is also a minor part of forests above this, such as ponderosa pine forests (Pinus ponderosa) and areas dominated by Gambel oak (Quercus gambelii). Starting in northern Colorado and northern Utah the Rocky Mountain juniper dominates a woodland type named for the species and found through Idaho and Montana into southern Canada.
Though tolerant of a wide range of soil conditions, Juniperus scopulorum strongly prefers soils that are alkaline and high in calcium. They grow to their maximum size on deep, moist, but well draining soils with plenty of organic matter. More often they are found on poor, dry soils, especially ones formed from basalt, limestone, sandstone, lavas, and shale. They are also tolerant of soils with a significant amount of clay or that have a subsoil that is naturally cemented together like hardpan. Though obtaining a greater size in more sheltered locations they will successfully grow on rock outcrops with no soil and on high ridges. In the mountains to the north of Colorado and Utah the trees grow on relatively dry sites, often south facing slopes. In the south it grows in more sheltered locations and canyons, with the transition occurring in Colorado.
In one instance it has adapted to quite extreme conditions for a juniper, growing on wet clay soils in Spring Valley, Nevada. There it grows in the valley bottom as an almost riparian species and also survives moderately salty water. A similar pattern is also found in the farthest south populations of the species found in Mexico. There it largely grows near streams in canyons.
Pleistocene distribution
Towards the end of the Last Glacial Period, from 13,500 to 10,000 years before the present, Rocky Mountain juniper grew at much lower elevations in what is now the Great Basin and desert Southwest. Evidence from pack rat middens shows that plant vegetation zones were 300 to 1100 meters lower in elevation than they are at present. In the Southern Rockies in what is now Colorado and Wyoming juniper woodlands were about 600 meters lower than in the modern Holocene epoch. The relic groves still found on the Great Plains and the Laramie Basin in Wyoming are likely remnants of this older distribution. During the ice age the north of its present range was largely covered in glaciers, and the areas not covered in ice were far too cold for it, with populations only reaching as far north as isolated refuges in present-day south-eastern Wyoming, southeastern Oregon, southern Idaho, and northern Colorado.
Ecology and conservation
At lower elevations, in the absence of fire, Juniperus scopulorum may be considered a climax species, one that comes late in the succession of plant species, and perhaps more adapted to stable environments. At higher elevations of the Intermountain West dominated by Douglas fir (Pseudotsuga menziesii), it may be considered more of a pioneer species. Rocky Mountain juniper is a relatively slow growing species with an average age (at one site) of eight years for saplings 30 centimeters in height. As a species they have difficulty becoming established on constantly dry sites and have greater success establishing in areas that catch temporary water, such as rocky crevices and slight depressions.
Until they are approximately 50 years old Rocky Mountain junipers are vulnerable to fire due to thin bark and relatively large concentrations of resins and oils. Trees that are burned have no ability to regenerate from the roots. Older trees are still vulnerable to fires, but may survive if they lack lower branches for a ground fire to climb into the crown of the tree. Because of this, fire is sometimes used as a method to control junipers in rangeland, but if there is not enough fuel on the ground the fire is less effective in killing trees targeted for removal. Most older trees show signs of having survived four to six fires in their lifetimes. Historically fire was one of the factors that maintained open, grassy plains and prevented the invasion of trees like Rocky Mountain juniper. With frequent fires they are restricted to rocky areas that have little to no fuel load to ignite the trees. Prior to European-American settlement of the west, fires typically recurred at intervals of 50–100 years in most forests, including piñon-juniper woodlands.
The foliage of Juniperus scopulorum is heavily browsed by mule deer, particularly in the winter. Studies of their winter foraging habits show that together with big sagebrush (Artemisia tridentata) and bitterbrush (Purshia spp.) it may make up two-thirds of their diet in winter. However, when given a choice, mule deer prefer alligator juniper (Juniperus deppeana), with its lower content of volatile oils, to Rocky Mountain juniper. The presence of cover in the form of small trees and large bushes, like Rocky Mountain juniper, is also important to mule deer. When the trees are removed from a landscape there is more and higher-quality food for the deer, but their numbers decrease; when junipers repopulate a range, deer numbers increase. Overpopulation of deer is a factor in causing junipers to dominate an area.
In areas with many deer eating the young shoots, the trees will have a distinctive "browse-line" with bare limbs and trunk. Deer also show strong preferences for the foliage of certain "ice-cream trees" with deer making much more of an effort to browse upon them. The reason for this preference is unknown, but captive deer will show the same preference when offered branches trimmed from trees in more controlled experiments.
Two species of mites are known to use Rocky Mountain juniper leaf-scales as a food source, Oligonychus ununguis and Eurytetranychus admes. Usually they are a minor pest, but occasionally their numbers can explode and cause serious damage to trees.
The iridescent olive-green juniper hairstreak butterfly eats the leaves of this and other juniper species as a caterpillar. As adults, the males are usually found on or around juniper trees waiting for females. They have two flights per year and overwinter as pupae in the soil.
The parasitic plant juniper mistletoe (Phoradendron juniperinum) will use Rocky Mountain juniper as a host, along with other juniper species. Though harmful to the trees it is not as dangerous as the dwarf mistletoes which attack other conifer species. Once infected with juniper mistletoe it is very difficult or impossible to remove the parasite from the host. The mistletoe berries provide food for fruit eating birds in the winter.
In Montana a study of pine-juniper woodlands with Rocky Mountain junipers found that mourning doves will make use of them as a nesting site, though they prefer limber pines. A different study of piñon-juniper woodlands found that mourning doves prefer junipers as nesting locations. Another bird which makes use of them as a nesting location is the chipping sparrow. On the northern plains Rocky Mountain juniper stands support a wide variety of bird species, directly or indirectly. The American robin is one of the most frequently observed species in stands. Other birds observed year round in the groves include black-capped chickadees, black-billed magpies, and long-eared owls. The appropriately named juniper titmouse also makes use of J. scopulorum groves when available, though it does not favor one species of juniper in particular. Many songbirds enthusiastically eat the soft, slightly sweet cones, including American robins, solitaires, and waxwings. The fruits are highly attractive to Townsend's solitaire, the mockingbird, pine grosbeak, and evening grosbeak. The Bohemian waxwing is especially noted for consuming large amounts of the berries. In a controlled experiment by Dr. Edgar Alexander Mearns a caged bird consumed 900 of them in five hours. Larger animals also consume the cones, including black bears, bighorn sheep, and mule deer.
The seeds of Rocky Mountain juniper are initially reluctant to sprout. Due to a combination of chemical inhibitors and a waterproof coating on the seeds they only germinate at high numbers in their second year off the tree.
Conservation
In 2011 the IUCN evaluated Juniperus scopulorum as least concern as it is a widespread species with an increasing population and no other significant threats. Similarly NatureServe reviewed its status in 2016 and rated it globally secure (G5). They found populations of the species to be imperiled (S2) in Saskatchewan and Oklahoma. They also gave the populations in Alberta and Oregon the status of vulnerable (S3).
Notable trees
One particular individual, the Jardine Juniper in Utah, is thought to be over 1,500 years old, though some erroneous estimates of its age previously attributed 3,000 years to it. The oldest known tree in South Dakota is an unnamed tree north of the town of Custer. Found on a granite outcrop the tree presents a quite windblown and twisted appearance. A single core sample taken from the tree dated its germination to the year 1091 when observed in 1992. A dead trunk found in New Mexico was found to have 1,888 rings; other trees in the same area are suspected to exceed 2,000 years. The more typical longevity of individual trees is from 250 to 300 years of age.
The largest tree of this species is one in Logan Canyon, Cache National Forest, Utah. It was last reliably measured in 2014 as tall with its limbs spreading over . However, this tree is, as of 2016, reported to no longer be in good health.
Uses
The primary use of Rocky Mountain juniper is as an ornamental tree in landscaping. It is also used for firewood, as a herb, and for its rot-resistant wood.
Cultivation
Rocky Mountain Juniper is quite frequently used in gardens when a moderate to small-sized tree is needed for a location with medium moisture (mesic) to dry soil and low soil productivity. The tree is sometimes planted as a windbreak in the west and on the plains. It is also a moderately popular subject of bonsai cultivation in the United States. There are over 100 named cultivars of the species in the plant trade.
'Blue arrow' is a cultivar with a narrow and erect (fastigiate) growth habit. At full growth it will be 3.6 to 4.5 meters tall and just 60 centimeters wide. It has a blue-gray cast, but is not as blue as the variety usually called 'skyrocket'. It is a recipient of the Royal Horticultural Society's Award of Garden Merit.
'Blue heaven' is another of the many fastigiate type cultivars. It has the typical blue-white cast to its foliage in summer, but it is more green colored in winter months. Size when full grown will be 4–5 meters tall and 90–120 centimeters in width. Like most varieties derived from Rocky Mountain juniper it is intolerant of hot, humid weather and constantly wet conditions and will usually succumb to root rots in muggy climates.
'Skyrocket' is a very frequently mentioned cultivar. It is a very popular ornamental plant in gardens, grown for its very slender, strictly erect growth habit. It is also sometimes listed as Juniperus virginiana 'Skyrocket' due to debate over the classification of the wild individual that is the parent of this cultivar. It was first introduced in 1949 under the name 'Pilaris 1' by Schuel Nursery in South Bend, Indiana. This cultivar is listed by Ohio State University Extension as being resistant, but not immune, to cedar-apple rust.
The cultivar 'Wichita Blue' is an all-male selection of the species. It has a conical shape, blue-green foliage, and grows slowly. It has the same winter hardiness as the species.
Like most junipers, Rocky Mountain Juniper can be infected by a number of fungi. Cedar-apple rust (Gymnosporangium juniperi-virginianae) produces hard stem galls in winter of up to 5 centimeters in width on susceptible junipers. These are not seriously harmful to the juniper host, but in the spring the galls produce soft, gummy horns that release spores to infect apples and related plants in the rose family, where it is a much more serious disease. For this reason it is frequently recommended to not plant junipers near desirable apple trees to reduce the spread of the disease. Rocky Mountain junipers are also susceptible to hawthorn rust (Gymnosporangium globosum), quince rust (Gymnosporangium clavipes), and juniper broom rust (Gymnosporangium nidus-avis). Treatment is only to trim out infection to improve the appearance of the tree, as the infection is not threatening to the health of junipers. In Europe it is attacked by the juniper webber moth, Dichomeris marginella. The USDA plant hardiness zone range for the species is zone 3 to zone 7.
Wood
The wood of Rocky Mountain Juniper is quite rot-resistant when cured, and prior to the widespread adoption of the steel fence post the trees were often harvested to build fences in the American west. The wood is lighter in weight and not as hard as that of the Eastern red cedar. In strength, color, and appearance the two are difficult to tell apart. The outer sapwood is light-colored while the inner heartwood is deep red with occasional streaks of white or purple. Due to the usually small size of their trunks they are not much utilized as timber except for making specialty products like "cedar" linings for closets or chests to repel moths. As a fuel wood it is only of fair quality. It has an excellent smell when burning, but produces poor coals, lots of sparks, and is moderately difficult to split.
Traditional uses
Some Plateau Indian tribes boiled an infusion from the leaves and inner bark to treat coughs and fevers. The cones were also sometimes boiled into a drink used as a laxative and to treat colds. Among many Native American cultures, the smoke of the burning juniper is used to drive away evil spirits prior to conducting a ceremony, such as a healing ceremony.
A small quantity of ripe berries can be eaten as an emergency food or as a sage-like seasoning for meat. The dried berries can be roasted and ground into a coffee substitute.
| Biology and health sciences | Cupressaceae | Plants |
3055080 | https://en.wikipedia.org/wiki/Ragamuffin%20cat | Ragamuffin cat | The Ragamuffin is a breed of domestic cat. It was once considered to be a variant of the Ragdoll cat but was established as a separate breed in 1994. Ragamuffins are notable for their friendly personalities and thick fur.
General description
The physical traits of the breed include a rectangular, broad-chested body with shoulders supporting a short neck. These cats are classified as having heavy bones and a "substantial" body type.
The head is a broad, modified wedge with a moderately rounded forehead, a short or medium-short muzzle, and an obvious nose dip. The muzzle is wide with puffy whisker pads. The body should appear rectangular with a broad chest and broad shoulders and moderately heavy muscling in the hindquarters, with the hindquarters being equally broad as the shoulders. A tendency toward a fatty pad in the lower abdomen is expected.
Fur length is to be slightly longer around the neck and outer edges of the face, resulting in the appearance of a ruff. Texture is to be soft, dense and silky. Ragamuffin kittens are usually born white and develop a color pattern as they mature. Every color and pattern is allowable, with or without white. Their coats can be solid color, stripes, spots or patches of white, black, blue, red, cream, chocolate, lilac, cinnamon, seal brown or mixed colors. Their eyes can be any solid color, with some exhibiting heterochromia.
History
The IRCA Cherubim Cats developed from 1971–1994 (23 years) were used as the foundation cats for the Ragamuffin breed and included the IRCA Miracle Ragdolls, Ragdolls, Honey Bears, and Maxamillion lines.
In contrast, their cousin the Ragdoll breed was founded with only the IRCA Ragdoll lines developed from 1971–1975 (4 years).
Currently, acceptable outcrossings are as follows:
ACFA (Siberian),
CFA (Long Haired Selkirk Rex, Straight),
GCCF (British Longhair).
Ragamuffin background
Ragamuffins originated from the Ragdoll breed, which was developed in the 1960s by Ann Baker in California. Originally one of the characteristics of the Ragdoll was its tendency to go limp and relaxed when handled, and it was from this trait that it got its name. A group of Ragdoll breeders aspired to create a breed that would keep the positive features of the Ragdoll while allowing for more breeding freedom, and the Ragamuffin was created in an effort to achieve this purpose. Thus the Ragamuffin breed came into existence, combining the Ragdoll's gentle nature with a wide range of colors and patterns. Today these cats bring charm and love to households worldwide.
In 1975, after a group of IRCA Ragdoll breeders left, Baker decided to spurn traditional cat breeding associations. She trademarked the names "Ragdoll" and "Cherubim" and set up her own registry, the International Ragdoll Cat Association (IRCA). Baker imposed stringent standards on anyone who wanted to breed or sell cats under those names. IRCA Ragdolls were also not allowed to be registered in other breed associations.
Breed divergence
In 1994, a group of IRCA breeders decided to leave and form their own group because of the increasing restrictions. Owing to Baker's trademark on the name Ragdoll and Cherubim, the group renamed its stock of IRCA Cherubim Cats Ragamuffins. While the originally proposed name was Liebling, the name Ragamuffin was put forth as an alternative by Curt Gehm, one of the group's founders, and it was chosen.
In the spirit of improving the breed's genetic health, personality, and temperament, the group selectively allowed a limited amount of outcross to Domestic Longhair cats that appeared to already fit the Standard of Perfection established in ACFA. Later, once the Domestic Longhair Cat allowance expired, outcrosses allowed historically include Persians. The group also allowed some limited outcrossing to IRCA Ragdolls initially (ended in 2010 for ACFA-recognized Ragamuffins). Only cats with at least one Ragamuffin parent and an accepted outcross in ACFA/CFA/GCCF currently qualify to be called Authentic Ragamuffins. (Cat Fanciers' Association, American Cat Fanciers Association, Governing Council of the Cat Fancy.)
The first cat association to accept the breed at full show champion status was the United Feline Organization (UFO), and shortly afterward that same year it was accepted into the American Cat Fanciers Association (ACFA). Finally, the Cat Fanciers' Association (CFA) accepted them into the Miscellaneous class in February 2003 and advanced them to Championship class in February 2011.
The most obvious difference between typical Ragamuffins and Ragdolls is the required point coloration in Ragdolls, whereas the Ragamuffin is allowed any color and pattern. The Standard of Perfection describes the Ragamuffin as requiring a 'sweet' overall expression with large, walnut-shaped eyes that are rounded with a pinch at the corner, versus the Ragdoll's thinner, slightly angled almond-shaped eyes. Adding to the sweet expression, Ragamuffins have rounded contours between the ears and a nose scoop, versus the Ragdoll, which calls for flat planes. Ragamuffins call for a flatter topline and Ragdolls call for a more angular topline with raised hindquarters. Ragamuffin coats are to be plush in texture, and the Ragdoll allows for both silky and plush coats.
Color forms
Ragamuffins come in all patterns and colors, although colorpoints are permitted to be registered and bred they are not allowed to be shown in CFA or GCCF. Their eyes can be any solid color, with some exhibiting heterochromia.
| Biology and health sciences | Cats | Animals |
17384301 | https://en.wikipedia.org/wiki/Liver | Liver | The liver is a major metabolic organ exclusively found in vertebrate animals, which performs many essential biological functions such as detoxification of the organism, and the synthesis of proteins and various other biochemicals necessary for digestion and growth. In humans, it is located in the right upper quadrant of the abdomen, below the diaphragm and mostly shielded by the lower right rib cage. Its other metabolic roles include carbohydrate metabolism, the production of hormones, conversion and storage of nutrients such as glucose and glycogen, and the decomposition of red blood cells.
The liver is also an accessory digestive organ that produces bile, an alkaline fluid containing cholesterol and bile acids, which emulsifies and aids the breakdown of dietary fat. The gallbladder, a small hollow pouch that sits just under the right lobe of liver, stores and concentrates the bile produced by the liver, which is later excreted to the duodenum to help with digestion. The liver's highly specialized tissue, consisting mostly of hepatocytes, regulates a wide variety of high-volume biochemical reactions, including the synthesis and breakdown of small and complex organic molecules, many of which are necessary for normal vital functions. Estimates regarding the organ's total number of functions vary, but it is generally cited as being around 500. For this reason, the liver has sometimes been described as the body's chemical factory.
It is not known how to compensate for the absence of liver function in the long term, although liver dialysis techniques can be used in the short term. Artificial livers have not been developed to promote long-term replacement in the absence of the liver. Liver transplantation is the only option for complete liver failure.
Structure
The liver is a dark reddish brown, wedge-shaped organ with two lobes of unequal size and shape. A human liver normally weighs approximately and has a width of about . There is considerable size variation between individuals, with the standard reference range for men being and for women . It is both the heaviest internal organ and the largest gland in the human body. It is located in the right upper quadrant of the abdominal cavity, resting just below the diaphragm, to the right of the stomach, and overlying the gallbladder.
The liver is connected to two large blood vessels: the hepatic artery and the portal vein. The hepatic artery carries oxygen-rich blood from the aorta via the celiac trunk, whereas the portal vein carries blood rich in digested nutrients from the entire gastrointestinal tract and also from the spleen and pancreas. These blood vessels subdivide into small capillaries known as liver sinusoids, which then lead to hepatic lobules.
Hepatic lobules are the functional units of the liver. Each lobule is made up of millions of hepatic cells (hepatocytes), which are the basic metabolic cells. The lobules are held together by a fine, dense, irregular, fibroelastic connective tissue layer extending from the fibrous capsule covering the entire liver known as Glisson's capsule after British doctor Francis Glisson. This tissue extends into the structure of the liver by accompanying the blood vessels, ducts, and nerves at the hepatic hilum. The whole surface of the liver, except for the bare area, is covered in a serous coat derived from the peritoneum, and this firmly adheres to the inner Glisson's capsule.
Gross anatomy
Terminology related to the liver often begins with hepat-, from ἡπατο-, the Greek word for liver.
Lobes
The liver is grossly divided into two parts when viewed from above – a right and a left lobe – and four parts when viewed from below (left, right, caudate, and quadrate lobes).
The falciform ligament makes a superficial division of the liver into a left and right lobe. From below, the two additional lobes are located between the right and left lobes, one in front of the other. A line can be imagined running from the left of the vena cava and all the way forward to divide the liver and gallbladder into two halves. This line is called Cantlie's line.
Other anatomical landmarks include the ligamentum venosum and the round ligament of the liver, which further divide the left side of the liver in two sections. An important anatomical landmark, the porta hepatis, divides this left portion into four segments, which can be numbered starting at the caudate lobe as I in an anticlockwise manner. From this parietal view, seven segments can be seen, because the eighth segment is only visible in the visceral view.
Surfaces
On the diaphragmatic surface, apart from a triangular bare area where it connects to the diaphragm, the liver is covered by a thin, double-layered membrane, the peritoneum, that helps to reduce friction against other organs. This surface covers the convex shape of the two lobes where it accommodates the shape of the diaphragm. The peritoneum folds back on itself to form the falciform ligament and the right and left triangular ligaments.
These peritoneal ligaments are not related to the anatomic ligaments in joints, and the right and left triangular ligaments have no known functional importance, though they serve as surface landmarks. The falciform ligament functions to attach the liver to the posterior portion of the anterior body wall.
The visceral surface or inferior surface is uneven and concave. It is covered in peritoneum apart from where it attaches to the gallbladder and the porta hepatis. The fossa of the gallbladder lies to the right of the quadrate lobe, occupied by the gallbladder with its cystic duct close to the right end of the porta hepatis.
Impressions
Several impressions on the surface of the liver accommodate the various adjacent structures and organs. Underneath the right lobe and to the right of the gallbladder fossa are two impressions, one behind the other and separated by a ridge. The one in front is a shallow colic impression, formed by the hepatic flexure and the one behind is a deeper renal impression accommodating part of the right kidney and part of the suprarenal gland.
The suprarenal impression is a small, triangular, depressed area on the liver. It is located close to the right of the fossa, between the bare area and the caudate lobe, and immediately above the renal impression. The greater part of the suprarenal impression is devoid of peritoneum and it lodges the right suprarenal gland.
Medial to the renal impression is a third and slightly marked impression, lying between it and the neck of the gall bladder. This is caused by the descending portion of the duodenum, and is known as the duodenal impression.
The inferior surface of the left lobe of the liver presents, behind and to the left, the gastric impression. This is moulded over the upper front surface of the stomach; to the right of it is a rounded eminence, the tuber omentale, which fits into the concavity of the lesser curvature of the stomach and lies in front of the anterior layer of the lesser omentum.
Microscopic anatomy
Microscopically, each liver lobe is seen to be made up of hepatic lobules. The lobules are roughly hexagonal, and consist of plates of hepatocytes, and sinusoids radiating from a central vein towards an imaginary perimeter of interlobular portal triads. The central vein joins to the hepatic vein to carry blood out from the liver. A distinctive component of a lobule is the portal triad, which can be found running along each of the lobule's corners. The portal triad consists of the hepatic artery, the portal vein, and the common bile duct. The triad may be seen on a liver ultrasound, as a Mickey Mouse sign with the portal vein as the head, and the hepatic artery, and the common bile duct as the ears.
Histology, the study of microscopic anatomy, shows two major types of liver cell: parenchymal cells and nonparenchymal cells. About 70–85% of the liver volume is occupied by parenchymal hepatocytes. Nonparenchymal cells constitute 40% of the total number of liver cells but only 6.5% of its volume. The liver sinusoids are lined with two types of cell, sinusoidal endothelial cells, and phagocytic Kupffer cells. Hepatic stellate cells are nonparenchymal cells found in the perisinusoidal space, between a sinusoid and a hepatocyte.
Additionally, intrahepatic lymphocytes are often present in the sinusoidal lumen.
Functional anatomy
The central area or hepatic hilum, includes the opening known as the porta hepatis which carries the common bile duct and common hepatic artery, and the opening for the portal vein. The duct, vein, and artery divide into left and right branches, and the areas of the liver supplied by these branches constitute the functional left and right lobes. The functional lobes are separated by the imaginary plane, Cantlie's line, joining the gallbladder fossa to the inferior vena cava. The plane separates the liver into the true right and left lobes. The middle hepatic vein also demarcates the true right and left lobes. The right lobe is further divided into an anterior and posterior segment by the right hepatic vein. The left lobe is divided into the medial and lateral segments by the left hepatic vein.
The hilum of the liver is described in terms of three plates that contain the bile ducts and blood vessels. The contents of the whole plate system are surrounded by a sheath. The three plates are the hilar plate, the cystic plate and the umbilical plate and the plate system is the site of the many anatomical variations to be found in the liver.
Couinaud classification system
In the widely used Couinaud system, the functional lobes are further divided into a total of eight subsegments based on a transverse plane through the bifurcation of the main portal vein. The caudate lobe is a separate structure that receives blood flow from both the right- and left-sided vascular branches. The Couinaud classification divides the liver into eight functionally independent liver segments. Each segment has its own vascular inflow, outflow and biliary drainage. In the centre of each segment are branches of the portal vein, hepatic artery, and bile duct. In the periphery of each segment is vascular outflow through the hepatic veins. The classification system uses the vascular supply in the liver to separate the functional units (numbered I to VIII), with unit I, the caudate lobe, receiving its supply from both the right and the left branches of the portal vein. It contains one or more hepatic veins which drain directly into the inferior vena cava. The remainder of the units (II to VIII) are numbered in a clockwise fashion: segments II and III make up the left lateral section, segment IV the left medial section, segments V and VIII the right anterior section, and segments VI and VII the right posterior section.
Gene and protein expression
About 20,000 protein coding genes are expressed in human cells and 60% of these genes are expressed in a normal, adult liver. Over 400 genes are more specifically expressed in the liver, with some 150 genes highly specific for liver tissue. A large fraction of the corresponding liver-specific proteins are mainly expressed in hepatocytes and secreted into the blood and constitute plasma proteins and hepatokines. Other liver-specific proteins are certain liver enzymes such as HAO1 and RDH16, proteins involved in bile synthesis such as BAAT and SLC27A5, and transporter proteins involved in the metabolism of drugs, such as ABCB11 and SLC2A2. Examples of highly liver-specific proteins include apolipoprotein A II, coagulation factors F2 and F9, complement factor related proteins, and the fibrinogen beta chain protein.
Development
Organogenesis, the development of the organs, takes place from the third to the eighth week during embryogenesis. The origins of the liver lie in both the ventral portion of the foregut endoderm (endoderm being one of the three embryonic germ layers) and the constituents of the adjacent septum transversum mesenchyme. In the human embryo, the hepatic diverticulum is the tube of endoderm that extends out from the foregut into the surrounding mesenchyme. The mesenchyme of septum transversum induces this endoderm to proliferate, to branch, and to form the glandular epithelium of the liver. A portion of the hepatic diverticulum (that region closest to the digestive tube) continues to function as the drainage duct of the liver, and a branch from this duct produces the gallbladder. Besides signals from the septum transversum mesenchyme, fibroblast growth factor from the developing heart also contributes to hepatic competence, along with retinoic acid emanating from the lateral plate mesoderm. The hepatic endodermal cells undergo a morphological transition from columnar to pseudostratified resulting in thickening into the early liver bud. Their expansion forms a population of the bipotential hepatoblasts. Hepatic stellate cells are derived from mesenchyme.
After migration of hepatoblasts into the septum transversum mesenchyme, the hepatic architecture begins to be established, with liver sinusoids and bile canaliculi appearing. The liver bud separates into the lobes. The left umbilical vein becomes the ductus venosus and the right vitelline vein becomes the portal vein. The expanding liver bud is colonized by hematopoietic cells. The bipotential hepatoblasts begin differentiating into biliary epithelial cells and hepatocytes. The biliary epithelial cells differentiate from hepatoblasts around portal veins, first producing a monolayer, and then a bilayer of cuboidal cells. In the ductal plate, focal dilations emerge at points in the bilayer, become surrounded by portal mesenchyme, and undergo tubulogenesis into intrahepatic bile ducts. Hepatoblasts not adjacent to portal veins instead differentiate into hepatocytes and arrange into cords lined by sinusoidal epithelial cells and bile canaliculi. Once hepatoblasts are specified into hepatocytes and undergo further expansion, they begin acquiring the functions of a mature hepatocyte, and eventually mature hepatocytes appear as highly polarized epithelial cells with abundant glycogen accumulation. In the adult liver, hepatocytes are not equivalent: position along the portocentrovenular axis within a liver lobule dictates the expression of metabolic genes involved in drug metabolism, carbohydrate metabolism, ammonia detoxification, and bile production and secretion. WNT/β-catenin has been identified as playing a key role in this phenomenon.
At birth, the liver comprises roughly 4% of body weight and weighs on average about 120 g (4.2 oz). Over the course of further development, it will increase to 1.4–1.6 kg (3.1–3.5 lb) but will only take up 2.5–3.5% of body weight.
Hepatosomatic index (HSI) is the ratio of liver weight to body weight.
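As a worked illustration (a minimal sketch: the 3 kg newborn body weight used below is an assumed illustrative figure, not given in the text), the index can be written as

\text{HSI} = \frac{W_{\text{liver}}}{W_{\text{body}}} \times 100\%

so a newborn liver of about 120 g in a 3 kg infant gives \text{HSI} \approx \frac{120}{3000} \times 100\% = 4\%, consistent with the roughly 4% of body weight quoted above for the liver at birth.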
Fetal blood supply
In the growing fetus, a major source of blood to the liver is the umbilical vein, which supplies nutrients from the placenta. The umbilical vein enters the abdomen at the umbilicus and passes upward along the free margin of the falciform ligament of the liver to the inferior surface of the liver. There, it joins with the left branch of the portal vein. The ductus venosus carries blood from the left portal vein to the left hepatic vein and then to the inferior vena cava, allowing placental blood to bypass the liver. In the fetus, the liver does not perform the normal digestive processes and filtration of the infant liver because nutrients are received directly from the mother via the placenta. The fetal liver releases some blood stem cells that migrate to the fetal thymus, creating the T cells, or T lymphocytes. After birth, the formation of blood stem cells shifts to the red bone marrow. After 2–5 days, the umbilical vein and ductus venosus are obliterated; the former becomes the round ligament of the liver and the latter becomes the ligamentum venosum. In the disorders of cirrhosis and portal hypertension, the umbilical vein can open up again.
Unlike eutherian mammals, in marsupials the liver remains haematopoietic well after birth.
Functions
The various functions of the liver are carried out by the liver cells or hepatocytes. The liver is thought to be responsible for up to 500 separate functions, usually in combination with other systems and organs. Currently, no artificial organ or device is capable of reproducing all the functions of the liver. Some functions can be carried out by liver dialysis, an experimental treatment for liver failure. The liver also accounts for about 20% of resting total body oxygen consumption.
Blood supply
The liver receives a dual blood supply from the hepatic portal vein and hepatic arteries. The hepatic portal vein delivers around 75% of the liver's blood supply and carries venous blood drained from the spleen, gastrointestinal tract, and its associated organs. The hepatic arteries supply arterial blood to the liver, accounting for the remaining quarter of its blood flow. Oxygen is provided from both sources; about half of the liver's oxygen demand is met by the hepatic portal vein, and half is met by the hepatic arteries. The hepatic artery also has both alpha- and beta-adrenergic receptors; therefore, flow through the artery is controlled, in part, by the splanchnic nerves of the autonomic nervous system.
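To put these proportions into rough figures (a hedged illustration: the total hepatic blood flow of about 1.5 L/min assumed here is a typical textbook value, not stated in the text above):

Q_{\text{portal}} \approx 0.75 \times 1.5\ \text{L/min} \approx 1.1\ \text{L/min}, \qquad Q_{\text{arterial}} \approx 0.25 \times 1.5\ \text{L/min} \approx 0.4\ \text{L/min}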
Blood flows through the liver sinusoids and empties into the central vein of each lobule. The central veins coalesce into hepatic veins, which leave the liver and drain into the inferior vena cava.
Biliary flow
The biliary tract is derived from the branches of the bile ducts. The biliary tract, also known as the biliary tree, is the path by which bile is secreted by the liver then transported to the first part of the small intestine, the duodenum. The bile produced in the liver is collected in bile canaliculi, small grooves between the faces of adjacent hepatocytes. The canaliculi radiate to the edge of the liver lobule, where they merge to form bile ducts. Within the liver, these ducts are termed intrahepatic bile ducts, and once they exit the liver, they are considered extrahepatic. The intrahepatic ducts eventually drain into the right and left hepatic ducts, which exit the liver at the transverse fissure, and merge to form the common hepatic duct. The cystic duct from the gallbladder joins with the common hepatic duct to form the common bile duct. The biliary system and connective tissue are supplied by the hepatic artery alone.
Bile either drains directly into the duodenum via the common bile duct, or is temporarily stored in the gallbladder via the cystic duct. The common bile duct and the pancreatic duct enter the second part of the duodenum together at the hepatopancreatic ampulla, also known as the ampulla of Vater.
Metabolism
The liver plays a major role in carbohydrate, protein, amino acid, and lipid metabolism.
Carbohydrate metabolism
The liver performs several roles in carbohydrate metabolism.
The liver synthesizes and stores around 100 g of glycogen via glycogenesis, the formation of glycogen from glucose (a rough energy estimate is given after these points).
When needed, the liver releases glucose into the blood by performing glycogenolysis, the breakdown of glycogen into glucose.
The liver is also responsible for gluconeogenesis, which is the synthesis of glucose from certain amino acids, lactate, or glycerol. Adipose and liver cells produce glycerol by breakdown of fat, which the liver uses for gluconeogenesis.
The liver also performs glyconeogenesis, the synthesis of glycogen from lactic acid.
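As a rough estimate (assuming the standard value of about 4 kcal per gram of carbohydrate, a figure not given in the text above), the liver's glycogen store corresponds to

100\ \text{g} \times 4\ \text{kcal/g} \approx 400\ \text{kcal}

of readily mobilisable energy.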
Protein metabolism
The liver is responsible for much of protein metabolism, both synthesis and degradation. All plasma proteins except gamma globulins are synthesised in the liver. It is also responsible for a large part of amino acid synthesis. The liver plays a role in the production of clotting factors, as well as in red blood cell production. Some of the proteins synthesized by the liver include coagulation factors I (fibrinogen), II (prothrombin), V, VII, VIII, IX, X, XI, XII, and XIII, as well as protein C, protein S, and antithrombin. The liver is a major site of production of thrombopoietin, a glycoprotein hormone that regulates the production of platelets by the bone marrow.
Lipid metabolism
The liver plays several roles in lipid metabolism: it performs cholesterol synthesis, lipogenesis, and the production of triglycerides, and the bulk of the body's lipoproteins are synthesized in the liver. The liver plays a key role in digestion, as it produces and excretes bile (a yellowish liquid) required for emulsifying fats and helping the absorption of vitamin K from the diet. Some of the bile drains directly into the duodenum, and some is stored in the gallbladder. The liver also produces insulin-like growth factor 1, a polypeptide protein hormone that plays an important role in childhood growth and continues to have anabolic effects in adults.
Breakdown
The liver is responsible for the breakdown of insulin and other hormones. The liver breaks down bilirubin via glucuronidation, facilitating its excretion into bile.
The liver is responsible for the breakdown and excretion of many waste products. It plays a key role in breaking down or modifying toxic substances (e.g., by methylation) and most medicinal products, in a process called drug metabolism. This sometimes results in toxication, when the metabolite is more toxic than its precursor. Where possible, toxins are conjugated to facilitate their excretion in bile or urine. The liver converts ammonia into urea as part of the ornithine cycle, or urea cycle, and the urea is excreted in the urine.
Blood reservoir
Because the liver is an expandable organ, large quantities of blood can be stored in its blood vessels. Its normal blood volume, including both that in the hepatic veins and that in the hepatic sinuses, is about 450 milliliters, or almost 10 percent of the body's total blood volume. When high pressure in the right atrium causes backpressure in the liver, the liver expands, and 0.5 to 1 liter of extra blood is occasionally stored in the hepatic veins and sinuses. This occurs especially in cardiac failure with peripheral congestion. Thus, in effect, the liver is a large, expandable, venous organ capable of acting as a valuable blood reservoir in times of excess blood volume and capable of supplying extra blood in times of diminished blood volume.
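The figures quoted above imply, by simple arithmetic, a total blood volume of about

V_{\text{total}} \approx \frac{450\ \text{ml}}{0.10} = 4500\ \text{ml} \approx 4.5\ \text{L}

and, with the additional 0.5 to 1 litre stored under backpressure, the liver's content can rise to roughly 1 to 1.5 litres of blood.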
Lymph production
Because the pores in the hepatic sinusoids are very permeable and allow ready passage of both fluid and proteins into the perisinusoidal space, the lymph draining from the liver usually has a protein concentration of about 6 g/dl, which is only slightly less than the protein concentration of plasma.
Also, the high permeability of the liver sinusoid epithelium allows large quantities of lymph to form. Therefore, about half of all the lymph formed in the body under resting conditions arises in the liver.
Other
The liver stores a multitude of substances, including vitamin A (1–2 years' supply), vitamin D (1–4 months' supply), vitamin B12 (3–5 years' supply), vitamin K, vitamin E, iron, copper, zinc, cobalt, molybdenum, etc.
Haemopoiesis - the formation of blood cells. In the embryonic stage, red and white blood cells are formed by the liver. In the first-trimester fetus, the liver is the main site of red blood cell production. By the 32nd week of gestation, the bone marrow has almost completely taken over that task.
The liver helps in the purification of blood. The Kupffer cells of the liver are phagocytic cells that help remove dead blood cells and bacteria from the blood.
The liver is responsible for immunological effects – the mononuclear phagocyte system of the liver contains many immunologically active cells, acting as a 'sieve' for antigens carried to it via the portal system.
The liver produces albumin, the most abundant protein in blood serum. It is essential in the maintenance of oncotic pressure, and acts as a transport for fatty acids and steroid hormones.
The liver synthesizes angiotensinogen, a hormone that is responsible for raising the blood pressure when activated by renin, an enzyme that is released when the kidney senses low blood pressure.
The liver produces the enzyme catalase to break down hydrogen peroxide, a toxic oxidising agent, into water and oxygen.
Clinical significance
Disease
The liver is a vital organ and supports almost every other organ in the body. Because of its strategic location and multidimensional functions, the liver is prone to many diseases. The bare area of the liver is a site that is vulnerable to the passing of infection from the abdominal cavity to the thoracic cavity. Liver diseases may be diagnosed by liver function tests–blood tests that can identify various markers. For example, acute-phase reactants are produced by the liver in response to injury or inflammation.
The most common chronic liver disease is nonalcoholic fatty liver disease, which affects an estimated one-third of the world population.
Hepatitis is a common condition of inflammation of the liver. The most usual cause of this is viral, and the most common of these infections are hepatitis A, B, C, D, and E. Some of these infections are sexually transmitted. Inflammation can also be caused by other viruses in the family Herpesviridae such as the herpes simplex virus. Chronic (rather than acute) infection with hepatitis B virus or hepatitis C virus is the main cause of liver cancer. Globally, about 248 million individuals are chronically infected with hepatitis B (with 843,724 in the U.S.), and 142 million are chronically infected with hepatitis C (with 2.7 million in the U.S.). Globally there are about 114 million and 20 million cases of hepatitis A and hepatitis E respectively, but these generally resolve and do not become chronic. Hepatitis D virus is a "satellite" of hepatitis B virus (it can only infect in the presence of hepatitis B), and co-infects nearly 20 million people with hepatitis B, globally.
Hepatic encephalopathy is caused by an accumulation of toxins in the bloodstream that are normally removed by the liver. This condition can result in coma and can prove fatal. Budd–Chiari syndrome is a condition caused by blockage of the hepatic veins (including thrombosis) that drain the liver. It presents with the classical triad of abdominal pain, ascites and liver enlargement. Many diseases of the liver are accompanied by jaundice caused by increased levels of bilirubin in the system. The bilirubin results from the breakup of the hemoglobin of dead red blood cells; normally, the liver removes bilirubin from the blood and excretes it through bile.
Other disorders caused by excessive alcohol consumption are grouped under alcoholic liver diseases and these include alcoholic hepatitis, fatty liver, and cirrhosis. Factors contributing to the development of alcoholic liver diseases are not only the quantity and frequency of alcohol consumption, but can also include gender, genetics, and liver insult. Liver damage can also be caused by drugs, particularly paracetamol and drugs used to treat cancer. A rupture of the liver can be caused by a liver shot used in combat sports.
Primary biliary cholangitis is an autoimmune disease of the liver. It is marked by slow progressive destruction of the small bile ducts of the liver, with the intralobular ducts (canals of Hering) affected early in the disease. When these ducts are damaged, bile and other toxins build up in the liver (cholestasis) and over time damage the liver tissue, in combination with ongoing immune-related damage. This can lead to scarring (fibrosis) and cirrhosis. Cirrhosis increases the resistance to blood flow in the liver, and can result in portal hypertension. Congested anastomoses between the portal venous system and the systemic circulation can be a subsequent condition.
There are also many pediatric liver diseases, including biliary atresia, alpha-1 antitrypsin deficiency, Alagille syndrome, progressive familial intrahepatic cholestasis, Langerhans cell histiocytosis, and hepatic hemangioma, a benign tumour that is the most common type of liver tumour and is thought to be congenital. A genetic disorder causing multiple cysts to form in the liver tissue, usually in later life and usually asymptomatic, is polycystic liver disease. Diseases that interfere with liver function lead to derangement of these processes. However, the liver has a great capacity to regenerate and has a large reserve capacity. In most cases, the liver only produces symptoms after extensive damage.
Hepatomegaly refers to an enlarged liver and can be due to many causes. It can be palpated in a liver span measurement.
Consuming caffeine regularly may help safeguard individuals from liver cirrhosis. Additionally, it has been shown to slow the advancement of liver disease in those already affected, lower the risk of liver fibrosis, and provide a protective benefit against liver cancer for moderate coffee drinkers. A 2017 study revealed that the positive effects of caffeine on the liver were evident regardless of the coffee preparation method.
Symptoms
The classic symptoms of liver damage include the following:
Pale stools occur when stercobilin, a brown pigment, is absent from the stool. Stercobilin is derived from bilirubin metabolites produced in the liver.
Dark urine occurs when bilirubin mixes with urine.
Jaundice (yellowing of the skin and/or the whites of the eyes) occurs when bilirubin deposits in the skin, causing an intense itch. Itching is the most common complaint of people who have liver failure. Often this itch cannot be relieved by drugs.
Swelling of the abdomen, and swelling of the ankles and feet occurs because the liver fails to make albumin.
Excessive fatigue occurs from a generalized loss of nutrients, minerals and vitamins.
Bruising and easy bleeding are other features of liver disease. The liver makes clotting factors, substances which help prevent bleeding. When liver damage occurs, these factors are no longer present and severe bleeding can occur.
Pain in the upper right quadrant can result from the stretching of Glisson's capsule in conditions of hepatitis and pre-eclampsia.
Diagnosis
The diagnosis of liver disease is made by liver function tests, groups of blood tests, that can readily show the extent of liver damage. If infection is suspected, then other serological tests will be carried out. A physical examination of the liver can only reveal its size and any tenderness, and some form of imaging such as an ultrasound or CT scan may also be needed.
Sometimes a liver biopsy will be necessary, and a tissue sample is taken through a needle inserted into the skin just below the rib cage. This procedure may be helped by a sonographer providing ultrasound guidance to an interventional radiologist.
Liver regeneration
The liver is the only human internal organ capable of natural regeneration of lost tissue; as little as 25% of a liver can regenerate into a whole liver. This is, however, not true regeneration but rather compensatory growth in mammals. The lobes that are removed do not regrow, and the growth of the liver is a restoration of function, not of original form. This contrasts with true regeneration, where both original function and form are restored. In some other species, such as the zebrafish, the liver undergoes true regeneration, restoring both the shape and size of the organ. During regeneration, large areas of new tissue are formed; because the formation of new cells requires a sufficient supply of material, the circulation of blood in the liver becomes more active.
This is predominantly due to the hepatocytes re-entering the cell cycle. That is, the hepatocytes go from the quiescent G0 phase to the G1 phase and undergo mitosis. This process is activated by the p75 receptors. There is also some evidence of bipotential stem cells, called hepatic oval cells or ovalocytes (not to be confused with oval red blood cells of ovalocytosis), which are thought to reside in the canals of Hering. These cells can differentiate into either hepatocytes or cholangiocytes. Cholangiocytes are the epithelial lining cells of the bile ducts. They are cuboidal epithelium in the small interlobular bile ducts, but become columnar and mucus secreting in larger bile ducts approaching the porta hepatis and the extrahepatic ducts. Research is being carried out on the use of stem cells for the generation of an artificial liver.
Scientific and medical works about liver regeneration often refer to the Greek Titan Prometheus who was chained to a rock in the Caucasus where, each day, his liver was devoured by an eagle, only to grow back each night. The myth suggests the ancient Greeks may have known about the liver's remarkable capacity for self-repair.
Liver transplantation
Human liver transplants were first performed by Thomas Starzl in the United States and Roy Calne in Cambridge, England in 1963 and 1967, respectively.
Liver transplantation is the only option for those with irreversible liver failure. Most transplants are done for chronic liver diseases leading to cirrhosis, such as chronic hepatitis C, alcoholism, and autoimmune hepatitis. Less commonly, liver transplantation is done for fulminant hepatic failure, in which liver failure occurs rapidly over a period of days or weeks.
Liver allografts for transplant usually come from donors who have died from fatal brain injury. Living donor liver transplantation is a technique in which a portion of a living person's liver is removed (hepatectomy) and used to replace the entire liver of the recipient. This was first performed in 1989 for pediatric liver transplantation. Only 20 percent of an adult's liver (Couinaud segments 2 and 3) is needed to serve as a liver allograft for an infant or small child.
More recently, adult-to-adult liver transplantation has been done using the donor's right hepatic lobe, which amounts to 60 percent of the liver. Due to the ability of the liver to regenerate, both the donor and recipient end up with normal liver function if all goes well. This procedure is more controversial, as it entails performing a much larger operation on the donor, and indeed there were at least two donor deaths out of the first several hundred cases. A 2006 publication addressed the problem of donor mortality and found at least fourteen cases. The risk of postoperative complications (and death) is far greater in right-sided operations than that in left-sided operations.
With the recent advances of noninvasive imaging, living liver donors usually have to undergo imaging examinations for liver anatomy to decide if the anatomy is feasible for donation. The evaluation is usually performed by multidetector row computed tomography (MDCT) and magnetic resonance imaging (MRI). MDCT is good in vascular anatomy and volumetry. MRI is used for biliary tree anatomy. Donors with very unusual vascular anatomy, which makes them unsuitable for donation, could be screened out to avoid unnecessary operations.
Society and culture
Some cultures regard the liver as the seat of the soul. In Greek mythology, the gods punished Prometheus for revealing fire to humans by chaining him to a rock where a vulture (or an eagle) would peck out his liver, which would regenerate overnight (the liver is the only human internal organ that actually can regenerate itself to a significant extent). Many ancient peoples of the Near East and Mediterranean areas practiced a type of divination called haruspicy or hepatomancy, where they tried to obtain information by examining the livers of sheep and other animals.
In Plato, and in later physiology, the liver was thought to be the seat of the darkest emotions (specifically wrath, jealousy and greed) which drive men to action. The Talmud (tractate Berakhot 61b) refers to the liver as the seat of anger, with the gallbladder counteracting this. The Persian, Urdu, and Hindi languages refer to the liver in figurative speech to indicate courage and strong feelings, or "their best"; e.g., "This Mecca has thrown to you the pieces of its liver!". A phrase meaning literally "the strength (power) of my liver" is a term of endearment in Urdu. In Persian slang, the word for liver is used as an adjective for any object which is desirable, especially women. In the Zulu language, the word for liver is the same as the word for courage. In English, the term 'lily-livered' is used to indicate cowardice, from the medieval belief that the liver was the seat of courage.
The Spanish word for liver, hígado, also means "courage".
However, the secondary meaning of the Basque word for liver is "indolence".
In biblical Hebrew, the word for liver, kaved (similar to the Arabic kabid), also means heavy, and is used to describe the rich ("heavy" with possessions) and honour (presumably for the same reason). In the Book of Lamentations (2:11) it is used to describe the physiological response to sadness, "my liver spilled to earth", along with the flow of tears and the overturning in bitterness of the intestines. On several occasions in the Book of Psalms (most notably 16:9), the word is used to describe happiness in the liver, along with the heart (which beats rapidly) and the flesh (which appears red under the skin). Further usage as the self (similar to "your honour") occurs widely throughout the Old Testament, sometimes compared to the breathing soul (Genesis 49:6, Psalms 7:6, etc.). An honourable hat was also referred to with this word (Job 19:9, etc.), and under that definition it appears many times alongside grandeur.
These four meanings were used in preceding ancient Afro-Asiatic languages such as Akkadian and Ancient Egyptian, and are preserved in the classical Ethiopic Ge'ez language.
Food
Humans commonly eat the livers of mammals, fowl, and fish as food. Domestic pig, ox, lamb, calf, chicken, and goose livers are widely available from butchers and supermarkets. In the Romance languages, the anatomical word for "liver" (French foie, Spanish hígado, etc.) derives not from the Latin anatomical term, jecur, but from the culinary term ficatum, literally "stuffed with figs", referring to the livers of geese that had been fattened on figs. Animal livers are rich in iron, vitamin A and vitamin B12; and cod liver oil is commonly used as a dietary supplement.
Liver can be baked, boiled, broiled, fried, stir-fried, or eaten raw (asbeh nayeh or sawda naye in Lebanese cuisine, or liver sashimi in Japanese cuisine). In many preparations, pieces of liver are combined with pieces of meat or kidneys, as in the various forms of Middle Eastern mixed grill (e.g. meurav Yerushalmi). Well-known examples include liver pâté, foie gras, chopped liver, and leverpastej. Liver sausages, such as Braunschweiger and liverwurst, are also a valued meal. Liver sausages may also be used as spreads. A traditional South African delicacy, skilpadjies, is made of minced lamb's liver wrapped in netvet (caul fat), and grilled over an open fire. Traditionally, some fish livers were valued as food, especially the stingray liver. It was used to prepare delicacies, such as poached skate liver on toast in England, as well as the beignets de foie de raie and foie de raie en croute in French cuisine.
Giraffe liver
The Humr, one of the tribes of the Baggara ethnic group native to southwestern Kordofan in Sudan, who speak Shuwa (Chadian Arabic), make a non-alcoholic drink from the liver and bone marrow of the giraffe, which they call umm nyolokh. They claim it is intoxicating (Arabic سكران sakran), causing dreams and even waking hallucinations. The anthropologist Ian Cunnison accompanied the Humr on one of their giraffe-hunting expeditions in the late 1950s, and noted that:
It is said that a person, once he has drunk umm nyolokh, will return to giraffe again and again. Humr, being Mahdists, are strict abstainers [from alcohol] and a Humrawi is never drunk (sakran) on liquor or beer. But he uses this word to describe the effects which umm nyolokh has upon him.
Cunnison's remarkable account of an apparently psychoactive mammal found its way from a somewhat obscure scientific paper into more mainstream literature through a conversation between W. James of the Institute of Social and Cultural Anthropology at the University of Oxford and R. Rudgley, a specialist on the use of hallucinogens and intoxicants in society, who discussed it in a book on psychoactive drugs for general readers. Rudgley speculated that the hallucinogenic compound N,N-dimethyltryptamine in the giraffe liver might account for the intoxicating properties claimed for umm nyolokh.
Cunnison, on the other hand, writing in 1958, found it hard to believe in the literal truth of the Humr's assertion that the drink was intoxicating:
I can only assume that there is no intoxicating substance in the drink, and that the effect it produces is simply a matter of convention, although it may be brought about subconsciously.
The study of entheogens in general – including entheogens of animal origin (e.g. hallucinogenic fish and toad venom) – has, however, made considerable progress in the sixty-odd years since Cunnison's report; the idea that some intoxicating substance might reside in giraffe livers may no longer be as far-fetched as it seemed to Cunnison. However, to date, proof (or disproof) still waits on detailed analyses of the organ and the beverage made from it.
Arrow/bullet poison
Certain Tungusic peoples of northeast Asia formerly prepared a type of arrow poison from rotting animal livers, which was, in later times, also applied to bullets. Russian anthropologist S. M. Shirokogoroff wrote that:
Formerly the using of poisoned arrows was common. For instance, among the Kumarčen, [a subgroup of the Oroqen] even in recent times, a poison was used which was prepared from decaying liver.
[Note] This has been confirmed by the Kumarčen. I am not competent to judge as to the chemical conditions of production of poison which is not destroyed by the heat of explosion. However, the Tungus themselves compare this method [of poisoning ammunition] with the poisoning of arrows.
Other animals
The liver is found in all vertebrates and is typically the largest internal organ. The internal structure of the liver is broadly similar in all vertebrates, though its form varies considerably in different species, and is largely determined by the shape and arrangement of the surrounding organs. Nonetheless, in most species, it is divided into right and left lobes; exceptions to this general rule include snakes, where the shape of the body necessitates a simple cigar-like form.
In neonatal marsupials, it is responsible for the production of blood cells.
An organ sometimes referred to as a liver is found associated with the digestive tract of the primitive chordate amphioxus. Although it performs many functions of a liver, it is not considered a "true" liver but rather a homolog of the vertebrate liver. The amphioxus hepatic caecum produces the liver-specific proteins vitellogenin, antithrombin, plasminogen, alanine aminotransferase, and insulin/insulin-like growth factor.
| Biology and health sciences | Biology | null |
13455478 | https://en.wikipedia.org/wiki/Animal%20coloration | Animal coloration | Animal colouration is the general appearance of an animal resulting from the reflection or emission of light from its surfaces. Some animals are brightly coloured, while others are hard to see. In some species, such as the peafowl, the male has strong patterns, conspicuous colours and is iridescent, while the female is far less visible.
There are several separate reasons why animals have evolved colours. Camouflage enables an animal to remain hidden from view. Animals use colour to advertise services such as cleaning to animals of other species; to signal their sexual status to other members of the same species; and in mimicry, taking advantage of the warning coloration of another species. Some animals use flashes of colour to divert attacks by startling predators. Zebras may possibly use motion dazzle, confusing a predator's attack by moving a bold pattern rapidly. Some animals are coloured for physical protection, with pigments in the skin to protect against sunburn, while some frogs can lighten or darken their skin for temperature regulation. Finally, animals can be coloured incidentally. For example, blood is red because the haem pigment needed to carry oxygen is red. Animals coloured in these ways can have striking natural patterns.
Animals produce colour in both direct and indirect ways. Direct production occurs through the presence of pigments, particles of coloured material such as those in freckles, in visible cells. Indirect production occurs by virtue of chromatophores, pigment-containing cells such as those beneath hair follicles. The distribution of the pigment particles in the chromatophores can change under hormonal or neuronal control. For fishes it has been demonstrated that chromatophores may respond directly to environmental stimuli such as visible light, UV radiation, temperature, pH, and chemicals. Colour change helps individuals become more or less visible and is important in agonistic displays and in camouflage. Some animals, including many butterflies and birds, have microscopic structures in scales, bristles or feathers which give them brilliant iridescent colours. Other animals, including squid and some deep-sea fish, can produce light, sometimes of different colours. Animals often use two or more of these mechanisms together to produce the colours and effects they need.
History
Animal coloration has been a topic of interest and research in biology for centuries. In the classical era, Aristotle recorded that the octopus was able to change its coloration to match its background, and when it was alarmed.
In his 1665 book Micrographia, Robert Hooke described the "fantastical" (structural, not pigment) colours of the peacock's feathers.
According to Charles Darwin's 1859 theory of natural selection, features such as coloration evolved by providing individual animals with a reproductive advantage. For example, individuals with slightly better camouflage than others of the same species would, on average, leave more offspring, as Darwin argued in his Origin of Species.
Henry Walter Bates's 1863 book The Naturalist on the River Amazons describes his extensive studies of the insects in the Amazon basin, and especially the butterflies. He discovered that apparently similar butterflies often belonged to different families, with a harmless species mimicking a poisonous or bitter-tasting species to reduce its chance of being attacked by a predator, in the process now called after him, Batesian mimicry.
Edward Bagnall Poulton's strongly Darwinian 1890 book The Colours of Animals, their meaning and use, especially considered in the case of insects argued the case for three aspects of animal coloration that are broadly accepted today but were controversial or wholly new at the time. It strongly supported Darwin's theory of sexual selection, arguing that the obvious differences between male and female birds such as the argus pheasant were selected by the females, pointing out that bright male plumage was found only in species "which court by day". The book introduced the concept of frequency-dependent selection, as when edible mimics are less frequent than the distasteful models whose colours and patterns they copy. In the book, Poulton also coined the term aposematism for warning coloration, which he identified in widely differing animal groups including mammals (such as the skunk), bees and wasps, beetles, and butterflies.
Frank Evers Beddard's 1892 book, Animal Coloration, acknowledged that natural selection existed but examined its application to camouflage, mimicry and sexual selection very critically. The book was in turn roundly criticised by Poulton.
Abbott Handerson Thayer's 1909 book Concealing-Coloration in the Animal Kingdom, completed by his son Gerald H. Thayer, argued correctly for the widespread use of crypsis among animals, and in particular described and explained countershading for the first time. However, the Thayers spoilt their case by arguing that camouflage was the sole purpose of animal coloration, which led them to claim that even the brilliant pink plumage of the flamingo or the roseate spoonbill was cryptic—against the momentarily pink sky at dawn or dusk. As a result, the book was mocked by critics including Theodore Roosevelt as having "pushed [the "doctrine" of concealing coloration] to such a fantastic extreme and to include such wild absurdities as to call for the application of common sense thereto."
Hugh Bamford Cott's 500-page book Adaptive Coloration in Animals, published in wartime 1940, systematically described the principles of camouflage and mimicry. The book contains hundreds of examples, over a hundred photographs and Cott's own accurate and artistic drawings, and 27 pages of references. Cott focussed especially on "maximum disruptive contrast", the kind of patterning used in military camouflage such as disruptive pattern material, and indeed he described such applications.
Animal coloration provided important early evidence for evolution by natural selection, at a time when little direct evidence was available.
Evolutionary reasons for animal coloration
Camouflage
One of the pioneers of research into animal coloration, Edward Bagnall Poulton classified the forms of protective coloration, in a way which is still helpful. He described: protective resemblance; aggressive resemblance; adventitious protection; and variable protective resemblance. These are covered in turn below.
Protective resemblance is used by prey to avoid predation. It includes special protective resemblance, now called mimesis, where the whole animal looks like some other object, for example when a caterpillar resembles a twig or a bird dropping. In general protective resemblance, now called crypsis, the animal's texture blends with the background, for example when a moth's colour and pattern blend in with tree bark.
Aggressive resemblance is used by predators or parasites. In special aggressive resemblance, the animal looks like something else, luring the prey or host to approach, for example when a flower mantis resembles a particular kind of flower, such as an orchid. In general aggressive resemblance, the predator or parasite blends in with the background, for example when a leopard is hard to see in long grass.
For adventitious protection, an animal uses materials such as twigs, sand, or pieces of shell to conceal its outline, for example when a caddis fly larva builds a decorated case, or when a decorator crab decorates its back with seaweed, sponges and stones.
In variable protective resemblance, an animal such as a chameleon, flatfish, squid or octopus changes its skin pattern and colour using special chromatophore cells to resemble whatever background it is currently resting on (as well as for signalling).
The main mechanisms to create the resemblances described by Poulton – whether in nature or in military applications – are crypsis, blending into the background so as to become hard to see (this covers both special and general resemblance); disruptive patterning, using colour and pattern to break up the animal's outline, which relates mainly to general resemblance; mimesis, resembling other objects of no special interest to the observer, which relates to special resemblance; countershading, using graded colour to create the illusion of flatness, which relates mainly to general resemblance; and counterillumination, producing light to match the background, notably in some species of squid.
Countershading was first described by the American artist Abbott Handerson Thayer, a pioneer in the theory of animal coloration. Thayer observed that whereas a painter takes a flat canvas and uses coloured paint to create the illusion of solidity by painting in shadows, animals such as deer are often darkest on their backs, becoming lighter towards the belly, creating (as zoologist Hugh Cott observed) the illusion of flatness, and against a matching background, of invisibility. Thayer's observation "Animals are painted by Nature, darkest on those parts which tend to be most lighted by the sky's light, and vice versa" is called Thayer's Law.
Signalling
Colour is widely used for signalling in animals as diverse as birds and shrimps. Signalling encompasses at least three purposes:
advertising, to signal a capability or service to other animals, whether within a species or not
sexual selection, where members of one sex choose to mate with suitably coloured members of the other sex, thus driving the development of such colours
warning, to signal that an animal is harmful, for example can sting, is poisonous or is bitter-tasting. Warning signals may be mimicked truthfully or untruthfully.
Advertising services
Advertising coloration can signal the services an animal offers to other animals. These may be of the same species, as in sexual selection, or of different species, as in cleaning symbiosis. Signals, which often combine colour and movement, may be understood by many different species; for example, the cleaning stations of the banded coral shrimp Stenopus hispidus are visited by different species of fish, and even by reptiles such as hawksbill sea turtles.
Sexual selection
Darwin observed that the males of some species, such as birds-of-paradise, were very different from the females.
Darwin explained such male-female differences in his theory of sexual selection in his book The Descent of Man. Once the females begin to select males according to any particular characteristic, such as a long tail or a coloured crest, that characteristic is emphasized more and more in the males. Eventually all the males will have the characteristics that the females are sexually selecting for, as only those males can reproduce. This mechanism is powerful enough to create features that are strongly disadvantageous to the males in other ways. For example, some male birds-of-paradise have wing or tail streamers that are so long that they impede flight, while their brilliant colours may make the males more vulnerable to predators. In the extreme, sexual selection may drive species to extinction, as has been argued for the enormous horns of the male Irish elk, which may have made it difficult for mature males to move and feed.
Different forms of sexual selection are possible, including rivalry among males, and selection of females by males.
Warning
Warning coloration (aposematism) is effectively the "opposite" of camouflage, and a special case of advertising. Its function is to make the animal, for example a wasp or a coral snake, highly conspicuous to potential predators, so that it is noticed, remembered, and then avoided. As Peter Forbes observes, "Human warning signs employ the same colours – red, yellow, black, and white – that nature uses to advertise dangerous creatures." Warning colours work by being associated by potential predators with something that makes the warning coloured animal unpleasant or dangerous. This can be achieved in several ways, by being any combination of:
distasteful, for example caterpillars, pupae and adults of the cinnabar moth, the monarch and the variable checkerspot butterfly have bitter-tasting chemicals in their blood. One monarch contains more than enough digitalis-like toxin to kill a cat, while a monarch extract makes starlings vomit.
foul-smelling, for example the skunk can eject a liquid with a long-lasting and powerful odour
aggressive and able to defend itself, for example honey badgers.
venomous, for example a wasp can deliver a painful sting, while snakes like the viper or coral snake can deliver a fatal bite.
Warning coloration can succeed either through inborn behaviour (instinct) on the part of potential predators, or through a learned avoidance. Either can lead to various forms of mimicry. Experiments show that avoidance is learned in birds, mammals, lizards, and amphibians, but that some birds such as great tits have inborn avoidance of certain colours and patterns such as black and yellow stripes.
Mimicry
Mimicry means that one species of animal resembles another species closely enough to deceive predators. To evolve, the mimicked species must have warning coloration, because appearing to be bitter-tasting or dangerous gives natural selection something to work on. Once a species has a slight, chance, resemblance to a warning coloured species, natural selection can drive its colours and patterns towards more perfect mimicry. There are numerous possible mechanisms, of which the best known are:
Batesian mimicry, where an edible species resembles a distasteful or dangerous species. This is most common in insects such as butterflies. A familiar example is the resemblance of harmless hoverflies (which have no sting) to bees.
Müllerian mimicry, where two or more distasteful or dangerous animal species resemble each other. This is most common among insects such as wasps and bees (hymenoptera).
Batesian mimicry was first described by the pioneering naturalist Henry W. Bates. When an edible prey animal comes to resemble, even slightly, a distasteful animal, natural selection favours those individuals that even very slightly better resemble the distasteful species. This is because even a small degree of protection reduces predation and increases the chance that an individual mimic will survive and reproduce. For example, many species of hoverfly are coloured black and yellow like bees, and are in consequence avoided by birds (and people).
Müllerian mimicry was first described by the pioneering naturalist Fritz Müller. When a distasteful animal comes to resemble a more common distasteful animal, natural selection favours individuals that even very slightly better resemble the target. For example, many species of stinging wasp and bee are similarly coloured black and yellow. Müller's explanation of the mechanism for this was one of the first uses of mathematics in biology. He argued that a predator, such as a young bird, must attack at least one insect, say a wasp, to learn that the black and yellow colours mean a stinging insect. If bees were differently coloured, the young bird would have to attack one of them also. But when bees and wasps resemble each other, the young bird need only attack one from the whole group to learn to avoid all of them. So, fewer bees are attacked if they mimic wasps; the same applies to wasps that mimic bees. The result is mutual resemblance for mutual protection.
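A minimal sketch of Müller's arithmetic, in modern notation (the symbols n, a_1 and a_2 are illustrative and not from the text): suppose a fixed number n of warningly coloured insects must be sacrificed each season while local predators learn the pattern, and two distasteful species with populations a_1 and a_2 share that pattern. The losses are then divided in proportion to abundance, so species 1 loses n a_1/(a_1+a_2) individuals rather than n, giving per-capita gains of

g_1 = \frac{n\,a_2}{a_1(a_1 + a_2)}, \qquad g_2 = \frac{n\,a_1}{a_2(a_1 + a_2)}, \qquad \frac{g_1}{g_2} = \frac{a_2^{2}}{a_1^{2}}

so the rarer of the two mimics gains disproportionately, in inverse proportion to the square of its relative abundance.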
Distraction
Startle
Some animals such as many moths, mantises and grasshoppers, have a repertoire of threatening or startling behaviour, such as suddenly displaying conspicuous eyespots or patches of bright and contrasting colours, so as to scare off or momentarily distract a predator. This gives the prey animal an opportunity to escape. The behaviour is deimatic (startling) rather than aposematic as these insects are palatable to predators, so the warning colours are a bluff, not an honest signal.
Motion dazzle
Some prey animals such as the zebra are marked with high-contrast patterns which may help to confuse their predators, such as lions, during a chase. The bold stripes of a herd of running zebras have been claimed to make it difficult for predators to estimate the prey's speed and direction accurately, or to identify individual animals, giving the prey an improved chance of escape. Since dazzle patterns (such as the zebra's stripes) make animals harder to catch when moving, but easier to detect when stationary, there is an evolutionary trade-off between dazzle and camouflage. There is also evidence that the zebra's stripes may provide some protection from flies and biting insects.
Physical protection
Many animals have dark pigments such as melanin in their skin, eyes and fur to protect themselves against sunburn (damage to living tissues caused by ultraviolet light). Another example of photoprotective pigments are the GFP-like proteins in some corals. In some jellyfish, rhizostomins have also been hypothesized to protect against ultraviolet damage.
Temperature regulation
Some frogs, such as Bokermannohyla alvarengai, which basks in sunlight, lighten their skin colour when hot (and darken it when cold), making their skin reflect more heat and so avoiding overheating.
Incidental coloration
Some animals are coloured purely incidentally because their blood contains pigments. For example, amphibians like the olm that live in caves may be largely colorless as colour has no function in that environment, but they show some red because of the haem pigment in their red blood cells, needed to carry oxygen. They also have a little orange coloured riboflavin in their skin. Human albinos and people with fair skin have a similar colour for the same reason.
Mechanisms of colour production in animals
Animal coloration may be the result of any combination of pigments, chromatophores, structural coloration and bioluminescence.
Coloration by pigments
Pigments are coloured chemicals (such as melanin) in animal tissues. For example, the Arctic fox has a white coat in winter (containing little pigment), and a brown coat in summer (containing more pigment), an example of seasonal camouflage (a polyphenism). Many animals, including mammals, birds, and amphibians, are unable to synthesize most of the pigments that colour their fur or feathers, other than the brown or black melanins that give many mammals their earth tones. For example, the bright yellow of an American goldfinch, the startling orange of a juvenile red-spotted newt, the deep red of a cardinal and the pink of a flamingo are all produced by carotenoid pigments synthesized by plants. In the case of the flamingo, the bird eats pink shrimps, which are themselves unable to synthesize carotenoids. The shrimps derive their body colour from microscopic red algae, which like most plants are able to create their own pigments, including both carotenoids and (green) chlorophyll. Animals that eat green plants do not become green, however, as chlorophyll does not survive digestion.
Variable coloration by chromatophores
Chromatophores are special pigment-containing cells that may change their size, but more often retain their original size while allowing the pigment within them to be redistributed, thus varying the colour and pattern of the animal. Chromatophores may respond to hormonal and/or neuronal control mechanisms, but direct responses to stimulation by visible light, UV radiation, temperature, pH changes, chemicals, etc. have also been documented. The voluntary control of chromatophores is known as metachrosis. For example, cuttlefish and chameleons can rapidly change their appearance, both for camouflage and for signalling, as Aristotle first noted over 2000 years ago:
Cephalopod molluscs like squid can voluntarily change their coloration by contracting or relaxing small muscles around their chromatophores. The energy cost of the complete activation of the chromatophore system is very high, nearly equalling all the energy used by an octopus at rest. Amphibians such as frogs have three kinds of star-shaped chromatophore cells in separate layers of their skin. The top layer contains 'xanthophores' with orange, red, or yellow pigments; the middle layer contains 'iridophores' with a silvery light-reflecting pigment; while the bottom layer contains 'melanophores' with dark melanin.
Structural coloration
While many animals are unable to synthesize carotenoid pigments to create red and yellow surfaces, the green and blue colours of bird feathers and insect carapaces are usually not produced by pigments at all, but by structural coloration. Structural coloration means the production of colour by microscopically-structured surfaces fine enough to interfere with visible light, sometimes in combination with pigments: for example, peacock tail feathers are pigmented brown, but their structure makes them appear blue, turquoise and green. Structural coloration can produce the most brilliant colours, often iridescent. For example, the blue/green gloss on the plumage of birds such as ducks, and the purple/blue/green/red colours of many beetles and butterflies are created by structural coloration. Animals use several methods to produce structural colour, as described in the table.
Bioluminescence
Bioluminescence is the production of light, such as by the photophores of marine animals, and the tails of glow-worms and fireflies. Bioluminescence, like other forms of metabolism, releases energy derived from the chemical energy of food. A pigment, luciferin, reacts with oxygen in a reaction catalysed by the enzyme luciferase, releasing light. Comb jellies such as Euplokamis are bioluminescent, creating blue and green light, especially when stressed; when disturbed, they secrete an ink which luminesces in the same colours. Since comb jellies are not very sensitive to light, their bioluminescence is unlikely to be used to signal to other members of the same species (e.g. to attract mates or repel rivals); more likely, the light helps to distract predators or parasites. Some species of squid have light-producing organs (photophores) scattered all over their undersides that create a sparkling glow. This provides counter-illumination camouflage, preventing the animal from appearing as a dark shape when seen from below.
Some anglerfish of the deep sea, where it is too dark to hunt by sight, contain symbiotic bacteria in the 'bait' on their 'fishing rods'. These emit light to attract prey.
| Biology and health sciences | Zoology: General | null |
2212484 | https://en.wikipedia.org/wiki/Eastern%20lowland%20gorilla | Eastern lowland gorilla | The eastern lowland gorilla (Gorilla beringei graueri) or Grauer's gorilla is a Critically Endangered subspecies of eastern gorilla endemic to the mountainous forests of eastern Democratic Republic of the Congo. Important populations of this gorilla live in the Kahuzi-Biega and Maiko National Parks and their adjacent forests, the Tayna Gorilla Reserve, the Usala forest and on the Itombwe Massif.
It is the largest of the four gorilla subspecies. It has a jet black coat like the mountain gorilla (Gorilla beringei beringei), although the hair is shorter on the head and body. The male's coat, like that of other gorillas, greys as the animal matures, resulting in the designation "silverback".
There are far fewer eastern lowland gorillas compared to western lowland gorillas. According to a 2004 report there were only about 5,000 eastern lowland gorillas in the wild, down to fewer than 3,800 in 2016, compared to over 100,000 western lowland gorillas. However, a survey in 2021 gave an estimate of up to 6,800 suggesting the decline was not as bad as feared although they are still facing severe threats. Outside their native range, only one female eastern lowland gorilla lives in captivity, at the Antwerp Zoo in Belgium.
Physical description
Eastern lowland gorillas are the largest subspecies of gorilla and the largest living primates. Males weigh between based on four males, females of although this had a small sample size. Males stand between , while females reach . An older weight calculated based on eight wild adult males is .
Habitat and ecology
Gorillas spend long hours feeding on plant matter every day. Gorilla groups are stable, staying together for months or years at a time, much like a family. Groups of eastern lowland gorillas are usually larger than those of western gorillas.
The eastern lowland gorilla has the widest altitudinal range of any of the gorilla subspecies, being found in mountainous, transitional and lowland tropical forests. One of the most studied eastern lowland gorilla populations lives in the highlands of Kahuzi-Biega, where habitats vary from dense primary forest to moderately moist woodland, to Cyperus swamp and peat bog.
Gorillas do not eat banana fruits, but they may destroy banana trees to eat the nutritious pith. The eastern lowland gorilla shows a preference for regenerating vegetation associated with abandoned villages and fields. Farmers who have come into contact with gorillas in their plantations have killed the gorillas, obtaining a double benefit: protecting their crops and selling the gorilla meat at market.
The eastern lowland gorilla has a varied plant diet including fruits, leaves, stems and bark as well as small insects such as ants and termites. Although they occasionally eat ants, insects form only a minor part of their diet. In comparison to western lowland gorillas, found in low altitude tropical forests, eastern lowland gorillas travel much less and increase their consumption of herbaceous vegetation.
Behaviour
Eastern lowland gorillas are highly sociable and very peaceful, living in groups of two to over 30. A group usually consists of one silverback, several females and their offspring. Silverbacks are strong and each group has one dominant leader (see alpha male). These males protect their group from danger. Young silverback males will slowly begin to leave their natal group when they reach maturity, and will then attempt to attract females to form their own group.
Relatively little is known about the social behaviour, history and ecology of eastern lowland gorillas, partly because of civil war in the Democratic Republic of the Congo. However, some aspects of social behaviour have been studied. For example, gorillas form harems which may include two full-grown males. One third of gorilla groups in East Africa have two grown males in their group.
Most primates are bonded together by the relationship between females, a pattern also seen in many human families. Once they reach maturity, both females and males usually leave the group. Females usually join another group or a lone silverback adult male, whereas males may stay together temporarily, until they attract females and establish their own groups. It is commonly believed that the structure of the gorilla group serves to reduce predation.
Reproduction
A female will give birth to a single infant after a gestation period of about months. They breastfeed for about three years. The baby can crawl at around nine weeks old and can walk at about 35 weeks old. Infant gorillas normally stay with their mother for three to four years and mature at around 8 years old (females) and 12 years old (males).
Threats
Threats to the eastern lowland gorilla's survival include poaching, civil unrest, and the destruction of gorilla habitat through logging, mining, and agriculture.
Bushmeat
The primary cause of the decline in eastern lowland gorilla populations is poaching for meat, known as bushmeat. It is eaten by displaced people residing in the war-affected region, by militia groups, and by loggers and miners. Surveys have shown that great apes, chimpanzees and bonobos comprise 0.5–2% of the meat found in bushmeat markets. Some researchers have found that up to of bushmeat are traded annually. This has a detrimental effect on the eastern lowland gorilla populations because of their slow rate of reproduction and their already struggling population. Although gorilla bushmeat constitutes only a small proportion of the bushmeat sold, it continues to encourage a decline in the gorilla populations being subjected to hunting. Endangered Species International stated that 300 gorillas are killed each year to supply the bushmeat markets in the Congo.
Civil unrest
Civil unrest in the Democratic Republic of Congo has resulted in a decline in eastern lowland gorillas. The region inhabited by eastern gorillas has decreased from in the past 50 years. This primate species now occupies only 13% of its historical area. Violence in the region has made research difficult, however, scientists have estimated that the population has decreased by more than 50% since the mid-1990s. In the mid-1990s, the population was recorded to be nearly 17,000 gorillas.
The civil war in the Democratic Republic of the Congo means military groups remain in the forest for long periods of time. Thus, poaching has increased as militia and refugees become hungry. Military leaders have also disarmed the park security guards in national parks, meaning the guards have virtually no control over the activities that occur within the park, or over those who enter it, when faced with armed soldiers. The militia groups present in the region restrict protection of the eastern lowland gorilla. It has been estimated that more than half of the 240 gorillas known in one study have been killed as a result of poaching. Researchers have also stated that areas outside the park are more difficult to patrol and are expected to show even higher levels of poaching.
Conservation groups negotiated with rebels who control the eastern Democratic Republic of the Congo to re-arm the park guards. After the war began, government funding of the park was stopped. Conservation groups, the International Gorilla Conservation Program and the Deutsche Gesellschaft für Technische Zusammenarbeit (German development agency), have funded the guards for the past several years.
Many multinational corporations are indirectly, and some directly, funding the civil war in the Democratic Republic of the Congo by buying illegal resources from the area or by trading resources for military weaponry. Reports from 2007 state that of cassiterite ($45 million USD), of wolframite (worth $4.27 million USD) and of coltan ($5.42 million USD) were exported in 2007. Coltan in particular is one of the main export resources bought illegally by multinational corporations and is growing in demand due to its use in cellphones. Traxy's alone bought of coltan in 2007, which is 57% of the Democratic Republic of the Congo's entire coltan. The United Nations Environmental Programme reported that resources from multinational corporations and pension funds in industrialized countries are "directed through subsidiary companies to help finance corruption and arms sales, processes that may involve 'conflict' natural resources". Private companies have been found to trade weapons for resources or provide access to weapons through subsidiary companies.
Approximately two million people, directly and indirectly affected by the Rwandan genocide in 1994, fled to Tanzania and the Democratic Republic of the Congo, mainly into the area of Virunga National Park. It has been estimated that there were 720,000 refugees living in five camps in the DRC bordering the park (Katale, Kahindo, Kibumba, Mugunga and Lac Vert). Deforestation occurred at a rate of 0.1 km2 per day as 80,000 refugees travelled into the park daily to find wood. Once the Congo war began in 1996, 500,000 refugees remained, putting pressure on the natural resources, including the eastern lowland gorilla.
Logging, mining, and agriculture
Illegal logging may be carried out by companies with no rights to the land or by legal landholders. Over-harvesting is an illegal practice often conducted by legal concession holders, and it encourages deforestation and illegal resource exportation. The areas logged are prime gorilla habitat, which is considered an international concern. Companies involved in illegal exploitation therefore encourage environmental destruction in the area and fuel the illegal export industry controlled by militia groups.
Conservation
Park conservation
Most parks in the Democratic Republic of the Congo are insecure areas restricting the access of park rangers. Although park rangers are trained to stop illegal hunting, the small number of park rangers do not have access to further training or equipment to handle the militia groups. In the Virunga National Park, for example, 190 park rangers have been killed in just the past 15 years of civil war. Laws in place enforce trans-boundary collaboration and have been proven successful in reducing the decline of the eastern lowland gorilla. Illegal extraction of resources from the Virunga National Park has been reduced by policing transportation across borders. This has reduced the financial input available to the militias in the region. Although park rangers have been successful in restricting the amount of illegal resources being transported out of the region, militia groups have retaliated by purposely killing a group of gorillas to threaten the park rangers. On 22 July 2007, 10 gorillas were killed in retaliation for the park rangers' interference with the exportation of illegal resources such as wood.
The militia have remained in control in the region as a result of the neighbouring countries. These militia groups trade minerals and timber illegally in exchange for arms from neighbouring countries, corrupt officials and subsidiaries of many multinational companies. Gorillas are also threatened directly by militia groups because of the prevalence of booby traps placed randomly throughout the forest. Although the eastern lowland gorilla population is directly affected by the violence of militia groups, their population is mainly endangered by habitat disruption from the extraction of natural resources.
Genetic studies
There was already evidence of inbreeding depression in some gorilla populations, evident through birth defects like syndactyly. A recent genome study, which included all four subspecies of gorilla, aimed to identify the levels of diversity and divergence among the remaining populations of gorilla. Results showed that the eastern lowland gorilla subspecies was in fact two distinct subgroups. This division could have been due to the small number of individuals sampled, or due to the social structures within the subspecies. Results suggest that within the eastern lowland gorilla subspecies, there is an extreme lack of variation, which could reduce the potential of the subspecies to undergo natural selection and adapt to their environment. This lack of diversity is thought to be due to a limited number of founders and low levels of migration, which has resulted in a high level of inbreeding in these small populations. Conservation interventions for the eastern lowland gorilla have suggested implementing captive breeding programs or translocations between the eastern lowland subgroups.
| Biology and health sciences | Apes | Animals |
2212867 | https://en.wikipedia.org/wiki/Detailed%20balance | Detailed balance | The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process.
History
The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility.
Five years before Boltzmann, James Clerk Maxwell used the principle of detailed balance for gas kinetics with the reference to the principle of sufficient reason. He compared the idea of detailed balance with other types of balancing (like cyclic balance) and found that "Now it is impossible to assign a reason" why detailed balance should be rejected (pg. 64).
In 1901, Rudolf Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that irreversible cycles A1 -> A2 -> ... -> An -> A1 are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. In 1931, Lars Onsager used these relations in his works, for which he was awarded the 1968 Nobel Prize in Chemistry.
Albert Einstein in 1916 used the principle of detailed balance in a background for his quantum theory of emission and absorption of radiation.
The principle of detailed balance has been used in Markov chain Monte Carlo methods since their invention in 1953. In particular, in the Metropolis–Hastings algorithm and in its important particular case, Gibbs sampling, it is used as a simple and reliable condition to provide the desirable equilibrium state.
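As an illustration of how detailed balance is used there, a random-walk Metropolis step can be sketched in a few lines. This is a minimal sketch with hypothetical names, assuming a symmetric proposal and a target density known only up to normalization; it is not a production sampler.

```python
import math
import random

def metropolis(log_pi, step, x0, n):
    """Random-walk Metropolis sketch. With a symmetric proposal, accepting
    a move x -> y with probability min(1, pi(y)/pi(x)) enforces
    pi(x) P(x, y) = pi(y) P(y, x): detailed balance with respect to pi."""
    x = x0
    chain = [x]
    for _ in range(n):
        y = x + random.uniform(-step, step)   # symmetric proposal
        delta = log_pi(y) - log_pi(x)
        if delta >= 0 or random.random() < math.exp(delta):
            x = y                             # accept the move
        chain.append(x)                       # on rejection, keep x
    return chain

# Example target: an unnormalized standard normal log-density.
samples = metropolis(lambda x: -0.5 * x * x, step=1.0, x0=0.0, n=10_000)
```

Because detailed balance holds step by step, the target distribution is stationary for the chain, which is exactly why the condition is "simple and reliable" in this setting.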
Now, the principle of detailed balance is a standard part of the university courses in statistical mechanics, physical chemistry, chemical and physical kinetics.
Microscopic background
The microscopic "reversing of time" turns at the kinetic level into the "reversing of arrows": the elementary processes transform into their reverse processes. For example, the reaction
transforms into
and conversely. (Here, are symbols of components or states, are coefficients). The equilibrium ensemble should be invariant with respect to this transformation because of microreversibility and the uniqueness of thermodynamic equilibrium. This leads us immediately to the concept of detailed balance: each process is equilibrated by its reverse process.
This reasoning is based on three assumptions:
The microscopic laws of motion do not change under time reversal;
Equilibrium is invariant under time reversal;
The macroscopic elementary processes are microscopically distinguishable. That is, they represent disjoint sets of microscopic events.
Any of these assumptions may be violated. For example, Boltzmann's collision can be represented as $A_v + A_w \to A_{v'} + A_{w'}$, where $A_v$ is a particle with velocity $v$. Under time reversal, $A_v$ transforms into $A_{-v}$. Therefore, the collision is transformed into the reverse collision by the PT transformation, where P is the space inversion and T is the time reversal. Detailed balance for Boltzmann's equation requires PT-invariance of collisions' dynamics, not just T-invariance. Indeed, after the time reversal the collision $A_v + A_w \to A_{v'} + A_{w'}$ transforms into $A_{-v'} + A_{-w'} \to A_{-v} + A_{-w}$, whereas for detailed balance we need the transformation into $A_{v'} + A_{w'} \to A_v + A_w$.
For this purpose, we need to additionally apply the space reversal P. Therefore, for detailed balance in Boltzmann's equation not T-invariance but PT-invariance is needed.
Equilibrium may be not T- or PT-invariant even if the laws of motion are invariant. This non-invariance may be caused by the spontaneous symmetry breaking. There exist nonreciprocal media (for example, some bi-isotropic materials) without T and PT invariance.
If different macroscopic processes are sampled from the same elementary microscopic events then macroscopic detailed balance may be violated even when microscopic detailed balance holds.
Now, after almost 150 years of development, the scope of validity and the violations of detailed balance in kinetics seem to be clear.
Detailed balance
Reversibility
A Markov process is called a reversible Markov process or reversible Markov chain if there exists a positive stationary distribution π that satisfies the detailed balance equations
$\pi_i P_{ij} = \pi_j P_{ji},$
where $P_{ij}$ is the Markov transition probability from state i to state j, i.e. $P_{ij} = \Pr(X_t = j \mid X_{t-1} = i)$, and $\pi_i$ and $\pi_j$ are the equilibrium probabilities of being in states i and j, respectively. When $\Pr(X_{t-1} = i) = \pi_i$ for all i, this is equivalent to the joint probability matrix $\Pr(X_{t-1} = i, X_t = j)$ being symmetric in i and j; or symmetric in $t-1$ and $t$.
The definition carries over straightforwardly to continuous variables, where π becomes a probability density, and $P(s', s)$ a transition kernel probability density from state s′ to state s:
$\pi(s')\, P(s', s) = \pi(s)\, P(s, s').$
The detailed balance condition is stronger than that required merely for a stationary distribution, because there are Markov processes with stationary distributions that do not have detailed balance.
Transition matrices that are symmetric ($P_{ij} = P_{ji}$, or $P(s', s) = P(s, s')$ in the continuous case) always have detailed balance. In these cases, a uniform distribution over the states is an equilibrium distribution.
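For a finite-state chain, the definition can be tested directly. The sketch below is a minimal numerical check with hypothetical function names: it computes a stationary distribution as the left eigenvector of the transition matrix for eigenvalue 1 and tests whether the probability flows $\pi_i P_{ij}$ are symmetric.

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalised to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return pi / pi.sum()

def has_detailed_balance(P, tol=1e-10):
    """Test pi_i * P_ij == pi_j * P_ji for all i, j via the flow matrix."""
    pi = stationary_distribution(P)
    flow = pi[:, None] * P          # flow[i, j] = pi_i * P_ij
    return np.allclose(flow, flow.T, atol=tol)

# A symmetric transition matrix: detailed balance holds with uniform pi.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
print(has_detailed_balance(P))  # True
```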
Kolmogorov's criterion
Reversibility is equivalent to Kolmogorov's criterion: the product of transition rates over any closed loop of states is the same in both directions.
For example, it implies that, for all a, b and c,
$P(a, b)\, P(b, c)\, P(c, a) = P(a, c)\, P(c, b)\, P(b, a).$
For example, a Markov chain with three states in which the only possible transitions form a one-way cycle (from the first state to the second, the second to the third, and the third back to the first) violates Kolmogorov's criterion.
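The loop products are easy to compare numerically. The following sketch, with illustrative names and a deterministic one-way 3-cycle as input, checks the criterion for a single given loop.

```python
import numpy as np

def kolmogorov_loop_ok(P, loop, tol=1e-12):
    """Compare the product of transition probabilities around `loop`
    in both directions; equality for every loop is Kolmogorov's criterion."""
    fwd = bwd = 1.0
    for a, b in zip(loop, loop[1:] + loop[:1]):
        fwd *= P[a, b]   # clockwise product
        bwd *= P[b, a]   # counterclockwise product
    return abs(fwd - bwd) < tol

# One-way 3-cycle: only 0->1, 1->2, 2->0 are possible; the criterion fails.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(kolmogorov_loop_ok(P, [0, 1, 2]))  # False: 1.0 != 0.0
```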
Closest reversible Markov chain
For continuous systems with detailed balance, it may be possible to continuously transform the coordinates until the equilibrium distribution is uniform, with a transition kernel which then is symmetric. In the case of discrete states, it may be possible to achieve something similar by breaking the Markov states into appropriately-sized degenerate sub-states.
For a Markov transition matrix and a stationary distribution, the detailed balance equations may not be valid. However, it can be shown that a unique Markov transition matrix exists which is closest according to the stationary distribution and a given norm. The closest matrix can be computed by solving a quadratic-convex optimization problem.
Detailed balance and entropy increase
For many systems of physical and chemical kinetics, detailed balance provides sufficient conditions for the strict increase of entropy in isolated systems. For example, the famous Boltzmann H-theorem states that, according to the Boltzmann equation, the principle of detailed balance implies positivity of entropy production. The Boltzmann formula (1872) for entropy production in rarefied gas kinetics with detailed balance served as a prototype of many similar formulas for dissipation in mass action kinetics and generalized mass action kinetics with detailed balance.
Nevertheless, the principle of detailed balance is not necessary for entropy growth. For example, in the linear irreversible cycle A1 -> A2 -> A3 -> A1, entropy production is positive but the principle of detailed balance does not hold.
Thus, the principle of detailed balance is a sufficient but not necessary condition for entropy increase in Boltzmann kinetics. These relations between the principle of detailed balance and the second law of thermodynamics were clarified in 1887 when Hendrik Lorentz objected to the Boltzmann H-theorem for polyatomic gases. Lorentz stated that the principle of detailed balance is not applicable to collisions of polyatomic molecules.
Boltzmann immediately invented a new, more general condition sufficient for entropy growth. Boltzmann's condition holds for all Markov processes, irrespective of time-reversibility. Later, entropy increase was proved for all Markov processes by a direct method. These theorems may be considered as simplifications of the Boltzmann result. Later, this condition was referred to as the "cyclic balance" condition (because it holds for irreversible cycles) or the "semi-detailed balance" or the "complex balance". In 1981, Carlo Cercignani and Maria Lampis proved that the Lorentz arguments were wrong and the principle of detailed balance is valid for polyatomic molecules. Nevertheless, the extended semi-detailed balance conditions invented by Boltzmann in this discussion remain a remarkable generalization of detailed balance.
Wegscheider's conditions for the generalized mass action law
In chemical kinetics, the elementary reactions are represented by the stoichiometric equations
$\sum_i \alpha_{ri} A_i \to \sum_i \beta_{ri} A_i \quad (r = 1, \ldots, m),$
where $A_i$ are the components and $\alpha_{ri}, \beta_{ri} \ge 0$ are the stoichiometric coefficients. Here, the reverse reactions with positive constants are included in the list separately. We need this separation of direct and reverse reactions to apply later the general formalism to systems with some irreversible reactions. The system of stoichiometric equations of elementary reactions is the reaction mechanism.
The stoichiometric matrix is $\boldsymbol{\Gamma} = (\gamma_{ri})$, with $\gamma_{ri} = \beta_{ri} - \alpha_{ri}$ (gain minus loss). This matrix need not be square. The stoichiometric vector $\gamma_r$ is the rth row of $\boldsymbol{\Gamma}$, with coordinates $\gamma_{ri}$.
According to the generalized mass action law, the reaction rate for an elementary reaction is
$w_r = k_r \prod_i a_i^{\alpha_{ri}},$
where $a_i$ is the activity (the "effective concentration") of $A_i$.
The reaction mechanism includes reactions with the reaction rate constants $k_r > 0$. For each r the following notations are used: $k_r^+ = k_r$; $w_r^+ = w_r$; $k_r^-$ is the reaction rate constant for the reverse reaction if it is in the reaction mechanism and 0 if it is not; $w_r^-$ is the reaction rate for the reverse reaction if it is in the reaction mechanism and 0 if it is not. For a reversible reaction, $K_r = k_r^+ / k_r^-$ is the equilibrium constant.
The principle of detailed balance for the generalized mass action law is: for given values $k_r$ there exists a positive equilibrium $a_i^{\mathrm{eq}} > 0$ that satisfies detailed balance, that is, $w_r^+ = w_r^-$ at equilibrium. This means that the system of linear detailed balance equations
$\sum_i \gamma_{ri} x_i = \ln k_r^+ - \ln k_r^- = \ln K_r$
is solvable ($x_i = \ln a_i^{\mathrm{eq}}$). The following classical result gives the necessary and sufficient conditions for the existence of a positive equilibrium with detailed balance (see, for example, the textbook).
Two conditions are sufficient and necessary for solvability of the system of detailed balance equations:
If $k_r^+ > 0$ then $k_r^- > 0$ and, conversely, if $k_r^- > 0$ then $k_r^+ > 0$ (reversibility);
For any solution $\boldsymbol{\lambda} = (\lambda_r)$ of the system
$\sum_r \lambda_r \gamma_r = 0$
the Wegscheider identity holds:
$\prod_r (k_r^+)^{\lambda_r} = \prod_r (k_r^-)^{\lambda_r}.$
Remark. It is sufficient to use in the Wegscheider conditions a basis of solutions of the system $\sum_r \lambda_r \gamma_r = 0$.
In particular, for any cycle in the monomolecular (linear) reactions the product of the reaction rate constants in the clockwise direction is equal to the product of the reaction rate constants in the counterclockwise direction. The same condition is valid for the reversible Markov processes (it is equivalent to the "no net flow" condition).
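For a monomolecular cycle this condition is easy to verify numerically. The sketch below is illustrative only: it assumes the forward constants are listed clockwise around the cycle and the reverse constants pair with them, and the function name and values are invented.

```python
import math

def cycle_balanced(k_forward, k_backward, tol=1e-9):
    """Wegscheider cycle condition for a monomolecular cycle: the product
    of clockwise rate constants equals the counterclockwise product."""
    clockwise = math.prod(k_forward)
    counterclockwise = math.prod(k_backward)
    return math.isclose(clockwise, counterclockwise, rel_tol=tol)

# A1 <=> A2 <=> A3 <=> A1 with constants chosen to satisfy the identity.
print(cycle_balanced([2.0, 3.0, 0.5], [1.0, 1.5, 2.0]))  # 3.0 == 3.0 -> True
```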
A simple nonlinear example gives us a linear cycle supplemented by one nonlinear step:
A1 <=> A2
A2 <=> A3
A3 <=> A1
A1 + A2 <=> 2A3
There are two nontrivial independent Wegscheider identities for this system:
$k_1^+ k_2^+ k_3^+ = k_1^- k_2^- k_3^-$
and
$\frac{k_3^+ k_4^+}{k_2^+} = \frac{k_3^- k_4^-}{k_2^-}.$
They correspond to the following linear relations between the stoichiometric vectors:
$\gamma_1 + \gamma_2 + \gamma_3 = 0$
and
$\gamma_3 + \gamma_4 = \gamma_2.$
The computational aspect of the Wegscheider conditions was studied by D. Colquhoun with co-authors.
The Wegscheider conditions demonstrate that whereas the principle of detailed balance states a local property of equilibrium, it implies the relations between the kinetic constants that are valid for all states far from equilibrium. This is possible because a kinetic law is known and relations between the rates of the elementary processes at equilibrium can be transformed into relations between kinetic constants which are used globally. For the Wegscheider conditions this kinetic law is the law of mass action (or the generalized law of mass action).
Dissipation in systems with detailed balance
To describe the dynamics of systems that obey the generalized mass action law, one has to represent the activities as functions of the concentrations cj and temperature. For this purpose, use the representation of the activity through the chemical potential:
$a_i = \exp\!\left( \frac{\mu_i - \mu_i^{\circ}}{RT} \right),$
where $\mu_i$ is the chemical potential of the species under the conditions of interest, $\mu_i^{\circ}$ is the chemical potential of that species in the chosen standard state, R is the gas constant and T is the thermodynamic temperature.
The chemical potential can be represented as a function of c and T, where c is the vector of concentrations with components cj. For ideal systems, $\mu_i = RT \ln c_i + \mu_i^{\circ}$ and $a_i = c_i$: the activity is the concentration and the generalized mass action law is the usual law of mass action.
Consider a system in isothermal (T = const) isochoric (volume V = const) conditions. For these conditions, the Helmholtz free energy $F(T, V, N)$ measures the "useful" work obtainable from a system. It is a function of the temperature T, the volume V and the amounts of chemical components Nj (usually measured in moles); N is the vector with components Nj. For ideal systems,
$F = \sum_i N_i \left( RT \left( \ln \frac{N_i}{V} - 1 \right) + \mu_i^{\circ}(T) \right).$
The chemical potential is a partial derivative: $\mu_i = \partial F(T, V, N) / \partial N_i$.
The chemical kinetic equations are
$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} (w_r^+ - w_r^-).$
If the principle of detailed balance is valid, then for any value of T there exists a positive point of detailed balance $c^{\mathrm{eq}}$:
$w_r^+(c^{\mathrm{eq}}, T) = w_r^-(c^{\mathrm{eq}}, T) = w_r^{\mathrm{eq}}.$
Elementary algebra gives
$w_r^+ = w_r^{\mathrm{eq}} \exp\!\left( \sum_i \frac{\alpha_{ri} (\mu_i - \mu_i^{\mathrm{eq}})}{RT} \right), \qquad w_r^- = w_r^{\mathrm{eq}} \exp\!\left( \sum_i \frac{\beta_{ri} (\mu_i - \mu_i^{\mathrm{eq}})}{RT} \right),$
where $\mu_i^{\mathrm{eq}} = \mu_i(c^{\mathrm{eq}}, T)$.
For the dissipation we obtain from these formulas:
$\frac{dF}{dt} = \sum_i \mu_i \frac{dN_i}{dt} = -VRT \sum_r \left( \ln w_r^+ - \ln w_r^- \right) \left( w_r^+ - w_r^- \right) \le 0.$
The inequality holds because ln is a monotone function and, hence, the expressions $\ln w_r^+ - \ln w_r^-$ and $w_r^+ - w_r^-$ always have the same sign.
Similar inequalities are valid for other classical conditions for the closed systems and the corresponding characteristic functions: for isothermal isobaric conditions the Gibbs free energy decreases, for the isochoric systems with the constant internal energy (isolated systems) the entropy increases as well as for isobaric systems with the constant enthalpy.
Onsager reciprocal relations and detailed balance
Let the principle of detailed balance be valid. Then, for small deviations from equilibrium, the kinetic response of the system can be approximated as linearly related to its deviation from chemical equilibrium, giving the reaction rates for the generalized mass action law as:
$w_r^+ - w_r^- \approx w_r^{\mathrm{eq}} \sum_i \frac{(\alpha_{ri} - \beta_{ri})(\mu_i - \mu_i^{\mathrm{eq}})}{RT} = -\frac{w_r^{\mathrm{eq}}}{RT} \sum_i \gamma_{ri} (\mu_i - \mu_i^{\mathrm{eq}}).$
Therefore, again in the linear response regime near equilibrium, the kinetic equations are:
$\frac{dN_i}{dt} = -\frac{V}{RT} \sum_j \left[ \sum_r w_r^{\mathrm{eq}} \gamma_{ri} \gamma_{rj} \right] (\mu_j - \mu_j^{\mathrm{eq}}).$
This is exactly the Onsager form: following the original work of Onsager, we should introduce the thermodynamic forces $X_j = \mu_j - \mu_j^{\mathrm{eq}}$ and the matrix of coefficients $\lambda_{ij}$ in the form
$\lambda_{ij} = -\frac{V}{RT} \sum_r w_r^{\mathrm{eq}} \gamma_{ri} \gamma_{rj}, \qquad \frac{dN_i}{dt} = \sum_j \lambda_{ij} X_j.$
The coefficient matrix is symmetric:
$\lambda_{ij} = \lambda_{ji}.$
These symmetry relations, $\lambda_{ij} = \lambda_{ji}$, are exactly the Onsager reciprocal relations. The coefficient matrix $\lambda$ is non-positive. It is negative on the linear span of the stoichiometric vectors $\gamma_r$.
So, the Onsager relations follow from the principle of detailed balance in the linear approximation near equilibrium.
Semi-detailed balance
To formulate the principle of semi-detailed balance, it is convenient to count the direct and inverse elementary reactions separately. In this case, the kinetic equations have the form:
$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} w_r.$
Let us use the notations $\alpha_r$, $\beta_r$ for the input and the output vectors of the stoichiometric coefficients of the rth elementary reaction. Let $Y$ be the set of all these vectors $\alpha_r, \beta_r$.
For each $\nu \in Y$, let us define two sets of numbers:
$r \in R_{\nu}^+$ if and only if $\nu$ is the vector of the input stoichiometric coefficients $\alpha_r$ for the rth elementary reaction; $r \in R_{\nu}^-$ if and only if $\nu$ is the vector of the output stoichiometric coefficients $\beta_r$ for the rth elementary reaction.
The principle of semi-detailed balance means that in equilibrium the semi-detailed balance condition holds: for every $\nu \in Y$,
$\sum_{r \in R_{\nu}^-} w_r = \sum_{r \in R_{\nu}^+} w_r.$
The semi-detailed balance condition is sufficient for stationarity: it implies that
$\frac{dN}{dt} = V \sum_r \gamma_r w_r = 0.$
For the Markov kinetics the semi-detailed balance condition is just the elementary balance equation and holds for any steady state. For the nonlinear mass action law it is, in general, a sufficient but not necessary condition for stationarity.
The semi-detailed balance condition is weaker than the detailed balance one: if the principle of detailed balance holds then the condition of semi-detailed balance also holds.
For systems that obey the generalized mass action law the semi-detailed balance condition is sufficient for the dissipation inequality (for the Helmholtz free energy under isothermal isochoric conditions and for the dissipation inequalities under other classical conditions for the corresponding thermodynamic potentials).
Boltzmann introduced the semi-detailed balance condition for collisions in 1887 and proved that it guarantees the positivity of the entropy production. For chemical kinetics, this condition (as the complex balance condition) was introduced by Horn and Jackson in 1972.
The microscopic backgrounds for the semi-detailed balance were found in the Markov microkinetics of the intermediate compounds that are present in small amounts and whose concentrations are in quasiequilibrium with the main components. Under these microscopic assumptions, the semi-detailed balance condition is just the balance equation for the Markov microkinetics according to the Michaelis–Menten–Stueckelberg theorem.
Dissipation in systems with semi-detailed balance
Let us represent the generalized mass action law in the equivalent form: the rate of the elementary process
$\sum_i \alpha_{ri} A_i \to \sum_i \beta_{ri} A_i$
is
$w_r = \varphi_r \exp\!\left( \sum_i \frac{\alpha_{ri} \mu_i}{RT} \right),$
where $\mu_i = \partial F(T, V, N) / \partial N_i$ is the chemical potential and $F$ is the Helmholtz free energy. The exponential term is called the Boltzmann factor and the multiplier $\varphi_r \ge 0$ is the kinetic factor.
Let us count the direct and reverse reactions in the kinetic equation separately:
$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} w_r.$
An auxiliary function $\theta(\lambda)$ of one variable $\lambda \in [0, 1]$ is convenient for the representation of dissipation for the mass action law:
$\theta(\lambda) = \sum_r \varphi_r \exp\!\left( \sum_i \frac{(\lambda \alpha_{ri} + (1 - \lambda) \beta_{ri}) \mu_i}{RT} \right).$
This function may be considered as the sum of the reaction rates for the deformed input stoichiometric coefficients $\tilde{\alpha}_r(\lambda) = \lambda \alpha_r + (1 - \lambda) \beta_r$. For $\lambda = 1$ it is just the sum of the reaction rates. The function $\theta(\lambda)$ is convex because $\theta''(\lambda) \ge 0$.
Direct calculation gives that, according to the kinetic equations,
$\frac{dF}{dt} = -VRT \left. \frac{d\theta(\lambda)}{d\lambda} \right|_{\lambda = 1}.$
This is the general dissipation formula for the generalized mass action law.
Convexity of $\theta(\lambda)$ gives the sufficient and necessary condition for the proper dissipation inequality:
$\frac{dF}{dt} < 0$ if and only if $\theta(\lambda) < \theta(1)$ for some $\lambda < 1$; $\frac{dF}{dt} \le 0$ if and only if $\theta(\lambda) \le \theta(1)$ for some $\lambda < 1$.
The semi-detailed balance condition can be transformed into the identity $\theta(0) \equiv \theta(1)$. Therefore, for systems with semi-detailed balance, $dF/dt \le 0$.
Cone theorem and local equivalence of detailed and complex balance
For any reaction mechanism and a given positive equilibrium, a cone of possible velocities for the systems with detailed balance is defined for any non-equilibrium state N:
$\mathbf{Q}_{\mathrm{DB}}(N) = \mathrm{cone}\{ \gamma_r \, \mathrm{sgn}(w_r^+(N) - w_r^-(N)) \mid r = 1, \ldots, m \},$
where cone stands for the conical hull and the piecewise-constant functions $\mathrm{sgn}(w_r^+(N) - w_r^-(N))$ do not depend on the (positive) values of equilibrium reaction rates $w_r^{\mathrm{eq}}$ and are defined by thermodynamic quantities under the assumption of detailed balance.
The cone theorem states that for the given reaction mechanism and given positive equilibrium, the velocity (dN/dt) at a state N for a system with complex balance belongs to the cone . That is, there exists a system with detailed balance, the same reaction mechanism, the same positive equilibrium, that gives the same velocity at state N. According to cone theorem, for a given state N, the set of velocities of the semidetailed balance systems coincides with the set of velocities of the detailed balance systems if their reaction mechanisms and equilibria coincide. This means local equivalence of detailed and complex balance.
Detailed balance for systems with irreversible reactions
Detailed balance states that in equilibrium each elementary process is equilibrated by its reverse process and requires reversibility of all elementary processes. For many real physico-chemical complex systems (e.g. homogeneous combustion, heterogeneous catalytic oxidation, most enzyme reactions etc.), detailed mechanisms include both reversible and irreversible reactions. If one represents irreversible reactions as limits of reversible steps, then it becomes obvious that not all reaction mechanisms with irreversible reactions can be obtained as limits of systems of reversible reactions with detailed balance. For example, the irreversible cycle A1 -> A2 -> A3 -> A1 cannot be obtained as such a limit but the reaction mechanism A1 -> A2 -> A3 <- A1 can.
Gorban–Yablonsky theorem. A system of reactions with some irreversible reactions is a limit of systems with detailed balance when some constants tend to zero if and only if (i) the reversible part of this system satisfies the principle of detailed balance and (ii) the convex hull of the stoichiometric vectors of the irreversible reactions has empty intersection with the linear span of the stoichiometric vectors of the reversible reactions. Physically, the last condition means that the irreversible reactions cannot be included in oriented cyclic pathways.
| Physical sciences | Statistical mechanics | Physics |
2215876 | https://en.wikipedia.org/wiki/Burrow | Burrow | A burrow is a hole or tunnel excavated into the ground by an animal to construct a space suitable for habitation or temporary refuge, or as a byproduct of locomotion. Burrows provide a form of shelter against predation and exposure to the elements, and can be found in nearly every biome and among various biological interactions. Many animal species are known to form burrows. These species range from small amphipods, to very large vertebrate species such as the polar bear. Burrows can be constructed into a wide variety of substrates and can range in complexity from a simple tube a few centimeters long to a complex network of interconnecting tunnels and chambers hundreds or thousands of meters in total length; an example of the latter level of complexity, a well-developed burrow, would be a rabbit warren.
Vertebrate burrows
A large variety of vertebrates construct or use burrows in many types of substrate; burrows can range widely in complexity. Some examples of vertebrate burrowing animals include a number of mammals, amphibians, fish (dragonet and lungfish), reptiles, and birds (including small dinosaurs). Mammals are perhaps the best known for burrowing. Mammal species such as insectivores like the mole, and rodents like the gopher, great gerbil and groundhog, often form burrows. Some other mammals that are known to burrow are the platypus, pangolin, pygmy rabbit, armadillo, rat and weasel. Some rabbits, members of the family Leporidae, are well-known burrowers. Some species, such as the groundhog, can construct burrows that occupy a full cubic metre, displacing about of dirt. There is evidence that rodents may construct the most complex burrows of all vertebrate burrowing species. For example, great gerbils live in family groups in extensive burrows, which can be seen on satellite images. Even unoccupied burrows can remain visible in the landscape for years. The burrows are distributed regularly, although the occupied burrows appear to be clustered in space. Even carnivorans, like the meerkat, and marsupials, such as wombats, are burrowers. Wombat burrows are large and some have been mapped using a drone. The largest burrowing animal is probably the polar bear when it makes its maternity den in snow or earth. Lizards are also known to construct and live in burrows, and may exhibit territorial behaviour over the burrows as well. There is also evidence that a burrow provides protection for the Adelaide pygmy blue-tongue skink (Tiliqua adelaidensis) when fighting, as they may fight from inside their burrows.
Burrows by birds are usually made in soft soils; some penguins and other pelagic seabirds are noted for such burrows. The Magellanic penguin is an example, constructing burrows along coastal Patagonian regions of Chile and Argentina. Other burrowing birds are puffins, kingfishers, and bee-eaters.
Kangaroo mice construct burrows in fine sand.
Invertebrate burrows
Scabies mites construct their burrows in the skin of the infested animal or human. Termites and some wasps construct burrows in the soil and wood. Ants construct burrows in the soil. Some sea urchins and clams can burrow into rock.
The burrows produced by invertebrate animals can be filled actively or passively. Dwelling burrows which remain open during the occupation by an organism are filled passively, by gravity rather than by the organism. Actively filled burrows, on the other hand, are filled with material by the burrowing organism itself.
The establishment of an invertebrate burrow often involves the soaking of surrounding sediment in mucus to prevent collapse and to seal off water flow.
Examples of burrowing invertebrates are insects, spiders, sea urchins, crustaceans, clams and worms.
Excavators, modifiers, and occupants
Burrowing animals can be divided into three categories: primary excavators, secondary modifiers and simple occupants. Primary excavators are the animals that originally dig and construct the burrow, and are generally very strong. Some animals considered to be primary excavators are the prairie dog, aardvark and wombat. Pygmy gerbils are an example of secondary modifiers, as they do not build an original burrow, but will live inside a burrow made by other animals and improve or change some aspects of the burrow for their own purpose. The third category, simple occupants, neither build nor modify the burrow but simply live inside or use it for their own purpose. Some species of bird make use of burrows built by tortoises, which is an example of simple occupancy. These animals can also be referred to as commensals.
Protection
Some species may spend the majority of their days inside a burrow, indicating it must have good conditions and provide some benefit to the animal. Burrows may be used by certain species as protection from harsh conditions, or from predators. Burrows may be found facing the direction of sunlight or away from the direction of cold wind. This could help with heat retention and insulation, providing protection from temperatures and conditions outside. Insects such as the earwig may construct burrows to live in during winter, and use them for physical protection. Some species will also use burrows to store and protect food. This provides a benefit to the animal as it can keep food away from other competition. It also allows the animal to keep a good stock of food inside the burrow to avoid extreme weather conditions or seasons where certain food sources may be unavailable. Additionally, burrows can protect animals that have just had their young, providing good conditions and safety for vulnerable newborn animals. Burrows may also provide shelter to animals residing in areas frequently destroyed by fire, as animals deep underground in a burrow may be kept dry, safe and at a stable temperature.
Fossil burrows
Burrows are also commonly preserved in the fossil record as burrow fossils, a type of trace fossil.
| Biology and health sciences | Shelters and structures | Animals |
2216445 | https://en.wikipedia.org/wiki/Three-field%20system | Three-field system | The three-field system is a regime of crop rotation in which a field is planted with one set of crops one year, a different set in the second year, and left fallow in the third year. A set of crops is rotated from one field to another. The technique was first used in China in the Eastern Zhou period, and was adopted in Europe in the medieval period.
The three-field system lets farmers plant more crops and therefore increase production. Under this system, the arable land of an estate or village was divided into three large fields: one was planted in the autumn with winter wheat or rye; the second field was planted with other crops such as peas, lentils, or beans; and the third was left fallow (unplanted). Cereal crops deplete the ground of nitrogen, but legumes can fix nitrogen and so fertilize the soil. The fallow fields were soon overgrown with weeds and used for grazing farm animals. Their excrement fertilized that field's soil to regain its nutrients. Crop assignments were rotated every year, so each field segment would be planted for two out of every three years.
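The yearly rotation is simple modular arithmetic, as the purely illustrative sketch below shows; the schedule and labels are assumptions based on the description above, not historical records.

```python
# Illustrative three-field rotation: each field cycles through three uses,
# so any one field is planted in two years out of every three.
USES = ["winter grain (wheat or rye)",
        "spring crop (peas, lentils, beans)",
        "fallow (grazed)"]

def field_use(field, year):
    """Use of a given field (0-2) in a given year of the rotation cycle."""
    return USES[(field + year) % 3]

for year in range(3):
    print(f"year {year}:", [field_use(f, year) for f in range(3)])
```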
Previously a two-field system had been in place, with half the land being left fallow. In Europe, the change to a three-field system happened from the 9th century to the 11th century. With more crops available to sell and agriculture dominating the economy at the time, the three-field system created a significant surplus and increased economic prosperity.
The three-field system needed more plowing of land and its introduction coincided with the adoption of the moldboard plow. These parallel developments complemented each other and increased agricultural productivity. The legume crop needed summer rain to succeed, and so the three-field system was less successful around the Mediterranean. Oats for horse food could also be planted in the spring, which, combined with the adoption of horse collars and horseshoes, led to the replacement of oxen by horses for many farming tasks, with an associated increase in agricultural productivity and the nutrition available to the population.
In his 1769 work Lehre vom Gyps als vorzueglich guten Dung zu allen Erd-Gewaechsen auf Aeckern und Wiesen, Hopfen- und Weinbergen, Johann Friedrich Mayer was one of the first Germans to advocate new ways of expanding beyond the medieval three-field system.
| Technology | Soil and soil management | null |
1010280 | https://en.wikipedia.org/wiki/File%20system | File system | In computing, a file system or filesystem (often abbreviated to FS or fs) governs file organization and access. A local file system is a capability of an operating system that services the applications running on the same computer. A distributed file system is a protocol that provides file access between networked computers.
A file system provides a data storage service that allows applications to share mass storage. Without a file system, applications could access the storage in incompatible ways that lead to resource contention, data corruption and data loss.
There are many file system designs and implementations with various structure and features and various resulting characteristics such as speed, flexibility, security, size and more.
File systems have been developed for many types of storage devices, including hard disk drives (HDDs), solid-state drives (SSDs), magnetic tapes and optical discs.
A portion of the computer main memory can be set up as a RAM disk that serves as a storage device for a file system. File systems such as tmpfs can store files in virtual memory.
A virtual file system provides access to files that are either computed on request, called virtual files (see procfs and sysfs), or are mappings into another, backing storage.
Etymology
From and before the advent of computers the terms file system, filing system and system for filing were used to describe methods of organizing, storing and retrieving paper documents. By 1961, the term file system was being applied to computerized filing alongside the original meaning. By 1964, it was in general use.
Architecture
A local file system's architecture can be described as layers of abstraction even though a particular file system design may not actually separate the concepts.
The logical file system layer provides relatively high-level access via an application programming interface (API) for file operations including open, close, read and write, delegating operations to lower layers. This layer manages open file table entries and per-process file descriptors. It provides file access, directory operations, security and protection.
The virtual file system, an optional layer, supports multiple concurrent instances of physical file systems, each of which is called a file system implementation.
The physical file system layer provides relatively low-level access to a storage device (e.g. disk). It reads and writes data blocks, provides buffering and other memory management and controls placement of blocks in specific locations on the storage medium. This layer uses device drivers or channel I/O to drive the storage device.
Attributes
File names
A file name, or filename, identifies a file to consuming applications and in some cases users.
A file name is unique so that an application can refer to exactly one file for a particular name. If the file system supports directories, then generally file name uniqueness is enforced within the context of each directory. In other words, a storage can contain multiple files with the same name, but not in the same directory.
Most file systems restrict the length of a file name.
Some file systems match file names as case sensitive and others as case insensitive. For example, the names MYFILE and myfile match the same file for case insensitive, but different files for case sensitive.
Most modern file systems allow a file name to contain a wide range of characters from the Unicode character set. Some restrict characters such as those used to indicate special attributes such as a device, device type, directory prefix, file path separator, or file type.
Directories
File systems typically support organizing files into directories, also called folders, which segregate files into groups.
This may be implemented by associating the file name with an index in a table of contents or an inode in a Unix-like file system.
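As a toy illustration of that association (the file names and inode numbers below are invented), a directory can be pictured as a name-to-inode table:

```python
# A directory sketch: a table mapping names to inode numbers.
directory = {"readme.txt": 42, "notes.txt": 97}

def lookup(table, name):
    """Resolve a file name to its inode number within one directory;
    uniqueness of keys in the table gives per-directory name uniqueness."""
    return table.get(name)

print(lookup(directory, "readme.txt"))  # 42
print(lookup(directory, "missing"))     # None: no such directory entry
```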
Directory structures may be flat (i.e. linear), or allow hierarchies by allowing a directory to contain directories, called subdirectories.
The first file system to support arbitrary hierarchies of directories was used in the Multics operating system. The native file systems of Unix-like systems also support arbitrary directory hierarchies, as do Apple's Hierarchical File System and its successor HFS+ in classic Mac OS, the FAT file system in MS-DOS 2.0 and later versions of MS-DOS and in Microsoft Windows, the NTFS file system in the Windows NT family of operating systems, and the ODS-2 (On-Disk Structure-2) and higher levels of the Files-11 file system in OpenVMS.
Metadata
In addition to data, the file content, a file system also manages associated metadata which may include but is not limited to:
name
size which may be stored as the number of blocks allocated or as a byte count
when created, last accessed, last backed-up
owner user and group
access permissions
file attributes such as whether the file is read-only, executable, etc.
device type (e.g. block, character, socket, subdirectory, etc.)
A file system stores associated metadata separate from the content of the file.
Most file systems store the names of all the files in one directory in one place—the directory table for that directory—which is often stored like any other file.
Many file systems put only some of the metadata for a file in the directory table, and the rest of the metadata for that file in a completely separate structure, such as the inode.
Most file systems also store metadata not associated with any one particular file.
Such metadata includes information about unused regions—free space bitmap, block availability map—and information about bad sectors.
Often such information about an allocation group is stored inside the allocation group itself.
Additional attributes can be associated on file systems, such as NTFS, XFS, ext2, ext3, some versions of UFS, and HFS+, using extended file attributes. Some file systems provide for user defined attributes such as the author of the document, the character encoding of a document or the size of an image.
Some file systems allow for different data collections to be associated with one file name. These separate collections may be referred to as streams or forks. Apple has long used a forked file system on the Macintosh, and Microsoft supports streams in NTFS. Some file systems maintain multiple past revisions of a file under a single file name; the file name by itself retrieves the most recent version, while prior saved versions can be accessed using a special naming convention such as "filename;4" or "filename(-4)" to access the version from four saves ago.
See comparison of file systems § Metadata for details on which file systems support which kinds of metadata.
Storage space organization
A local file system tracks which areas of storage belong to which file and which are not being used.
When a file system creates a file, it allocates space for data. Some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
To delete a file, the file system records that the file's space is free; available to use for another file.
A local file system manages storage space to provide a level of reliability and efficiency. Generally, it allocates storage device space in a granular manner, usually in multiples of a physical unit (i.e. bytes). For example, Apple DOS of the early 1980s allocated 256-byte sectors on 140-kilobyte floppy disks using a track/sector map.
The granular nature results in unused space, sometimes called slack space, for each file except for those that have the rare size that is a multiple of the granular allocation. For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB.
Generally, the allocation unit size is set when the storage is configured.
Choosing a relatively small size compared to the files stored results in excessive access overhead.
Choosing a relatively large size results in excessive unused space.
Choosing an allocation size based on the average size of files expected to be in the storage tends to minimize unusable space.
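The slack-space arithmetic above can be sketched in a few lines. The file sizes here are invented, and the model simply rounds each file up to a whole number of allocation units, which is the source of the average half-unit waste per file.

```python
def total_slack_bytes(file_sizes, allocation_unit):
    """Sum the allocated-but-unused bytes for a set of file sizes,
    assuming each file occupies whole allocation units."""
    slack = 0
    for size in file_sizes:
        remainder = size % allocation_unit
        if remainder:
            slack += allocation_unit - remainder
    return slack

files = [100, 512, 1_000, 70_000]            # byte counts, illustrative only
print(total_slack_bytes(files, 512))         # small clusters: little slack
print(total_slack_bytes(files, 65_536))      # 64 KB clusters: much more slack
```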
Fragmentation
As a file system creates, modifies and deletes files, the underlying storage representation may become fragmented. Files and the unused space between files will occupy allocation blocks that are not contiguous.
A file becomes fragmented if space needed to store its content cannot be allocated in contiguous blocks. Free space becomes fragmented when files are deleted.
This is invisible to the end user and the system still works correctly. However, this can degrade performance on some storage hardware that works better with contiguous blocks, such as hard disk drives. Other hardware, such as solid-state drives, is not affected by fragmentation.
Access control
A file system often supports access control of data that it manages.
The intent of access control is often to prevent certain users from reading or modifying certain files.
Access control can also restrict access by program in order to ensure that data is modified in a controlled way. Examples include passwords stored in the metadata of the file or elsewhere and file permissions in the form of permission bits, access control lists, or capabilities. The need for file system utilities to be able to access the data at the media level to reorganize the structures and provide efficient backup usually means that these are only effective for polite users but are not effective against intruders.
Methods for encrypting file data are sometimes included in the file system. This can be very effective, since there is no need for file system utilities to know the encryption seed in order to manage the data. The risks of relying on encryption include the fact that an attacker can copy the data and use brute force to decrypt it. Additionally, losing the seed means losing the data.
Storage quota
Some operating systems allow a system administrator to enable disk quotas to limit a user's use of storage space.
Data integrity
A file system typically ensures that stored data remains consistent in normal operation as well as in exceptional situations such as:
accessing program neglects to inform the file system that it has completed file access (to close a file)
accessing program terminates abnormally (crashes)
media failure
loss of connection to remote systems
operating system failure
system reset (soft reboot)
power failure (hard reboot)
Recovery from exceptional situations may include updating metadata, directory entries and handling data that was buffered but not written to storage media.
Recording
A file system might record events to allow analysis of issues such as:
file or systemic problems and performance
nefarious access
Data access
Byte stream access
Many file systems access data as a stream of bytes. Typically, to read file data, a program provides a memory buffer and the file system retrieves data from the medium and then writes the data to the buffer. A write involves the program providing a buffer of bytes that the file system reads and then stores to the medium.
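A minimal Python sketch of this model (the file name demo.bin is just an example): the program supplies bytes that the file system stores on a write, and supplies a buffer that the file system fills on a read.

```python
data = b"example payload"

# Write: the program provides a buffer of bytes; the file system stores it.
with open("demo.bin", "wb") as f:
    f.write(data)

# Read: the program provides a memory buffer; the file system fills it.
buf = bytearray(64)
with open("demo.bin", "rb") as f:
    n = f.readinto(buf)
print(n, bytes(buf[:n]))  # 15 b'example payload'
```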
Record access
Some file systems, or layers on top of a file system, allow a program to define a record so that a program can read and write data as a structure rather than an unorganized sequence of bytes.
If a fixed-length record definition is used, then the location of the nth record can be calculated mathematically, which is relatively fast compared to parsing the data for record separators.
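A minimal sketch of that calculation in Python; the record layout (a 4-byte integer id plus a 20-byte name field) and the file name records.dat are hypothetical, chosen only to illustrate the offset arithmetic.

```python
import struct

RECORD = struct.Struct("<i20s")  # fixed-length record: id + 20-byte name field

def read_record(f, n):
    # With fixed-length records, the nth record starts at byte n * record_size;
    # no scanning for separators is needed.
    f.seek(n * RECORD.size)
    rec_id, name = RECORD.unpack(f.read(RECORD.size))
    return rec_id, name.rstrip(b"\x00")

with open("records.dat", "wb+") as f:
    for i, name in enumerate([b"alpha", b"beta", b"gamma"]):
        f.write(RECORD.pack(i, name))  # struct pads the name to 20 bytes
    print(read_record(f, 2))  # (2, b'gamma')
```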
An identification for each record, also known as a key, allows a program to read, write and update records without regard to their location in storage. Such storage requires managing blocks of media, usually separating key blocks and data blocks. Efficient algorithms can be developed with pyramid structures for locating records.
Utilities
Typically, a file system can be managed by the user via various utility programs.
Some utilities allow the user to create, configure and remove an instance of a file system; they may also allow the space allocated to the file system to be extended or truncated.
Directory utilities may be used to create, rename and delete directory entries, which are also known as dentries (singular: dentry), and to alter metadata associated with a directory. Directory utilities may also include capabilities to create additional links to a directory (hard links in Unix), to rename parent links (".." in Unix-like operating systems), and to create bidirectional links to files.
File utilities create, list, copy, move and delete files, and alter metadata. They may be able to truncate data, truncate or extend space allocation, append to, move, and modify files in-place. Depending on the underlying structure of the file system, they may provide a mechanism to prepend to or truncate from the beginning of a file, insert entries into the middle of a file, or delete entries from a file. Utilities to free space for deleted files, if the file system provides an undelete function, also belong to this category.
Some file systems defer operations such as reorganization of free space, secure erasing of free space, and rebuilding of hierarchical structures by providing utilities to perform these functions at times of minimal activity. An example is the file system defragmentation utilities.
Some of the most important features of file system utilities are supervisory activities which may involve bypassing ownership or direct access to the underlying device. These include high-performance backup and recovery, data replication, and reorganization of various data structures and allocation tables within the file system.
File system API
Utilities, libraries and programs use file system APIs to make requests of the file system. These include data transfer, positioning, updating metadata, managing directories, managing access specifications, and removal.
Multiple file systems within a single system
Frequently, retail systems are configured with a single file system occupying the entire storage device.
Another approach is to partition the disk so that several file systems with different attributes can be used. One file system, for use as browser cache or email storage, might be configured with a small allocation size. This keeps the activity of creating and deleting files typical of browser activity in a narrow area of the disk where it will not interfere with other file allocations. Another partition might be created for the storage of audio or video files with a relatively large block size. Yet another may normally be set read-only and only periodically be set writable. Some file systems, such as ZFS and APFS, support multiple file systems sharing a common pool of free blocks, supporting several file systems with different attributes without having to reserve a fixed amount of space for each file system.
A third approach, which is mostly used in cloud systems, is to use "disk images" to house additional file systems, with the same attributes or not, within another (host) file system as a file. A common example is virtualization: one user can run an experimental Linux distribution (using the ext4 file system) in a virtual machine under his/her production Windows environment (using NTFS). The ext4 file system resides in a disk image, which is treated as a file (or multiple files, depending on the hypervisor and settings) in the NTFS host file system.
Having multiple file systems on a single system has the additional benefit that in the event of a corruption of a single file system, the remaining file systems will frequently still be intact. This includes virus destruction of the system file system or even a system that will not boot. File system utilities which require dedicated access can be effectively completed piecemeal. In addition, defragmentation may be more effective. Several system maintenance utilities, such as virus scans and backups, can also be processed in segments. For example, it is not necessary to back up the file system containing videos along with all the other files if none have been added since the last backup. As for the image files, one can easily "spin off" differential images which contain only "new" data written to the master (original) image. Differential images can be used both for safety (as a "disposable" system that can be quickly restored if destroyed or contaminated by a virus, since the old image can be removed and a new image created in a matter of seconds, even without automated procedures) and for quick virtual machine deployment (since the differential images can be quickly spawned using a script in batches).
Types
Disk file systems
A disk file system takes advantage of the ability of disk storage media to randomly address data in a short amount of time. Additional considerations include the speed of accessing data following that initially requested and the anticipation that the following data may also be requested. This permits multiple users (or processes) access to various data on the disk without regard to the sequential location of the data. Examples include FAT (FAT12, FAT16, FAT32), exFAT, NTFS, ReFS, HFS and HFS+, HPFS, APFS, UFS, ext2, ext3, ext4, XFS, btrfs, Files-11, Veritas File System, VMFS, ZFS, ReiserFS, NSS and ScoutFS. Some disk file systems are journaling file systems or versioning file systems.
Optical discs
ISO 9660 and Universal Disk Format (UDF) are two common formats that target Compact Discs, DVDs and Blu-ray discs. Mount Rainier is an extension to UDF, supported since the 2.6 series of the Linux kernel and since Windows Vista, that facilitates rewriting to DVDs.
Flash file systems
A flash file system considers the special abilities, performance and restrictions of flash memory devices. Frequently a disk file system can use a flash memory device as the underlying storage media, but it is much better to use a file system specifically designed for a flash device.
Tape file systems
A tape file system is a file system and tape format designed to store files on tape. Magnetic tapes are sequential storage media with significantly longer random data access times than disks, posing challenges to the creation and efficient management of a general-purpose file system.
In a disk file system there is typically a master file directory, and a map of used and free data regions. Any file additions, changes, or removals require updating the directory and the used/free maps. Random access to data regions is measured in milliseconds so this system works well for disks.
Tape requires linear motion to wind and unwind potentially very long reels of media. This tape motion may take several seconds to several minutes to move the read/write head from one end of the tape to the other.
Consequently, a master file directory and usage map can be extremely slow and inefficient with tape. Writing typically involves reading the block usage map to find free blocks for writing, updating the usage map and directory to add the data, and then advancing the tape to write the data in the correct spot. Each additional file write requires updating the map and directory and writing the data, which may take several seconds to occur for each file.
Tape file systems instead typically allow for the file directory to be spread across the tape intermixed with the data, referred to as streaming, so that time-consuming and repeated tape motions are not required to write new data.
However, a side effect of this design is that reading the file directory of a tape usually requires scanning the entire tape to read all the scattered directory entries. Most data archiving software that works with tape storage will store a local copy of the tape catalog on a disk file system, so that adding files to a tape can be done quickly without having to rescan the tape media. The local tape catalog copy is usually discarded if not used for a specified period of time, at which point the tape must be re-scanned if it is to be used in the future.
IBM has developed a file system for tape called the Linear Tape File System. The IBM implementation of this file system has been released as the open-source IBM Linear Tape File System — Single Drive Edition (LTFS-SDE) product. The Linear Tape File System uses a separate partition on the tape to record the index meta-data, thereby avoiding the problems associated with scattering directory entries across the entire tape.
Tape formatting
Writing data to, erasing, or formatting a tape is often a significantly time-consuming process and can take several hours on large tapes. With many data tape technologies it is not necessary to format the tape before overwriting new data to the tape. This is due to the inherently destructive nature of overwriting data on sequential media.
Because of the time it can take to format a tape, typically tapes are pre-formatted so that the tape user does not need to spend time preparing each new tape for use. All that is usually necessary is to write an identifying media label to the tape before use, and even this can be automatically written by software when a new tape is used for the first time.
Database file systems
Another concept for file management is the idea of a database-based file system. Instead of, or in addition to, hierarchical structured management, files are identified by their characteristics, like type of file, topic, author, or similar rich metadata.
IBM DB2 for i (formerly known as DB2/400 and DB2 for i5/OS) is a database file system that is part of the object-based IBM i operating system (formerly known as OS/400 and i5/OS). It incorporates a single-level store, runs on IBM Power Systems (formerly known as AS/400 and iSeries), and was designed by Frank G. Soltis, IBM's former chief scientist for IBM i. Between about 1978 and 1988, Frank G. Soltis and his team at IBM Rochester successfully designed and applied technologies like the database file system where others, such as Microsoft, later failed. These technologies are informally known as 'Fortress Rochester'; in a few basic aspects they were extended from early mainframe technologies, but in many ways they were more advanced from a technological perspective.
Some other projects that are not "pure" database file systems but that use some aspects of a database file system:
Many Web content management systems use a relational DBMS to store and retrieve files. For example, XHTML files are stored as XML or text fields, while image files are stored as blob fields; SQL SELECT (with optional XPath) statements retrieve the files, and allow the use of more sophisticated logic and richer information associations than "usual file systems." Many CMSs also have the option of storing only metadata within the database, with the standard filesystem used to store the content of files.
Very large file systems, embodied by applications like Apache Hadoop and Google File System, use some database file system concepts.
Transactional file systems
Some programs need to either make multiple file system changes, or, if one or more of the changes fail for any reason, make none of the changes. For example, a program which is installing or updating software may write executables, libraries, and/or configuration files. If some of the writing fails and the software is left partially installed or updated, the software may be broken or unusable. An incomplete update of a key system utility, such as the command shell, may leave the entire system in an unusable state.
Transaction processing introduces the atomicity guarantee, ensuring that operations inside of a transaction are either all committed or the transaction can be aborted and the system discards all of its partial results. This means that if there is a crash or power failure, after recovery, the stored state will be consistent. Either the software will be completely installed or the failed installation will be completely rolled back, but an unusable partial install will not be left on the system. Transactions also provide the isolation guarantee, meaning that operations within a transaction are hidden from other threads on the system until the transaction commits, and that interfering operations on the system will be properly serialized with the transaction.
Windows, beginning with Vista, added transaction support to NTFS, in a feature called Transactional NTFS, but its use is now discouraged. There are a number of research prototypes of transactional file systems for UNIX systems, including the Valor file system, Amino, LFS, and a transactional ext3 file system on the TxOS kernel, as well as transactional file systems targeting embedded systems, such as TFFS.
Ensuring consistency across multiple file system operations is difficult, if not impossible, without file system transactions. File locking can be used as a concurrency control mechanism for individual files, but it typically does not protect the directory structure or file metadata. For instance, file locking cannot prevent TOCTTOU race conditions on symbolic links.
File locking also cannot automatically roll back a failed operation, such as a software upgrade; this requires atomicity.
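In the absence of file system transactions, applications commonly approximate atomicity for a single file with a write-to-temporary-then-rename pattern. The following Python sketch assumes POSIX rename semantics (os.replace is also atomic on NTFS); it is a workaround for one file, not a substitute for multi-file transactions.

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    # Write to a temporary file in the same directory, force it to stable
    # storage, then rename over the target. Readers see either the old
    # contents or the new contents, never a partially written file.
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic rename over the destination
    except BaseException:
        os.unlink(tmp)  # roll back: discard the partial temporary file
        raise
```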
Journaling is one technique used to introduce transaction-level consistency to file system structures. Journal transactions are not exposed to programs as part of the OS API; they are only used internally to ensure consistency at the granularity of a single system call.
Data backup systems typically do not provide support for direct backup of data stored in a transactional manner, which makes the recovery of reliable and consistent data sets difficult. Most backup software simply notes what files have changed since a certain time, regardless of the transactional state shared across multiple files in the overall dataset. As a workaround, some database systems simply produce an archived state file containing all data up to that point, and the backup software only backs that up and does not interact directly with the active transactional databases at all. Recovery requires separate recreation of the database from the state file after the file has been restored by the backup software.
Network file systems
A network file system is a file system that acts as a client for a remote file access protocol, providing access to files on a server. Programs using local interfaces can transparently create, manage and access hierarchical directories and files in remote network-connected computers. Examples of network file systems include clients for the NFS, AFS, SMB protocols, and file-system-like clients for FTP and WebDAV.
Shared disk file systems
A shared disk file system is one in which a number of machines (usually servers) all have access to the same external disk subsystem (usually a storage area network). The file system arbitrates access to that subsystem, preventing write collisions. Examples include GFS2 from Red Hat, GPFS, now known as Spectrum Scale, from IBM, SFS from DataPlow, CXFS from SGI, StorNext from Quantum Corporation and ScoutFS from Versity.
Special file systems
Some file systems expose elements of the operating system as files so they can be acted on via the file system API. This is common in Unix-like operating systems, and to a lesser extent in other operating systems. Examples include:
devfs, udev, TOPS-10 expose I/O devices or pseudo-devices as special files
configfs and sysfs expose special files that can be used to query and configure Linux kernel information
procfs exposes process information as special files
Minimal file system / audio-cassette storage
In the 1970s disk and digital tape devices were too expensive for some early microcomputer users. An inexpensive basic data storage system was devised that used common audio cassette tape.
When the system needed to write data, the user was notified to press "RECORD" on the cassette recorder, then press "RETURN" on the keyboard to notify the system that the cassette recorder was recording. The system wrote a sound to provide time synchronization, then modulated sounds that encoded a prefix, the data, a checksum and a suffix. When the system needed to read data, the user was instructed to press "PLAY" on the cassette recorder. The system would listen to the sounds on the tape, waiting until a burst of sound could be recognized as the synchronization. The system would then interpret subsequent sounds as data. When the data read was complete, the system would notify the user to press "STOP" on the cassette recorder. It was primitive, but it (mostly) worked. Data was stored sequentially, usually in an unnamed format, although some systems (such as the Commodore PET series of computers) did allow the files to be named. Multiple sets of data could be written and located by fast-forwarding the tape and observing the tape counter to find the approximate start of the next data region on the tape. The user might have to listen to the sounds to find the right spot to begin playing the next data region. Some implementations even included audible sounds interspersed with the data.
Flat file systems
In a flat file system, there are no subdirectories; directory entries for all files are stored in a single directory.
When floppy disk media was first available this type of file system was adequate due to the relatively small amount of data space available. CP/M machines featured a flat file system, where files could be assigned to one of 16 user areas, and generic file operations could be narrowed to work on one of them instead of defaulting to work on all of them. These user areas were no more than special attributes associated with the files; that is, it was not necessary to define a specific quota for each of these areas, and files could be added to groups for as long as there was still free storage space on the disk. The early Apple Macintosh also featured a flat file system, the Macintosh File System (MFS). It was unusual in that the file management program (Macintosh Finder) created the illusion of a partially hierarchical filing system on top of MFS. This structure required every file to have a unique name, even if it appeared to be in a separate folder. IBM DOS/360 and OS/360 store entries for all files on a disk pack (volume) in a directory on the pack called a Volume Table of Contents (VTOC).
While simple, flat file systems become awkward as the number of files grows, making it difficult to organize data into related groups of files.
A recent addition to the flat file system family is Amazon's S3, a remote storage service, which is intentionally simplistic to allow users to customize how their data is stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects (similar, but not identical, to the standard concept of a file). Advanced file management is made possible by the ability to use nearly any character (including '/') in an object's name, and the ability to select subsets of a bucket's content based on shared prefixes.
Implementations
An operating system (OS) typically supports one or more file systems. Sometimes an OS and its file system are so tightly interwoven that it is difficult to describe them independently.
An OS typically provides file system access to the user. Often an OS provides a command line interface, such as the Unix shell, Windows Command Prompt and PowerShell, or OpenVMS DCL. An OS often also provides graphical user interface file browsers such as the macOS Finder and Windows File Explorer.
Unix and Unix-like operating systems
Unix-like operating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means, in those systems, there is one root directory, and every file existing on the system is located under it somewhere. Unix-like systems can use a RAM disk or network shared resource as their root directory.
Unix-like systems assign a device name to each device, but this is not how the files on that device are accessed. Instead, to gain access to files on another device, the operating system must first be informed where in the directory tree those files should appear. This process is called mounting a file system. For example, to access the files on a CD-ROM, one must tell the operating system "Take the file system from this CD-ROM and make it appear under such-and-such directory." The directory given to the operating system is called the mount point – it might, for example, be /media. The /media directory exists on many Unix systems (as specified in the Filesystem Hierarchy Standard) and is intended specifically for use as a mount point for removable media such as CDs, DVDs, USB drives or floppy disks. It may be empty, or it may contain subdirectories for mounting individual devices. Generally, only the administrator (i.e. root user) may authorize the mounting of file systems.
Unix-like operating systems often include software and tools that assist in the mounting process and provide new functionality. Some of these strategies have been coined "auto-mounting" as a reflection of their purpose.
In many situations, file systems other than the root need to be available as soon as the operating system has booted. All Unix-like systems therefore provide a facility for mounting file systems at boot time. System administrators define these file systems in the configuration file fstab (vfstab in Solaris), which also indicates options and mount points.
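An illustrative fstab fragment (the UUID and device names are hypothetical); the six columns are the device, the mount point, the file system type, mount options, the dump flag, and the fsck pass number:

```
# <device>                                  <mount point> <type> <options> <dump> <pass>
UUID=0a3f9e62-1c4b-4d7e-9f10-2b8c5d6e7f80   /             ext4   defaults  0      1
/dev/sdb1                                   /home         xfs    defaults  0      2
```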
In some situations, there is no need to mount certain file systems at boot time, although their use may be desired thereafter. There are some utilities for Unix-like systems that allow the mounting of predefined file systems upon demand.
Removable media allow programs and data to be transferred between machines without a physical connection. Common examples include USB flash drives, CD-ROMs, and DVDs. Utilities have therefore been developed to detect the presence and availability of a medium and then mount that medium without any user intervention.
Progressive Unix-like systems have also introduced a concept called supermounting; see, for example, the Linux supermount-ng project. For example, a floppy disk that has been supermounted can be physically removed from the system. Under normal circumstances, the disk should have been synchronized and then unmounted before its removal. Provided synchronization has occurred, a different disk can be inserted into the drive. The system automatically notices that the disk has changed and updates the mount point contents to reflect the new medium.
An automounter will automatically mount a file system when a reference is made to the directory atop which it should be mounted. This is usually used for file systems on network servers, rather than relying on events such as the insertion of media, as would be appropriate for removable media.
Linux
Linux supports numerous file systems, but common choices for the system disk on a block device include the ext* family (ext2, ext3 and ext4), XFS, JFS, and btrfs. For raw flash without a flash translation layer (FTL) or Memory Technology Device (MTD), there are UBIFS, JFFS2 and YAFFS, among others. SquashFS is a common compressed read-only file system.
Solaris
Solaris in earlier releases defaulted to (non-journaled or non-logging) UFS for bootable and supplementary file systems; over time Solaris supported and extended UFS.
Support for other file systems and significant enhancements were added over time, including Veritas Software Corp. (journaling) VxFS, Sun Microsystems (clustering) QFS, Sun Microsystems (journaling) UFS, and Sun Microsystems (open source, poolable, 128 bit compressible, and error-correcting) ZFS.
Kernel extensions were added to Solaris to allow for bootable Veritas VxFS operation. Logging or journaling was added to UFS in Sun's Solaris 7. Releases of Solaris 10, Solaris Express, OpenSolaris, and other open source variants of the Solaris operating system later supported bootable ZFS.
Logical Volume Management allows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Legacy environments in Solaris may use Solaris Volume Manager (formerly known as Solstice DiskSuite). Multiple operating systems (including Solaris) may use Veritas Volume Manager. Modern Solaris-based operating systems obviate the need for volume management by leveraging virtual storage pools in ZFS.
macOS
macOS (formerly Mac OS X) uses the Apple File System (APFS), which in 2017 replaced a file system inherited from classic Mac OS called HFS Plus (HFS+). Apple also uses the term "Mac OS Extended" for HFS+. HFS Plus is a metadata-rich and case-preserving but (usually) case-insensitive file system. Due to the Unix roots of macOS, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter.
File names can be up to 255 characters. HFS Plus uses Unicode to store file names. On macOS, the filetype can come from the type code, stored in the file's metadata, or from the filename extension.
HFS Plus has three kinds of links: Unix-style hard links, Unix-style symbolic links, and aliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code in userland.
macOS 10.13 High Sierra, which was announced on June 5, 2017, at Apple's WWDC event, uses the Apple File System on solid-state drives.
macOS also supported the UFS file system, derived from the BSD Unix Fast File System via NeXTSTEP. However, as of Mac OS X Leopard, macOS could no longer be installed on a UFS volume, nor could a pre-Leopard system installed on a UFS volume be upgraded to Leopard. As of Mac OS X Lion, UFS support was completely dropped.
Newer versions of macOS are capable of reading and writing to the legacy FAT file systems (16 and 32) common on Windows. They are also capable of reading the newer NTFS file systems for Windows. In order to write to NTFS file systems on macOS versions prior to Mac OS X Snow Leopard, third-party software is necessary. Mac OS X 10.6 (Snow Leopard) and later allow writing to NTFS file systems, but only after a non-trivial system setting change (third-party software exists that automates this).
Finally, macOS has supported reading and writing of the exFAT file system since Mac OS X Snow Leopard, starting from version 10.6.5.
OS/2
OS/2 1.2 introduced the High Performance File System (HPFS). HPFS supports mixed case file names in different code pages, long file names (255 characters), more efficient use of disk space, an architecture that keeps related items close to each other on the disk volume, less fragmentation of data, extent-based space allocation, a B+ tree structure for directories, and the root directory located at the midpoint of the disk, for faster average access. A journaled filesystem (JFS) was shipped in 1999.
PC-BSD
PC-BSD is a desktop version of FreeBSD, which inherits FreeBSD's ZFS support, similarly to FreeNAS. The graphical installer of PC-BSD can handle / (root) on ZFS and RAID-Z pool installs and disk encryption using Geli right from the start in an easy, convenient (GUI) way. PC-BSD 9.0+ 'Isotope Edition' has ZFS filesystem version 5 and ZFS storage pool version 28.
Plan 9
Plan 9 from Bell Labs treats everything as a file and accesses all objects as a file would be accessed (i.e., there is no ioctl or mmap): networking, graphics, debugging, authentication, capabilities, encryption, and other services are accessed via I/O operations on file descriptors. The 9P protocol removes the difference between local and remote files. File systems in Plan 9 are organized with the help of private, per-process namespaces, allowing each process to have a different view of the many file systems that provide resources in a distributed system.
The Inferno operating system shares these concepts with Plan 9.
Microsoft Windows
Windows makes use of the FAT, NTFS, exFAT, Live File System and ReFS file systems (the last of these is only supported and usable in Windows Server 2012, Windows Server 2016, Windows 8, Windows 8.1, and Windows 10; Windows cannot boot from it).
Windows uses a drive letter abstraction at the user level to distinguish one disk or partition from another. For example, the path C:\WINDOWS represents a directory WINDOWS on the partition represented by the letter C. Drive C: is most commonly used for the primary hard disk drive partition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs exist in many applications which make assumptions that the drive that the operating system is installed on is C. The use of drive letters, and the tradition of using "C" as the drive letter for the primary hard disk drive partition, can be traced to MS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived from CP/M in the 1970s, and ultimately from IBM's CP/CMS of 1967.
FAT
The family of FAT file systems is supported by almost all operating systems for personal computers, including all versions of Windows and MS-DOS/PC DOS, OS/2, and DR-DOS. (PC DOS is an OEM version of MS-DOS, MS-DOS was originally based on SCP's 86-DOS. DR-DOS was based on Digital Research's Concurrent DOS, a successor of CP/M-86.) The FAT file systems are therefore well-suited as a universal exchange format between computers and devices of most any type and age.
The FAT file system traces its roots back to an (incompatible) 8-bit FAT precursor in Standalone Disk BASIC and the short-lived MDOS/MIDAS project.
Over the years, the file system has been expanded from FAT12 to FAT16 and FAT32. Various features have been added to the file system including subdirectories, codepage support, extended attributes, and long filenames. Third parties such as Digital Research have incorporated optional support for deletion tracking, and volume/directory/file-based multi-user security schemes to support file and directory passwords and permissions such as read/write/execute/delete access rights. Most of these extensions are not supported by Windows.
The FAT12 and FAT16 file systems had a limit on the number of entries in the root directory of the file system and had restrictions on the maximum size of FAT-formatted disks or partitions.
FAT32 addresses the limitations in FAT12 and FAT16, except for the file size limit of close to 4 GB, but it remains limited compared to NTFS.
FAT12, FAT16 and FAT32 also have a limit of eight characters for the file name, and three characters for the extension (such as .exe). This is commonly referred to as the 8.3 filename limit. VFAT, an optional extension to FAT12, FAT16 and FAT32, introduced in Windows 95 and Windows NT 3.5, allowed long file names (LFN) to be stored in the FAT file system in a backwards compatible fashion.
NTFS
NTFS, introduced with the Windows NT operating system in 1993, allowed ACL-based permission control. Other features also supported by NTFS include hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount-points for other file systems, symlinks, junctions, remote storage links).
exFAT
exFAT has certain advantages over NTFS with regard to file system overhead.
exFAT is not backward compatible with FAT file systems such as FAT12, FAT16 or FAT32. The file system is supported by newer Windows systems, such as Windows XP, Windows Server 2003, Windows Vista, Windows 2008, Windows 7, Windows 8, Windows 8.1, Windows 10 and Windows 11.
exFAT is supported in macOS starting with version 10.6.5 (Snow Leopard). Support in other operating systems is sparse since implementing support for exFAT requires a license. exFAT is the only file system that is fully supported on both macOS and Windows that can hold files larger than 4 GB.
OpenVMS
MVS
Prior to the introduction of VSAM, OS/360 systems implemented a hybrid file system. The system was designed to easily support removable disk packs, so the information relating to all files on one disk (volume in IBM terminology) is stored on that disk in a flat system file called the Volume Table of Contents (VTOC). The VTOC stores all metadata for the file. Later a hierarchical directory structure was imposed with the introduction of the System Catalog, which can optionally catalog files (datasets) on resident and removable volumes. The catalog only contains information to relate a dataset to a specific volume. If the user requests access to a dataset on an offline volume, and they have suitable privileges, the system will attempt to mount the required volume. Cataloged and non-cataloged datasets can still be accessed using information in the VTOC, bypassing the catalog, if the required volume id is provided to the OPEN request. Still later the VTOC was indexed to speed up access.
Conversational Monitor System
The IBM Conversational Monitor System (CMS) component of VM/370 uses a separate flat file system for each virtual disk (minidisk). File data and control information are scattered and intermixed. The anchor is a record called the Master File Directory (MFD), always located in the fourth block on the disk. Originally CMS used fixed-length 800-byte blocks, but later versions used larger blocks of up to 4K. Access to a data record requires two levels of indirection, where the file's directory entry (called a File Status Table (FST) entry) points to blocks containing a list of addresses of the individual records.
AS/400 file system
Data on the AS/400 and its successors consists of system objects mapped into the system virtual address space in a single-level store. Many types of objects are defined including the directories and files found in other file systems. File objects, along with other types of objects, form the basis of the AS/400's support for an integrated relational database.
Other file systems
The Prospero File System is a file system based on the Virtual System Model. The system was created by B. Clifford Neuman of the Information Sciences Institute at the University of Southern California.
RSRE FLEX file system - written in ALGOL 68
The file system of the Michigan Terminal System (MTS) is interesting because: (i) it provides "line files" where record lengths and line numbers are associated as metadata with each record in the file; lines can be added, replaced, updated with records of the same or different length, and deleted anywhere in the file without the need to read and rewrite the entire file; (ii) using program keys, files may be shared with, or permitted to, commands and programs in addition to users and groups; and (iii) there is a comprehensive file locking mechanism that protects both the file's data and its metadata.
TempleOS uses RedSea, a file system made by Terry A. Davis.
Limitations
Design limitations
File systems limit storable data capacity; the limits are generally driven by the typical size of storage devices at the time the file system is designed and by anticipated device sizes for the foreseeable future.
Since storage sizes have increased at a near-exponential rate (see Moore's law), newer storage devices often exceed existing file system limits within only a few years after introduction. This requires new file systems with ever-increasing capacity limits.
With higher capacity, the need for capabilities and therefore complexity increases as well. File system complexity typically varies proportionally with available storage capacity. Capacity issues aside, the file systems of early 1980s home computers with 50 KB to 512 KB of storage would not be a reasonable choice for modern storage systems with hundreds of gigabytes of capacity. Likewise, modern file systems would not be a reasonable choice for these early systems, since the complexity of modern file system structures would quickly consume the limited capacity of early storage systems.
Converting the type of a file system
It may be advantageous or necessary to have files in a different file system from the one in which they currently exist. Reasons include the need for more space than the limits of the current file system allow, a need for path depth beyond the restrictions of the file system, performance or reliability considerations, and providing access to another operating system which does not support the existing file system.
In-place conversion
In some cases conversion can be done in-place, although migrating the file system is more conservative, as it involves creating a copy of the data, and is recommended. On Windows, FAT and FAT32 file systems can be converted to NTFS via the convert.exe utility, but not the reverse. On Linux, ext2 can be converted to ext3 (and converted back), and ext3 can be converted to ext4 (but not back); both ext3 and ext4 can be converted to btrfs, and converted back until the undo information is deleted. These conversions are possible because the same format is used for the file data itself, while the metadata is relocated into empty space, in some cases using sparse file support.
Migrating to a different file system
Migration has the disadvantage of requiring additional space although it may be faster. The best case is if there is unused space on media which will contain the final file system.
For example, to migrate a FAT32 file system to an ext2 file system, a new ext2 file system is created. Then the data from the FAT32 file system is copied to the ext2 one, and the old file system is deleted.
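The copy step can be as simple as a recursive tree copy once both file systems are mounted. A minimal Python sketch, assuming the hypothetical mount points /mnt/old (FAT32) and /mnt/new (ext2); note that any metadata the target cannot represent is lost in the copy:

```python
import shutil

# Copy every directory and file from the old volume to the new one.
shutil.copytree("/mnt/old", "/mnt/new", dirs_exist_ok=True)
# Only after verifying the copy should the old file system be deleted.
```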
An alternative, when there is not sufficient space to retain the original file system until the new one is created, is to use a work area (such as a removable media). This takes longer but has the benefit of producing a backup.
Long file paths and long file names
In hierarchical file systems, files are accessed by means of a path that is a branching list of directories containing the file. Different file systems have different limits on the depth of the path. File systems also have a limit on the length of an individual file name.
Copying files with long names or located in paths of significant depth from one file system to another may cause undesirable results. This depends on how the utility doing the copying handles the discrepancy.
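One defensive approach is to scan for offending names before copying. A Python sketch; the 255-byte name limit is common on Linux file systems, while the 4,096-byte total-path cap is an assumption that varies by system:

```python
import os

NAME_MAX = 255    # per-component limit on many Linux file systems
PATH_MAX = 4096   # assumed total-path limit; check the actual target

def too_long(root: str):
    """Yield paths that may not survive a copy to the target file system."""
    for dirpath, dirs, files in os.walk(root):
        for name in dirs + files:
            full = os.path.join(dirpath, name)
            if len(name.encode()) > NAME_MAX or len(full.encode()) > PATH_MAX:
                yield full

for path in too_long("/mnt/source"):  # hypothetical source mount point
    print("would not fit:", path)
```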
| Technology | Data storage | null |
1010583 | https://en.wikipedia.org/wiki/Crate | Crate | A crate is a large shipping container, often made of wood, typically used to transport or store large, heavy items. Steel and aluminium crates are also used. Specialized crates were designed for specific products, and were often made to be reusable, such as the "bottle crates" for milk and soft drinks.
Crates can be made of wood, plastic, metal or other materials. The term crate often implies a large and strong container. Most plastic crates are smaller and are more commonly called a case or container. Metal is rarely used because of its weight. When metal is used, a crate is often constructed as an open crate and may be termed a cage. Although a crate may be made of any material, for these reasons, the term 'crate' used alone often implies one constructed of wood.
Wooden crates
A wooden crate has a self-supporting structure, with or without sheathing. For a wooden container to be a crate, all six of its sides must be put in place to result in the rated strength of the container. Crates are distinct from wooden boxes. The strength of a wooden box is rated based on the weight it can carry before the top (top, ends, and sides) is installed, whereas the strength of a crate is rated with the top in place. In general conversation, the term crate is sometimes used to denote a wooden box.
History
Crates have been used for many years, without a clear origin in documented history. Modern crates from the early 20th century demonstrate an already well-evolved technology, with practical and economic considerations built into crate designs. Moving heavy products such as enamelled cast iron sinks, bath tubs, and lavatories was often done without any packaging prior to 1910, which led to losses of nearly 20% due to chipping of the enamel in shipping. Some manufacturers assumed that protecting the product in rugged crating would reduce their losses; however, they found that railroad and shipping workers handled heavy crates much more roughly, and losses actually increased. The technological solution was to pack enameled bath ware into open crates, which allowed the shipment to be lighter and cheaper, the handlers to use more precautions knowing what merchandise was being shipped, and the customer to inspect the purchase at arrival before opening it. Another early documented reference to a shipping crate in the United States is in a 1930 handbook, Technical Bulletin No. 171, written by C. A. Plaskett for the U.S. Department of Agriculture. Plaskett was known for his extensive testing and defining of various components of transport packaging.
The USDA Forest Service revised and expanded it in 1964 as the "Wood Crate Design Manual", Handbook 252.
Construction
Although the definition of a wooden crate, as compared to a wooden box, is clear, construction of the two often results in a container that is not clearly a crate or a box. Because both wooden crates and wooden boxes are constructed to contain unique items, the design of either may use principles from both. In this case, the container will typically be defined by how its edges and corners are constructed. If the sheathing (either plywood or lumber) can be removed and a framed structure will remain standing, the container would likely be termed a crate. If removal of the sheathing results in no way of fastening the lumber around the edges of the container, the container would likely be termed a wooden box.
Design
There are many variations of wooden crate designs. By far the most common are 'closed', 'open' and 'framed'. A Closed Crate is one that is completely or nearly completely enclosed with material such as plywood or lumber boards. When lumber is used, gaps are often left between the boards to allow for expansion. An Open Crate is one that (typically) uses lumber for sheathing. The sheathing is typically gapped at various distances. There is no strict definition of an open crate as compared to a closed crate. Typically when the gap between boards is greater than the distance required for expansion, the crate would be considered an open crate. The gap between boards would typically not be greater than the width of the sheathing boards. When the gap is larger, the boards are often considered 'cleats' rather than sheathing, thus rendering the crate unsheathed. An unsheathed crate is a frame crate. A Frame Crate is one that only contains a skeletal structure and no material is added for surface or pilferage protection. Typically an open crate will be constructed of 12 pieces of lumber, one along each outer edge of the contents, with more lumber placed diagonally to avoid distortion from torque.
When any type of crate reaches a certain size, more boards may be added. These boards are often called cleats. A cleat is used to provide support to a panel when that panel has reached a size that may require added support based on the method of transportation. Cleats may be placed anywhere between the edges of a given panel. On crates, cleat placement is often determined by the width of the plywood used on plywood sheathed crates. On other crates, cleats are often evenly spaced as required to strengthen the panel. Sometimes two cleats are added across the top panel of a crate placed as needed to give the top of the crate added strength where lifting chains or straps may press on the crate while lifting. When the dimensions of a crate side necessitate more than one piece of plywood be used in that crate side's construction, additional boards called 'battens' are used to cover and provide support to the seams between abutting pieces of plywood. Battens are typically wider than cleats, but do not need to be so.
Cleats may have more specific names based on the added benefit they provide. Some published standards only use those more descriptive terms and may never refer to these various lumber components as cleats. For example, lumber placed under the top of a wood container to add support for a large top is called a "joist". Lumber built into the midsection of the top of a wood container to strengthen the top is called a "cleat". When cleats are enlarged and constructed to support a large top, they may generically be termed "cleats" or more specifically be termed "joists".
"Skids" or thick bottom runners, are sometimes specified to allow forklift trucks access for lifting.
Transportation methods and storage conditions must always be considered when designing a crate. Every step of the transportation chain will result in different stresses from shock and vibration. Differences in pressure, temperature and humidity may not only adversely affect the content of the crate, but will also affect the holding strength of the fasteners (mostly the nails and staples) in the crate. In some countries, any wooden crate designed to ship overseas must be treated to ISPM 15 standards, commonly known as the "bug stamp", to prevent the spread of disease and insects.
Although the above definition almost always holds true, there are many slightly altered 'sub-definitions' in use by various organizations, agencies and documents. This is the result of the small size of the industry and the fact that an item that is different every time it is made is difficult to capture in a single, finite definition.
IATA, the International Air Transport Association, for example, does not allow crates on airplanes because it defines a crate as an open transport container. Although a crate can be of the Open or Framed variety, having no sheathing, a Closed crate is not open and is just as safe to ship in as a wooden box, which is allowed by IATA.
Other crates
Milk crates and bottle crates are a form of reusable packaging used to ship to retail stores and to return empty bottles to the bottler or packager. These are usually moulded plastic designs expected to make several round trip shipments. Wood structures are also used.
| Technology | Containers | null |
1010773 | https://en.wikipedia.org/wiki/Heiligenschein | Heiligenschein | Heiligenschein (German for "halo") is an optical phenomenon in which a bright spot appears around the shadow of the viewer's head in the presence of dew. In photogrammetry and remote sensing, it is more commonly known as the hotspot. It is also occasionally known as Cellini's halo after the Italian artist and writer Benvenuto Cellini (1500–1571), who described the phenomenon in his memoirs in 1562.
Nearly spherical dew droplets act as lenses to focus the light onto the surface behind them. When this light scatters or reflects off that surface, the same lens re-focuses that light into the direction from which it came. This configuration is similar to a cat's eye retroreflector. However, a cat's eye retroreflector needs a refractive index of around 2, while water has a much smaller refractive index of approximately 1.33. This means that the water droplets focus the light about 20% to 50% of the diameter beyond the rear surface of the droplet. When dew droplets are suspended on trichomes at approximately this distance away from the surface of a plant, the combination of droplet and plant acts as a retroreflector. Any retroreflective surface is brightest around the antisolar point.
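These figures can be checked against the standard paraxial ball-lens formula, f = nR / (2(n − 1)), measured from the sphere's center, which puts the focus f − R beyond the rear surface. A short Python sketch (paraxial rays only, so it reproduces the 50% end of the quoted range; rays far from the axis focus closer, giving the 20% end):

```python
def back_focus_in_diameters(n: float) -> float:
    # Paraxial ball lens in air: focal length from center is f = n*R / (2*(n-1)).
    f_over_r = n / (2 * (n - 1))
    return (f_over_r - 1) / 2  # distance past the rear surface, in diameters

print(back_focus_in_diameters(1.33))  # ~0.51: about half a diameter behind the drop
print(back_focus_in_diameters(2.0))   # 0.0: focus on the rear surface (cat's eye)
```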
Opposition surges caused by particles other than water, and the glory seen in water vapour, are similar effects caused by different mechanisms.
| Physical sciences | Atmospheric optics | Earth science |
1011848 | https://en.wikipedia.org/wiki/Fixed-point%20theorem | Fixed-point theorem | In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms.
In mathematical analysis
The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point.
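In its standard formulation: if (X, d) is a complete metric space and F: X → X is a contraction, that is, there exists a constant 0 ≤ q < 1 with
\[
d\bigl(F(x), F(y)\bigr) \le q \, d(x, y) \quad \text{for all } x, y \in X,
\]
then F has a unique fixed point x*, and the iterates x_{n+1} = F(x_n) from any starting point x_0 converge to it, with the a priori error bound
\[
d(x_n, x^*) \le \frac{q^n}{1 - q} \, d(x_1, x_0).
\]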
By contrast, the Brouwer fixed-point theorem (1911) is a non-constructive result: it says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point (see also Sperner's lemma).
For example, the cosine function is continuous in [−1, 1] and maps it into [−1, 1], and thus must have a fixed point. This is clear when examining a sketched graph of the cosine function; the fixed point occurs where the cosine curve y = cos(x) intersects the line y = x. Numerically, the fixed point (known as the Dottie number) is approximately x = 0.73908513321516 (thus x = cos(x) for this value of x).
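A short Python sketch of this iteration; by Banach's theorem it converges from any starting value, since cos maps the real line into [−1, 1], where |cos′(x)| = |sin(x)| ≤ sin(1) < 1 makes it a contraction:

```python
import math

x = 0.0
for _ in range(100):
    x = math.cos(x)  # repeatedly apply F(x) = cos(x)

print(x)                # 0.7390851332151607  (the Dottie number)
print(math.cos(x) - x)  # ~0.0: x is, numerically, a fixed point
```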
The Lefschetz fixed-point theorem (and the Nielsen fixed-point theorem) from algebraic topology is notable because it gives, in some sense, a way to count fixed points.
There are a number of generalisations of the Banach fixed-point theorem and further results; these are applied in PDE theory. See fixed-point theorems in infinite-dimensional spaces.
The collage theorem in fractal compression proves that, for many images, there exists a relatively small description of a function that, when iteratively applied to any starting image, rapidly converges on the desired image.
In algebra and discrete mathematics
The Knaster–Tarski theorem states that any order-preserving function on a complete lattice has a fixed point, and indeed a smallest fixed point. | Mathematics | Functions: General | null |
1012996 | https://en.wikipedia.org/wiki/Helium-4 | Helium-4 | Helium-4 () is a stable isotope of the element helium. It is by far the more abundant of the two naturally occurring isotopes of helium, making up about 99.99986% of the helium on Earth. Its nucleus is identical to an alpha particle, and consists of two protons and two neutrons.
Helium-4 makes up about one quarter of the ordinary matter in the universe by mass, with almost all of the rest being hydrogen. While nuclear fusion in stars also produces helium-4, most of the helium-4 in the Sun and in the universe is thought to have been produced during the Big Bang, known as "primordial helium". However, primordial helium-4 is largely absent from the Earth, having escaped during the high-temperature phase of Earth's formation. On Earth, most naturally occurring helium-4 is produced by the alpha decay of heavy elements in the Earth's crust, after the planet cooled and solidified.
When liquid helium-4 is cooled to below 2.17 K (its lambda point), it becomes a superfluid, with properties very different from those of an ordinary liquid. For example, if superfluid helium-4 is placed in an open vessel, a thin Rollin film will climb the sides of the vessel, causing the liquid to escape. The total spin of the helium-4 nucleus is an integer (zero), making it a boson. The superfluid behavior is a manifestation of Bose–Einstein condensation, which occurs only in collections of bosons.
It is theorized that at 0.2 K and 50 atm, solid helium-4 may be a superglass (an amorphous solid exhibiting superfluidity).
The helium-4 atom
The helium atom is the second simplest atom (hydrogen is the simplest), but the extra electron introduces a third "body", so its wave equation becomes a "three-body problem", which has no analytic solution. However, numerical approximations of the equations of quantum mechanics have given a good estimate of the key atomic properties of helium-4, such as its size and ionization energy.
The size of the 4He nucleus has long been known to be on the order of magnitude of 1 fm. In an experiment involving the use of exotic helium atoms in which an atomic electron was replaced by a muon, the nucleus size has been estimated to be 1.67824(83) fm.
Stability of the 4He nucleus and electron shell
The nucleus of the helium-4 atom has a type of stability called doubly magic. High-energy electron-scattering experiments show its charge to decrease exponentially from a maximum at a central point, exactly as does the charge density of helium's own electron cloud. This symmetry reflects similar underlying physics: the pair of neutrons and the pair of protons in helium's nucleus obey the same quantum mechanical rules as do helium's pair of electrons (although the nuclear particles are subject to a different nuclear binding potential), so that all these fermions fully occupy 1s orbitals in pairs, none of them possessing orbital angular momentum, and each canceling the other's intrinsic spin. Adding another of any of these particles would require angular momentum, and would release substantially less energy (in fact, no nucleus with five nucleons is stable). This arrangement is thus energetically extremely stable for all these particles, and this stability accounts for many crucial facts regarding helium in nature.
For example, the stability and low energy of the electron cloud of helium causes helium's chemical inertness (the most extreme of all the elements), and also the lack of interaction of helium atoms with each other (producing the lowest melting and boiling points of all the elements).
In a similar way, the particular energetic stability of the helium-4 nucleus, produced by similar effects, accounts for the ease of helium-4 production in atomic reactions involving both heavy-particle emission and fusion. Some stable helium-3 is produced in fusion reactions from hydrogen, but it is a very small fraction, compared with the highly energetically favorable production of helium-4. The stability of helium-4 is the reason that hydrogen is converted to helium-4, and not deuterium (hydrogen-2) or helium-3 or other heavier elements during fusion reactions in the Sun. It is also partly responsible for the alpha particle being by far the most common type of baryonic particle to be ejected from an atomic nucleus; in other words, alpha decay is far more common than cluster decay.
The unusual stability of the helium-4 nucleus is also important cosmologically. It explains the fact that, in the first few minutes after the Big Bang, as the "soup" of free protons and neutrons which had initially been created in about a 6:1 ratio cooled to the point where nuclear binding was possible, almost all of the atomic nuclei that formed were helium-4 nuclei. The binding of the nucleons in helium-4 is so tight that its production consumed nearly all the free neutrons in a few minutes, before they could beta decay, and left very few to form heavier atoms (especially lithium, beryllium, and boron). The energy of helium-4 nuclear binding per nucleon is stronger than in any of those elements (see nucleogenesis and binding energy), and thus no energetic "drive" was available to make elements 3, 4, and 5 once helium had been formed. It is barely energetically favorable for helium to fuse into the next element with a higher energy per nucleon (carbon). However, due to the rarity of intermediate elements and the extreme instability of beryllium-8 (the product when two 4He nuclei fuse), this process requires three helium nuclei striking each other nearly simultaneously (see triple-alpha process). There was thus no time for significant carbon to be formed in the few minutes after the Big Bang, before the early expanding universe cooled to the temperature and pressure at which helium fusion to carbon was no longer possible. This left the early universe with a hydrogen–helium ratio very similar to that observed today (3 parts hydrogen to 1 part helium-4 by mass), with nearly all the neutrons in the universe trapped in helium-4.
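The binding energy behind this argument follows from the mass defect. A short Python check using rounded standard particle masses (the numerical inputs are the only assumptions):

```python
# Masses in unified atomic mass units (u), rounded CODATA values.
m_proton  = 1.007276
m_neutron = 1.008665
m_he4     = 4.001506   # helium-4 nucleus (atomic mass minus two electrons)
U_TO_MEV  = 931.494    # energy equivalent of 1 u

mass_defect = 2 * m_proton + 2 * m_neutron - m_he4
binding = mass_defect * U_TO_MEV
print(f"{binding:.1f} MeV total, {binding / 4:.2f} MeV per nucleon")
# -> about 28.3 MeV total, ~7.07 MeV per nucleon
```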
All heavier elements—including those necessary for rocky planets like the Earth, and for carbon-based or other life—thus had to be produced, since the Big Bang, in stars which were hot enough to fuse elements heavier than hydrogen. All elements other than hydrogen and helium today account for only 2% of the mass of atomic matter in the universe. Helium-4, by contrast, makes up about 23% of the universe's ordinary matter—nearly all the ordinary matter that is not hydrogen (1H).
| Physical sciences | s-Block | Chemistry |
1014366 | https://en.wikipedia.org/wiki/Potassium%20iodide | Potassium iodide | Potassium iodide is a chemical compound, medication, and dietary supplement. It is a medication used for treating hyperthyroidism, in radiation emergencies, and for protecting the thyroid gland when certain types of radiopharmaceuticals are used. It is also used for treating skin sporotrichosis and phycomycosis. It is a supplement used by people with low dietary intake of iodine. It is administered orally.
Common side effects include vomiting, diarrhea, abdominal pain, rash, and swelling of the salivary glands. Other side effects include allergic reactions, headache, goitre, and depression. While use during pregnancy may harm the baby, its use is still recommended in radiation emergencies. Potassium iodide has the chemical formula KI. Commercially it is made by mixing potassium hydroxide with iodine.
Potassium iodide has been used medically since at least 1820. It is on the World Health Organization's List of Essential Medicines. Potassium iodide is available as a generic medication and over the counter. Potassium iodide is also used for the iodization of salt.
Medical uses
Dietary supplement
Potassium iodide is a nutritional supplement in animal feeds and also in the human diet. In humans it is the most common additive used for iodizing table salt (a public health measure to prevent iodine deficiency in populations that get little seafood). The oxidation of iodide causes slow loss of iodine content from iodized salts that are exposed to excess air. The alkali metal iodide salt, over time and exposure to excess oxygen and carbon dioxide, slowly oxidizes to metal carbonate and elemental iodine, which then evaporates. Potassium iodate (KIO3) is used to iodize some salts so that the iodine is not lost by oxidation. Dextrose or sodium thiosulfate are often added to iodized table salt to stabilize potassium iodide, thus reducing loss of the volatile chemical.
Thyroid protection in nuclear accidents
Thyroid iodine uptake blockade with potassium iodide is used in nuclear medicine scintigraphy and therapy with some radioiodinated compounds that are not targeted to the thyroid, such as iobenguane (MIBG), which is used to image or treat neural tissue tumors, or iodinated fibrinogen, which is used in fibrinogen scans to investigate clotting. These compounds contain iodine, but not in the iodide form. Since they may be ultimately metabolized or break down to radioactive iodide, it is common to administer non-radioactive potassium iodide to ensure that iodide from these radiopharmaceuticals is not sequestered by the normal affinity of the thyroid for iodide.
The World Health Organization (WHO) provides guidelines for potassium iodide use following a nuclear accident. The dosage of potassium iodide is age-dependent: neonates (<1 month) require 16 mg/day; children aged 1 month to 3 years need 32 mg/day; those aged 3–12 years need 65 mg/day; and individuals over 12 years and adults require 130 mg/day. These dosages list the mass of potassium iodide rather than elemental iodine. Potassium iodide can be administered as tablets or as Lugol's iodine solution. The same dosage is recommended by the US Food and Drug Administration. A single daily dose is typically sufficient for 24-hour protection. However, in cases of prolonged or repeated exposure, health authorities may recommend multiple daily doses. Priority for prophylaxis is given to the most sensitive groups: pregnant and breastfeeding women, infants, and children under 18 years. The recommended doses of potassium iodide, which contains a stable isotope of iodine, protect only the thyroid gland from radioactive iodine; they do not offer protection against other radioactive substances. Some sources recommend alternative dosing regimens.
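As an illustration only (not from the source; the function name and interface are hypothetical), the age bands above map naturally onto a small lookup:

```python
# Minimal sketch of the WHO age-banded dosing quoted above.
# Doses are mg of potassium iodide (not elemental iodine) per day.

def ki_dose_mg(age_years: float) -> int:
    """Return the recommended daily KI dose in mg for a given age."""
    if age_years < 1 / 12:   # neonates (< 1 month)
        return 16
    if age_years < 3:        # 1 month to 3 years
        return 32
    if age_years <= 12:      # 3 to 12 years
        return 65
    return 130               # over 12 years and adults

assert ki_dose_mg(0.05) == 16
assert ki_dose_mg(8) == 65
assert ki_dose_mg(35) == 130
```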
Not all sources are in agreement on the necessary duration of thyroid blockade, although agreement appears to have been reached about the necessity of blockade for both scintigraphic and therapeutic applications of iobenguane. Commercially available iobenguane is labeled with iodine-123, and product labeling recommends administration of potassium iodide 1 hour prior to administration of the radiopharmaceutical for all age groups. The European Association of Nuclear Medicine recommends (for iobenguane labeled with either isotope) that potassium iodide administration begin one day prior to radiopharmaceutical administration and continue until the day following the injection, with the exception of newborns, who do not require potassium iodide doses following radiopharmaceutical injection.
Product labeling for diagnostic iodine-131 iobenguane recommends potassium iodide administration one day before injection and continuing 5 to 7 days following administration, in keeping with the much longer half-life of this isotope and its greater danger to the thyroid. Iodine-131 iobenguane used for therapeutic purposes requires a different pre-medication duration, beginning 24–48 hours prior to iobenguane injection and continuing 10–15 days following injection.
In 1982, the U.S. Food and Drug Administration approved potassium iodide to protect thyroid glands from radioactive iodine released in accidents or fission emergencies. In an accidental event or attack on a nuclear power plant, or in nuclear bomb fallout, volatile fission product radionuclides may be released. Of these products, iodine-131 is one of the most common and is particularly dangerous to the thyroid gland because it may lead to thyroid cancer. By saturating the body with a source of stable iodide prior to exposure, inhaled or ingested radioiodine tends to be excreted, which prevents radioiodine uptake by the thyroid. According to one 2000 study "KI administered up to 48 h before exposure can almost completely block thyroid uptake and therefore greatly reduce the thyroid absorbed dose. However, KI administration 96 h or more before exposure has no significant protective effect. In contrast, KI administration after exposure to radioiodine induces a smaller and rapidly decreasing blockade effect." According to the FDA, KI should not be taken as a preventative before radiation exposure. Since KI protects for approximately 24 hours, it must be dosed daily until a risk of significant exposure to radioiodine no longer exists.
Emergency 130 milligrams potassium iodide doses provide 100 mg iodide (the other 30 mg is the potassium in the compound), which is roughly 700 times larger than the normal nutritional need (see recommended dietary allowance) for iodine, which is 150 micrograms (0.15 mg) of iodine (as iodide) per day for an adult. A typical tablet weighs 160 mg, with 130 mg of potassium iodide and 30 mg of excipients, such as binding agents.
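A quick arithmetic check of these figures (an illustrative sketch, not from the source), using standard molar masses:

```python
# What fraction of a 130 mg KI tablet is iodide, and how does that
# compare with the adult daily nutritional requirement?

M_K = 39.10    # g/mol, potassium
M_I = 126.90   # g/mol, iodine

iodide_fraction = M_I / (M_K + M_I)         # ~0.764 of KI by mass

tablet_ki_mg = 130.0
iodide_mg = tablet_ki_mg * iodide_fraction  # ~99 mg, i.e. roughly 100 mg

rda_mg = 0.150                              # adult daily iodine requirement
print(f"iodide per tablet: {iodide_mg:.0f} mg")
print(f"multiple of daily need: {iodide_mg / rda_mg:.0f}x")  # ~660, "roughly 700"
```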
Potassium iodide cannot protect against any other mechanisms of radiation poisoning, nor can it provide any degree of protection against dirty bombs that produce radionuclides other than those of iodine.
The potassium iodide in iodized salt is insufficient for this use. A likely lethal dose of salt (more than a kilogram) would be needed to equal the potassium iodide in one tablet.
The World Health Organization does not recommend KI prophylaxis for adults over 40 years, unless the radiation dose from inhaled radioiodine is expected to threaten thyroid function, because the KI side effects increase with age and may exceed the KI protective effects; "...unless doses to the thyroid from inhalation rise to levels threatening thyroid function, that is of the order of about 5 Gy. Such radiation doses will not occur far away from an accident site."
The U.S. Department of Health and Human Services restated these recommendations two years later as "The downward KI (potassium iodide) dose adjustment by age group, based on body size considerations, adheres to the principle of minimum effective dose. The recommended standard (daily) dose of KI for all school-age children is the same (65 mg). However, adolescents approaching adult size (i.e., >70 kg [154 lbs]) should receive the full adult dose (130 mg) for maximal block of thyroid radioiodine uptake. Neonates ideally should receive the lowest dose (16 mg) of KI."
Side effects
There is reason for caution with prescribing the ingestion of high doses of potassium iodide and iodate, because their unnecessary use can cause conditions such as the Jod-Basedow phenomenon, can trigger or worsen hyperthyroidism and hypothyroidism, and can cause temporary or even permanent thyroid conditions. It can also cause sialadenitis (an inflammation of the salivary gland), gastrointestinal disturbances, and rashes. Potassium iodide is also not recommended for people with dermatitis herpetiformis and hypocomplementemic vasculitis – conditions that are linked to a risk of iodine sensitivity.
There have been some reports of potassium iodide treatment causing swelling of the parotid gland (one of the three glands that secrete saliva), due to its stimulatory effects on saliva production.
A saturated solution of KI (SSKI) is typically given orally in adult doses several times a day (5 drops of SSKI, assumed to be about ⅓ mL) for thyroid blockade (to prevent the thyroid from releasing thyroid hormone), and occasionally this dose is also used when iodide is used as an expectorant (the total dose is about one gram of KI per day for an adult). The anti-radioiodine doses used for uptake blockade are lower, ranging downward from 100 mg a day for an adult to less than this for children (see table). All of these doses should be compared with the far lower dose of iodine needed in normal nutrition, which is only 150 μg per day (150 micrograms, not milligrams).
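A quick consistency check of the one-gram-per-day figure (an illustrative sketch, not from the source; it assumes the 15-drops-per-mL convention for SSKI described under Formulations below, and three doses per day):

```python
# Does 5 drops of SSKI, several times a day, come to about 1 g KI?

drops_per_dose = 5
ml_per_drop = 1 / 15        # viscous SSKI: ~15 drops per mL
ki_mg_per_ml = 1000         # SSKI carries ~1 g KI per mL
doses_per_day = 3           # "several times a day"

daily_ki_mg = drops_per_dose * ml_per_drop * ki_mg_per_ml * doses_per_day
print(f"daily KI: {daily_ki_mg:.0f} mg")   # ~1000 mg, i.e. about one gram
```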
At maximal doses, and sometimes at much lower doses, side effects of iodide used for medical reasons, in doses of 1000 times the normal nutritional need, may include: acne, loss of appetite, or upset stomach (especially during the first several days, as the body adjusts to the medication). More severe side effects that require notification of a physician are: fever, weakness, unusual tiredness, swelling in the neck or throat, mouth sores, skin rash, nausea, vomiting, stomach pains, irregular heartbeat, numbness or tingling of the hands or feet, or a metallic taste in the mouth.
In the event of a radioiodine release the ingestion of prophylactic potassium iodide, if available, or even iodate, would rightly take precedence over perchlorate administration, and would be the first line of defense in protecting the population from a radioiodine release. However, in the event of a radioiodine release too massive and widespread to be controlled by the limited stock of iodide and iodate prophylaxis drugs, the addition of perchlorate ions to the water supply, or the distribution of perchlorate tablets, would serve as a cheap, efficacious second line of defense against carcinogenic radioiodine bioaccumulation.
The ingestion of goitrogenic drugs is, much like that of potassium iodide, not without its dangers, such as hypothyroidism. In all these cases, however, despite the risks, the prophylactic benefits of intervention with iodide, iodate, or perchlorate outweigh the serious cancer risk from radioiodine bioaccumulation in regions where radioiodine has sufficiently contaminated the environment.
Industrial uses
KI is used with silver nitrate to make silver iodide (AgI), an important chemical in film photography. KI is a component in some disinfectants and hair treatment chemicals. KI is also used as a fluorescence quenching agent in biomedical research, an application that takes advantage of collisional quenching of fluorescent substances by the iodide ion. However, for several fluorophores, the addition of KI in μM–mM concentrations results in an increase in fluorescence intensity, with iodide acting as a fluorescence enhancer.
Potassium iodide is a component in the electrolyte of dye sensitised solar cells (DSSC) along with iodine.
Potassium iodide finds its most important applications in organic synthesis mainly in the preparation of aryl iodides in the Sandmeyer reaction, starting from aryl amines. Aryl iodides are in turn used to attach aryl groups to other organics by nucleophilic substitution, with iodide ion as the leaving group.
Chemistry
Potassium iodide is an ionic compound which is made of the following ions: K+ and I−. It crystallises in the sodium chloride structure. It is produced industrially by treating KOH with iodine.
It is a white salt, which is the most commercially significant iodide compound, with approximately 37,000 tons produced in 1985. It absorbs water less readily than sodium iodide, making it easier to work with.
Aged and impure samples are yellow because of the slow oxidation of the salt to potassium carbonate and elemental iodine.
Inorganic chemistry
Since the iodide ion is a mild reducing agent, it is easily oxidised to iodine (I2) by powerful oxidising agents such as chlorine:
2 KI + Cl2 → 2 KCl + I2
This reaction is employed in the isolation of iodine from natural sources. Air will oxidize iodide, as evidenced by the observation of a purple extract when aged samples of KI are rinsed with dichloromethane. As formed under acidic conditions, hydriodic acid (HI) is a stronger reducing agent.
Like other iodide salts, KI forms triiodide (I3−) when combined with elemental iodine:
KI + I2 → KI3
Unlike I2, triiodide salts can be highly water-soluble. Through this reaction, iodine is used in redox titrations. Aqueous KI3 (Lugol's iodine) solution is used as a disinfectant and as an etchant for gold surfaces.
Potassium iodide and silver nitrate are used to make silver(I) iodide, which is used for high speed photographic film and for cloud seeding:
KI + AgNO3 → AgI + KNO3
Organic chemistry
KI serves as a source of iodide in organic synthesis. A useful application is in the preparation of aryl iodides from arenediazonium salts.
KI, acting as a source of iodide, may also act as a nucleophilic catalyst for the alkylation of alkyl chlorides, bromides, or mesylates.
History
Potassium iodide has been used medically since at least 1820. Some of the earliest uses included cures for syphilis, lead and mercury poisoning.
Chernobyl
Potassium iodide's (KI) value as a radiation protective (thyroid blocking) agent was demonstrated following the Chernobyl nuclear reactor disaster in April 1986. A saturated solution of potassium iodide (SSKI) was administered to 10.5 million children and 7 million adults in Poland as a preventative measure against accumulation of radioactive iodine-131 in the thyroid gland.
Reports differ concerning whether people in the areas immediately surrounding Chernobyl itself were given the supplement. However the US Nuclear Regulatory Commission (NRC) reported, "thousands of measurements of I-131 (radioactive iodine) activity...suggest that the observed levels were lower than would have been expected had this prophylactic measure not been taken. The use of KI...was credited with permissible iodine content in 97% of the evacuees tested."
With the passage of time, people living in irradiated areas where KI was not available have developed thyroid cancer at epidemic levels, which is why the US Food and Drug Administration (FDA) reported "The data clearly demonstrate the risks of thyroid radiation... KI can be used [to] provide safe and effective protection against thyroid cancer caused by irradiation."
Chernobyl also demonstrated that the need to protect the thyroid from radiation was greater than expected. Within ten years of the accident, it became clear that thyroid damage caused by released radioactive iodine was virtually the only adverse health effect that could be measured. As reported by the NRC, studies after the accident showed that "As of 1996, except for thyroid cancer, there has been no confirmed increase in the rates of other cancers, including leukemia, among the... public, that have been attributed to releases from the accident."
But equally important to the question of KI is the fact that radioactivity releases are not "local" events. Researchers at the World Health Organization accurately located and counted the residents with cancer from Chernobyl and were startled to find that "the increase in incidence [of thyroid cancer] has been documented up to 500 km from the accident site... significant doses from radioactive iodine can occur hundreds of kilometers from the site, beyond emergency planning zones." Consequently, far more people than anticipated were affected by the radiation, which caused the United Nations to report in 2002 that "The number of people with thyroid cancer... has exceeded expectations. Over 11,000 cases have already been reported."
Hiroshima and Nagasaki
The Chernobyl findings were consistent with studies of the effects of previous radioactivity releases. In 1945, several hundred thousand people working and residing in the Japanese cities of Hiroshima and Nagasaki were exposed to high levels of radiation after atomic bombs were detonated over the two cities by the United States. Survivors of the A-bombings, also known as hibakusha, have markedly high rates of thyroid disease; a 2006 study of 4091 hibakusha found nearly half the participants (1833; 44.8%) had an identifiable thyroid disease.
An editorial in The Journal of the American Medical Association regarding thyroid diseases in both hibakusha and those affected by the Chernobyl disaster reports that "[a] straight line adequately describes the relationship between radiation dose and thyroid cancer incidence" and states "it is remarkable that a biological effect from a single brief environmental exposure nearly 60 years in the past is still present and can be detected."
Nuclear weapons testing
The development of thyroid cancer among residents in the North Pacific from radioactive fallout following the United States' nuclear weapons testing in the 1950s (on islands nearly 200 miles downwind of the tests) was instrumental in the 1978 decision by the FDA to issue a request for the availability of KI for thyroid protection in the event of a release from a commercial nuclear power plant or weapons-related nuclear incident. Noting that KI's effectiveness was "virtually complete" and finding that iodine in the form of KI was substantially superior to other forms including iodate (KIO3) in terms of safety, effectiveness, lack of side effects, and speed of onset, the FDA invited manufacturers to submit applications to produce and market KI.
Fukushima
It was reported on 16 March 2011, that potassium iodide tablets were given preventively to U.S. Naval air crew members flying within 70 nautical miles of the Fukushima Daiichi Nuclear Power Plant damaged in the earthquake (8.9/9.0 magnitude) and ensuing tsunami on 11 March 2011. The measures were seen as precautions, and the Pentagon said no U.S. forces had shown signs of radiation poisoning. By 20 March, the US Navy instructed personnel coming within 100 miles of the reactor to take the pills.
The Netherlands
In the Netherlands, the central storage of iodine-pills is located in Zoetermeer, near The Hague. In 2017, the Dutch government distributed pills to hundreds of thousands of residents who lived within a certain distance of nuclear power plants and met some other criteria.
Belgium
By 2020, potassium iodide tablets had been made available free of charge to all residents at pharmacies throughout the country.
Formulations
Three companies (Anbex, Inc., Fleming Co, and Recipharm of Sweden) have met the strict FDA requirements for manufacturing and testing of KI, and they offer products (IOSAT, ThyroShield, and ThyroSafe, respectively) which are available for purchase. In 2012, Fleming Co. sold all its product rights and manufacturing facility to other companies and no longer exists. ThyroShield is currently not in production.
Tablets of potassium iodide are supplied for emergency purposes related to blockade of radioiodine uptake, a common form of radiation poisoning due to environmental contamination by the short-lived fission product iodine-131. Potassium iodide may also be administered pharmaceutically for thyroid storm.
For reasons noted above, therapeutic drops of SSKI, or 130 mg tablets of KI as used for nuclear fission accidents, are not used as nutritional supplements, since an SSKI drop or nuclear-emergency tablet provides 300 to 700 times more iodine than the daily adult nutritional requirement. Dedicated nutritional iodide tablets containing 0.15 mg (150 micrograms (μg)) of iodide, from KI or from various other sources (such as kelp extract) are marketed as supplements, but they are not to be confused with the much higher pharmaceutical dose preparations.
Potassium iodide can be conveniently prepared as a saturated solution, abbreviated SSKI. This form of delivery does not require a means of weighing out the potassium iodide, so it can be used in an emergency. KI crystals are simply added to water until no more KI will dissolve, with the excess settling at the bottom of the container. With pure water, the concentration of KI in the solution depends only on the temperature. Potassium iodide is highly soluble in water, so SSKI is a concentrated source of KI. At 20 degrees Celsius the solubility of KI is 140–148 grams per 100 grams of water. Because the volumes of KI and water are approximately additive, the resulting SSKI solution contains about 1.00 gram (1000 mg) KI per milliliter (mL) of solution. This is 100% weight/volume (note units of mass concentration) of KI (one gram KI per mL of solution), which is possible because SSKI is significantly more dense than pure water, about 1.67 g/mL. Because KI is about 76.4% iodide by weight, SSKI contains about 764 mg iodide per mL. This concentration of iodide allows the calculation of the iodide dose per drop, if one knows the number of drops per milliliter. For SSKI, a solution more viscous than water, there are assumed to be 15 drops per mL; the iodide dose is therefore approximately 51 mg per drop. It is conventionally rounded to 50 mg per drop.
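The per-drop arithmetic above can be checked in a few lines (an illustrative sketch, not from the source; molar masses are standard values):

```python
# Deriving the ~50 mg iodide per drop figure for SSKI from the text.

M_K, M_I = 39.10, 126.90                 # g/mol
iodide_fraction = M_I / (M_K + M_I)      # ~0.764, the 76.4% quoted above

ki_mg_per_ml = 1000                      # ~1 g KI per mL of SSKI
iodide_mg_per_ml = ki_mg_per_ml * iodide_fraction   # ~764 mg/mL

drops_per_ml = 15                        # convention for viscous SSKI
iodide_mg_per_drop = iodide_mg_per_ml / drops_per_ml

print(f"iodide per drop: {iodide_mg_per_drop:.1f} mg")       # ~50.9, rounded to 50
print(f"one drop vs. 0.15 mg daily need: {50 / 0.15:.0f}x")  # ~333x
```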
The term SSKI is also used, especially by pharmacists, to refer to a U.S.P. pre-prepared solution formula, made by adding KI to water to prepare a solution containing 1000 mg KI per mL solution (100% wt/volume KI solution), to closely approximate the concentration of SSKI made by saturation. This is essentially interchangeable with SSKI made by saturation, and also contains about 50 mg iodide per drop.
Saturated solutions of potassium iodide can be an emergency treatment for hyperthyroidism (so-called thyroid storm), as high amounts of iodide temporarily suppress secretion of thyroxine from the thyroid gland. The dose typically begins with a loading dose, then about ⅓ mL SSKI (5 drops, or 250 mg iodine as iodide) three times per day.
Iodide solutions made from a few drops of SSKI added to drinks have also been used as expectorants to increase the water content of respiratory secretions and encourage effective coughing.
SSKI has been proposed as a topical treatment for sporotrichosis, but no trials have been conducted to determine the efficacy or side effects of such treatment.
Potassium iodide has been used for symptomatic treatment of erythema nodosum patients for persistent lesions whose cause remains unknown. It has been used in cases of erythema nodosum associated with Crohn's disease.
Due to its high potassium content, SSKI is extremely bitter, and if possible it is administered in a sugar cube or small ball of bread. It may also be mixed into much larger volumes of juices.
Neither SSKI nor KI tablets are suitable as nutritional supplements, since the nutritional requirement for iodine is only 150 micrograms (0.15 mg) of iodide per day. Thus, a drop of SSKI provides 50/0.15 = 333 times the daily iodine requirement, and a standard KI tablet provides twice this much.
| Physical sciences | Halide salts | Chemistry |
5579717 | https://en.wikipedia.org/wiki/Bird%20anatomy | Bird anatomy | Bird anatomy, or the physiological structure of birds' bodies, shows many unique adaptations, mostly aiding flight. Birds have a light skeletal system and light but powerful musculature which, along with circulatory and respiratory systems capable of very high metabolic rates and oxygen supply, permit the bird to fly. The development of a beak has led to evolution of a specially adapted digestive system.
Skeletal system
Birds have many bones that are hollow (pneumatized) with criss-crossing struts or trusses for structural strength. The number of hollow bones varies among species, though large gliding and soaring birds tend to have the most. Respiratory air sacs often form air pockets within the semi-hollow bones of the bird's skeleton. The bones of diving birds are often less hollow than those of non-diving species. Penguins, loons, and puffins entirely lack pneumatized bones. Flightless birds, such as ostriches and emus, have pneumatized femurs and, in the case of the emu, pneumatized cervical vertebrae.
Axial skeleton
The bird skeleton is highly adapted for flight. It is extremely lightweight but strong enough to withstand the stresses of taking off, flying, and landing. One key adaptation is the fusing of bones into single ossifications, such as the pygostyle. Because of this, birds usually have a smaller number of bones than other terrestrial vertebrates. Birds also lack teeth or even a true jaw and instead have a beak, which is far more lightweight. The beaks of many baby birds have a projection called an egg tooth, which facilitates their exit from the amniotic egg. It falls off once the egg has been penetrated.
Vertebral column
The vertebral column is divided into five sections of vertebrae:
Cervical vertebrae
The cervical vertebrae provide structural support to the neck and number between 8 and as many as 25 vertebrae in certain swan species (Cygninae) and other long-necked birds. All cervical vertebrae except the first have transverse processes attached. This first vertebra (C1) is called the atlas, which articulates with the occipital condyles of the skull and lacks the foramen typical of most vertebrae. The neck of a bird is composed of many cervical vertebrae, enabling birds to have increased flexibility. A flexible neck allows many birds with immobile eyes to move their head more productively and center their sight on objects that are close or far in distance. Most birds have about three times as many neck vertebrae as humans, which allows for increased stability during fast movements such as flying, landing, and taking off. The neck plays a role in head-bobbing, which is present in at least eight out of 44 orders of birds, including Columbiformes, Galliformes, and Gruiformes. Head-bobbing is an optokinetic response which stabilizes a bird's surroundings as it alternates between a thrust phase and a hold phase. Head-bobbing is synchronous with the feet as the head moves in accordance with the rest of the body. Data from various studies suggest that the main reason for head-bobbing in some birds is for the stabilization of their surroundings, although it is uncertain why some but not all bird orders show head-bobbing.
Thoracic vertebrae
The thoracic vertebrae number between five and ten, and the first thoracic vertebra is distinguishable due to the fusion of its attached rib to the sternum while the ribs of cervical vertebrae are free. Anterior thoracic vertebrae are fused in many birds and articulate with the notarium of the pectoral girdle.
Synsacrum
The synsacrum consists of one thoracic, six lumbar, two sacral, and five sacro-caudal vertebrae fused into one ossified structure that then fuses with the ilium. When not in flight, this structure provides the main support for the rest of the body. Similar to the sacrum of mammals, the synsacrum lacks the distinct disc shape of cervical and thoracic vertebrae.
Caudal vertebrae
The free vertebrae immediately following the fused sacro-caudal vertebrae of the synsacrum are known as the caudal vertebrae. Birds have between five and eight free caudal vertebrae. The caudal vertebrae provide structure to the tails of vertebrates and are homologous to the coccyx found in mammals lacking tails.
Pygostyle
In birds, the last five to six caudal vertebrae are fused to form the pygostyle. Some sources note that up to ten caudal vertebrae may make up this fused structure. This structure provides an attachment point for tail feathers that aid in control of flight.
Scapular girdle
Birds are the only living vertebrates to have fused collarbones and a keeled breastbone. The keeled sternum serves as an attachment site for the muscles used in flying or swimming. Flightless birds, such as ostriches, lack a keeled sternum and have denser and heavier bones compared to birds that fly. Swimming birds have a wide sternum, walking birds have a long sternum, and flying birds have a sternum that is nearly equal in width and height. The chest consists of the furcula (wishbone) and coracoid (collar bone) which, together with the scapula, form the pectoral girdle; the side of the chest is formed by the ribs, which meet at the sternum (mid-line of the chest).
Ribs
Birds have uncinate processes on the ribs. These are hooked extensions of bone which help to strengthen the rib cage by overlapping with the rib behind them. This feature is also found in the tuatara (Sphenodon).
Skull
The skull consists of five major bones: the frontal (top of head), parietal (back of head), premaxillary and nasal (top beak), and the mandible (bottom beak). The skull of a normal bird usually weighs about 1% of the bird's total body weight. The eye occupies a considerable amount of the skull and is surrounded by a sclerotic eye-ring, a ring of tiny bones. This characteristic is also seen in their reptile cousins.
Broadly speaking, avian skulls consist of many small, non-overlapping bones. Pedomorphosis, maintenance of the ancestral state in adults, is thought to have facilitated the evolution of the avian skull. In essence, adult bird skulls will resemble the juvenile form of their theropod dinosaur ancestors. As the avian lineage has progressed and as pedomorphosis has occurred, they have lost the postorbital bone behind the eye, the ectopterygoid at the back of the palate, and teeth. The palate structures have also become greatly altered with changes, mostly reductions, seen in the pterygoid, palatine, and jugal bones. A reduction in the adductor chambers has also occurred. These are all conditions seen in the juvenile form of their ancestors. The premaxillary bone has also hypertrophied to form the beak while the maxilla has become diminished, as suggested by both developmental and paleontological studies. This expansion into the beak has occurred in tandem with the loss of a functional hand and the development of a point at the front of the beak that resembles a "finger". The premaxilla is also known to play a large role in feeding behaviours in fish.
The structure of the avian skull has important implications for their feeding behaviours. Birds show independent movement of the skull bones known as cranial kinesis. Cranial kinesis in birds occurs in several forms, but all of the different varieties are made possible by the anatomy of the skull. Animals with large, overlapping bones (including the ancestors of modern birds) have akinetic (non-kinetic) skulls. For this reason it has been argued that the pedomorphic bird beak can be seen as an evolutionary innovation.
Birds have a diapsid skull, as in reptiles, with a pre-lachrymal fossa (present in some reptiles). The skull has a single occipital condyle.
Appendicular skeleton
The shoulder consists of the scapula (shoulder blade), coracoid, and humerus (upper arm). The humerus joins the radius and ulna (forearm) to form the elbow. The carpus and metacarpus form the "wrist" and "hand" of the bird, and the digits are fused together. The bones in the wing are extremely light so that the bird can fly more easily.
The hips consist of the pelvis, which includes three major bones: the ilium (top of the hip), ischium (sides of hip), and pubis (front of the hip). These are fused into one (the innominate bone). Innominate bones are evolutionarily significant in that they allow birds to lay eggs. They meet at the acetabulum (hip socket) and articulate with the femur, which is the first bone of the hind limb.
The upper leg consists of the femur. At the knee joint, the femur connects to the tibiotarsus (shin) and fibula (side of lower leg). The tarsometatarsus forms the upper part of the foot, while the digits make up the toes. The leg bones of birds are the heaviest, contributing to a low center of gravity, which aids in flight. A bird's skeleton accounts for only about 5% of its total body weight.
They have a greatly elongate tetraradiate pelvis, similar to some reptiles. The hind limb has an intra-tarsal joint found also in some reptiles. There is extensive fusion of the trunk vertebrae as well as fusion with the pectoral girdle.
Wings
Feet
Birds' feet are classified as anisodactyl, zygodactyl, heterodactyl, syndactyl or pamprodactyl. Anisodactyl is the most common arrangement of digits in birds, with three toes forward and one back. This is common in songbirds and other perching birds, as well as hunting birds like eagles, hawks, and falcons.
Syndactyly, as it occurs in birds, is like anisodactyly, except that the second and third toes (the inner and middle forward-pointing toes), or three toes, are fused together, as in the belted kingfisher Ceryle alcyon. This is characteristic of Coraciiformes (kingfishers, bee-eaters, rollers, etc.).
Zygodactyl (from Greek ζυγόν, a yoke) feet have two toes facing forward (digits two and three) and two back (digits one and four). This arrangement is most common in arboreal species, particularly those that climb tree trunks or clamber through foliage. Zygodactyly occurs in the parrots, woodpeckers (including flickers), cuckoos (including roadrunners), and some owls. Zygodactyl tracks have been found dating to 120–110 Ma (early Cretaceous), 50 million years before the first identified zygodactyl fossils.
Heterodactyly is like zygodactyly, except that digits three and four point forward and digits one and two point back. This is found only in trogons, while pamprodactyl is an arrangement in which all four toes may point forward, or birds may rotate the outer two toes backward. It is a characteristic of swifts (Apodidae).
Evolution
Hind limb change
A significant similarity in the structure of the hind limbs of birds and other dinosaurs is associated with their ability to walk on two legs, or bipedalism. In the 20th century, the prevailing opinion was that the transition to bipedalism occurred due to the transformation of the forelimbs into wings. Modern scientists believe that, on the contrary, it was a necessary condition for the occurrence of flight.
The transition to the use of only the hind limbs for movement was accompanied by an increase in the rigidity of the lumbar and sacral regions. The pubic bones of birds and some other bipedal dinosaurs are turned backward. Scientists associate this with a shift in the center of gravity of the body backward. The reason for this shift is the transition to bipedality or the development of powerful forelimbs, as in Archaeopteryx. The large and heavy tail of two-legged dinosaurs may have been an additional support. Partial tail reduction and the subsequent formation of the pygostyle occurred due to the backward deviation of the first toe of the hind limb; in dinosaurs with a long rigid tail, the development of the foot proceeded differently. This process apparently took place in parallel in birds and some other dinosaurs. In general, the anisodactyl foot, which also has a better grasping ability and allows confident movement both on the ground and along branches, is ancestral for birds. Pterosaurs stand out against this background: they never became fully bipedal, and instead developed a means of flight fundamentally different from that of birds.
Forelimb changes
Changes in the hindlimbs did not affect the location of the forelimbs, which in birds remained laterally spaced, while in non-avian dinosaurs they switched to a parasagittal orientation. At the same time, the forelimbs, freed from the support function, had ample opportunities for evolutionary changes. Proponents of the running hypothesis believe that flight formed through fast running, bouncing, and then gliding. The forelimbs could be used for grasping after a jump or as "insect trapping nets"; animals could wave them, helping themselves during the jump. According to the arboreal hypothesis, the ancestors of birds climbed trees with the help of their forelimbs, glided from there, and then proceeded to flight.
Muscular system
Most birds have approximately 175 different muscles, mainly controlling the wings, skin, and legs. Overall, the muscle mass of birds is concentrated ventrally. The largest muscles in the bird are the pectorals, or the pectoralis major, which control the wings and make up about 15–25% of a flighted bird's body weight. They provide the powerful wing stroke essential for flight. The muscle deep to (underneath) the pectorals is the supracoracoideus, or the pectoralis minor. It raises the wing between wingbeats. Both muscle groups attach to the keel of the sternum. This is remarkable, because in other vertebrates the muscles that raise the upper limbs are generally attached to areas on the back of the spine. The supracoracoideus and the pectorals together make up about 25–40% of the bird's full body weight. Caudal to the pectorals and supracoracoideus are the internal and external obliques which compress the abdomen. Additionally, there are other abdominal muscles present that expand and contract the chest, and hold the ribcage. The muscles of the wing, as seen in the labelled images, function mainly in extending or flexing the elbow, moving the wing as a whole or in extending or flexing particular digits. These muscles work to adjust the wings for flight and all other actions. Muscle composition does vary between species and even within families.
Birds have unique, elongated necks with complex musculature, since the neck must allow the head to perform functions for which other animals may use their pectoral limbs.
The skin muscles help a bird in its flight by adjusting the feathers, which are attached to the skin muscle and help the bird in its flight maneuvers as well as aiding in mating rituals.
There are only a few muscles in the trunk and the tail, but they are very strong and are essential for the bird. These include the lateralis caudae and the levator caudae which control movement of the tail and the spreading of rectrices, giving the tail a larger surface area which helps keep the bird in the air as well as aiding in turning.
Theories of muscle composition and adaptation differ according to whether the evolution of flight arose first from flapping or from gliding.
Integumentary system
Scales
The scales of birds are composed of keratin, like beaks, claws, and spurs. They are found mainly on the toes and tarsi (lower leg of birds), usually up to the tibio-tarsal joint, but may be found further up the legs in some birds. In many of the eagles and owls, the legs are feathered down to (but not including) their toes. Most bird scales do not overlap significantly, except in the cases of kingfishers and woodpeckers. The scales and scutes of birds were originally thought to be homologous to those of reptiles; however, more recent research suggests that scales in birds re-evolved after the evolution of feathers.
Bird embryos begin development with smooth skin. On the feet, the corneum, or outermost layer, of this skin may keratinize, thicken, and form scales. These scales can be organized into:
Cancella – minute scales which are really just a thickening and hardening of the skin, crisscrossed with shallow grooves.
Scutella – scales that are not quite as large as scutes, such as those found on the caudal, or hind part, of the chicken metatarsus.
Scutes – the largest scales, usually on the anterior surface of the metatarsus and dorsal surface of the toes.
The rows of scutes on the anterior of the metatarsus can be called an "acrometatarsium" or "acrotarsium".
Reticula are located on the lateral and medial surfaces (sides) of the foot and were originally thought to be separate scales. However, histological and evolutionary developmental work in this area revealed that these structures lack beta-keratin (a hallmark of reptilian scales) and are entirely composed of alpha-keratin. This, along with their unique structure, has led to the suggestion that these are actually feather buds that were arrested early in development.
Collectively, the scaly covering present on the foot of the birds is called podotheca.
Herbst corpuscles and lore
The bills of many waders have Herbst corpuscles which help them find prey hidden under wet sand, by detecting minute pressure differences in the water. All extant birds can move the parts of the upper jaw relative to the brain case. However, this is more prominent in some birds and can be readily detected in parrots.
The region between the eye and bill on the side of a bird's head is called the lore. This region is sometimes featherless, and the skin may be tinted, as in many species of the cormorant family.
Beak
The beak, bill, or rostrum is an external anatomical structure of birds which is used for eating and for preening, manipulating objects, killing prey, fighting, probing for food, courtship, and feeding young. Although beaks vary significantly in size, shape, and color, they share a similar underlying structure. Two bony projections, the upper and lower mandibles, are covered with a thin keratinized layer of epidermis known as the rhamphotheca. In most species, two holes known as nares lead to the respiratory system.
Respiratory system
Due to the high metabolic rate required for flight, birds have a high oxygen demand. Their highly effective respiratory system helps them meet that demand.
Although birds have lungs, theirs are fairly rigid structures that do not expand and contract as they do in mammals, reptiles and many amphibians. Instead, the structures that act as the bellows that ventilate the lungs are the air sacs, which are distributed throughout much of the birds' bodies. The air sacs move air unidirectionally through the parabronchi of the rigid lungs.
The primary mechanism of unidirectional flows in bird lungs is flow irreversibility at high Reynolds number manifested in asymmetric junctions and their loop-forming connectivity.
Although avian lungs are smaller than those of mammals of comparable size, the air sacs account for 15% of the total body volume, whereas in mammals, the alveoli, which act as the bellows, constitute only 7% of the total body volume. Overall, avian lungs have a respiratory surface area that is approximately 15% greater, a pulmonary capillary blood volume that is 2.5–3 times larger, and a blood–gas barrier that is 56–67% thinner than those in the lungs of mammals of a similar body mass. The walls of the air sacs do not have a good blood supply and so do not play a direct role in gas exchange.
Birds lack a diaphragm, and therefore use their intercostal and abdominal muscles to expand and contract their entire thoraco-abdominal cavities, thus rhythmically changing the volumes of all their air sacs in unison (illustration on the right). The active phase of respiration in birds is exhalation, requiring contraction of their muscles of respiration. Relaxation of these muscles causes inhalation.
Three distinct sets of organs perform respiration — the anterior air sacs (interclavicular, cervicals, and anterior thoracics), the lungs, and the posterior air sacs (posterior thoracics and abdominals). Typically there are nine air sacs within the system; however, that number can range between seven and twelve, depending on the species of bird. Passerines possess seven air sacs, as the clavicular air sacs may interconnect or be fused with the anterior thoracic sacs.
During inhalation, environmental air initially enters the bird through the nostrils, from where it is heated, humidified, and filtered in the nasal passages and upper parts of the trachea. From there, the air enters the lower trachea and continues to just beyond the syrinx, at which point the trachea branches into two primary bronchi, going to the two lungs. The primary bronchi enter the lungs to become the intrapulmonary bronchi, which give off a set of parallel branches called ventrobronchi and, a little further on, an equivalent set of dorsobronchi. The ends of the intrapulmonary bronchi discharge air into the posterior air sacs at the caudal end of the bird. Each pair of dorso-ventrobronchi is connected by a large number of parallel microscopic air capillaries (or parabronchi) where gas exchange occurs. As the bird inhales, tracheal air flows through the intrapulmonary bronchi into the posterior air sacs, as well as into the dorsobronchi (but not into the ventrobronchi, whose openings into the intrapulmonary bronchi were previously believed to be tightly closed during inhalation; more recent studies have shown that the aerodynamics of the bronchial architecture directs the inhaled air away from the openings of the ventrobronchi, into the continuation of the intrapulmonary bronchus towards the dorsobronchi and posterior air sacs). From the dorsobronchi the air flows through the parabronchi (and therefore the gas exchanger) to the ventrobronchi, from where the air can only escape into the expanding anterior air sacs. So, during inhalation, both the posterior and anterior air sacs expand, the posterior air sacs filling with fresh inhaled air, while the anterior air sacs fill with "spent" (oxygen-poor) air that has just passed through the lungs.
During exhalation, the intrapulmonary bronchi were believed to be tightly constricted between the region where the ventrobronchi branch off and the region where the dorsobronchi branch off. But it is now believed that more intricate aerodynamic features have the same effect. The contracting posterior air sacs can therefore only empty into the dorsobronchi. From there, the fresh air from the posterior air sacs flows through the parabronchi (in the same direction as occurred during inhalation) into ventrobronchi. The air passages connecting the ventrobronchi and anterior air sacs to the intrapulmonary bronchi open up during exhalation, thus allowing oxygen-poor air from these two organs to escape via the trachea to the exterior. Oxygenated air therefore flows constantly (during the entire breathing cycle) in a single direction through the parabronchi.
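As a toy illustration of the two-breath transit described above (an addition, not from the source; compartments are simplified and mixing is ignored), a single parcel of inhaled air can be traced through the system:

```python
# Toy trace of one parcel of fresh air through the avian respiratory
# system, following the description above: it takes two full breaths
# for a parcel to pass from the trachea to the exterior.

PATH = [
    ("inhalation 1", "trachea -> intrapulmonary bronchi -> posterior air sacs"),
    ("exhalation 1", "posterior air sacs -> dorsobronchi -> parabronchi (gas exchange)"),
    ("inhalation 2", "parabronchi -> ventrobronchi -> anterior air sacs"),
    ("exhalation 2", "anterior air sacs -> trachea -> exterior"),
]

def trace_parcel() -> None:
    """Print where the parcel is at each half-cycle of breathing."""
    for phase, route in PATH:
        print(f"{phase}: {route}")

if __name__ == "__main__":
    trace_parcel()
```

Note that in every phase air moves through the parabronchi in the same direction, which is the unidirectional flow the text describes.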
The blood flow through the bird lung is at right angles to the flow of air through the parabronchi, forming a cross-current flow exchange system (see illustration on the left). The partial pressure of oxygen in the parabronchi declines along their lengths as O2 diffuses into the blood. The blood capillaries leaving the exchanger near the entrance of airflow take up more O2 than do the capillaries leaving near the exit end of the parabronchi. When the contents of all capillaries mix, the final partial pressure of oxygen of the mixed pulmonary venous blood is higher than that of the exhaled air, but is nevertheless less than half that of the inhaled air, thus achieving roughly the same systemic arterial blood partial pressure of oxygen as mammals do with their bellows-type lungs.
The trachea is an area of dead space: the oxygen-poor air it contains at the end of exhalation is the first air to re-enter the posterior air sacs and lungs. In comparison to the mammalian respiratory tract, the dead space volume in a bird is, on average, 4.5 times greater than it is in mammals of the same size. Birds with long necks will inevitably have long tracheae, and must therefore take deeper breaths than mammals do to make allowances for their greater dead space volumes. In some birds (e.g. the whooper swan, Cygnus cygnus, the white spoonbill, Platalea leucorodia, the whooping crane, Grus americana, and the helmeted curassow, Pauxi pauxi) the trachea, which in some cranes can be 1.5 m long, is coiled back and forth within the body, drastically increasing the dead space ventilation. The purpose of this extraordinary feature is unknown.
Air passes unidirectionally through the lungs during both exhalation and inspiration. Except for the oxygen-poor dead-space air left in the trachea after exhalation and breathed in at the beginning of inhalation, there is little to no mixing of new oxygen-rich air with spent oxygen-poor air (as occurs in mammalian lungs); the air changes only from oxygen-rich to oxygen-poor as it moves (unidirectionally) through the parabronchi.
Avian lungs do not have alveoli as mammalian lungs do. Instead they contain millions of narrow passages known as parabronchi, connecting the dorsobronchi to the ventrobronchi at either ends of the lungs. Air flows anteriorly (caudal to cranial) through the parallel parabronchi. These parabronchi have honeycombed walls. The cells of the honeycomb are dead-end air vesicles, called atria, which project radially from the parabronchi. The atria are the site of gas exchange by simple diffusion. The blood flow around the parabronchi (and their atria), forms a cross-current gas exchanger (see diagram on the left).
All species of birds, with the exception of the penguin, have a small region of their lungs devoted to "neopulmonic parabronchi". This unorganized network of microscopic tubes branches off from the posterior air sacs and opens haphazardly into both the dorso- and ventrobronchi, as well as directly into the intrapulmonary bronchi. Unlike the parabronchi, in which the air moves unidirectionally, the air flow in the neopulmonic parabronchi is bidirectional. The neopulmonic parabronchi never make up more than 25% of the total gas exchange surface of birds.
To produce sound, birds use an organ located above the lungs called the syrinx, which is composed of tracheal rings, syringeal muscles, tympaniform membranes, and internal bony structures that contribute to the production of sound. Air passing through this organ produces the vocalizations of birds; sound is generated by the movement of the tympaniform membranes. Pitch can be changed by the opening and closing of the tympaniform membranes, allowing for higher and lower sounds.
Circulatory system
Birds have a four-chambered heart, in common with mammals, and some reptiles (mainly the Crocodilia). This adaptation allows for an efficient nutrient and oxygen transport throughout the body, providing birds with energy to fly and maintain high levels of activity. A ruby-throated hummingbird's heart beats up to 1,200 times per minute (about 20 beats per second).
Digestive system
Crop
Many birds possess a muscular pouch along the esophagus called a crop. The crop functions both to soften food and to regulate its flow through the system by storing it temporarily. The size and shape of the crop are quite variable among birds. Members of the family Columbidae, such as pigeons, produce a nutritious crop milk which is fed to their young by regurgitation.
Proventriculus
The avian stomach is composed of two organs, the proventriculus and the gizzard, that work together during digestion. The proventriculus is a rod-shaped tube, found between the esophagus and the gizzard, that secretes hydrochloric acid and pepsinogen into the digestive tract. The acid converts the inactive pepsinogen into the active proteolytic enzyme pepsin, which breaks down specific peptide bonds found in proteins to produce a set of peptides, amino acid chains that are shorter than the original dietary protein. The gastric juices (hydrochloric acid and pepsinogen) are mixed with the stomach contents through the muscular contractions of the gizzard.
Gizzard
The gizzard is composed of four muscular bands that rotate and crush food by shifting it from one area to the next within the gizzard. The gizzard of some species of herbivorous birds, like turkeys and quails, contains small pieces of grit or stone called gastroliths that are swallowed by the bird to aid in the grinding process, serving the function of teeth. The use of gizzard stones is a similarity between birds and dinosaurs, which left gastroliths as trace fossils.
Intestines
The partially digested and pulverized gizzard contents, now called a bolus, are passed into the intestine, where pancreatic and intestinal enzymes complete the digestion of the digestible food. The digestion products are then absorbed through the intestinal mucosa into the blood. The intestine ends via the large intestine in the vent or cloaca which serves as the common exit for renal and intestinal excrements as well as for the laying of eggs. However, unlike mammals, many birds do not excrete the bulky portions (roughage) of their undigested food (e.g. feathers, fur, bone fragments, and seed husks) via the cloaca, but regurgitate them as food pellets.
Drinking behaviour
There are three general ways in which birds drink: using gravity itself, sucking, and by using the tongue. Fluid is also obtained from food.
Most birds are unable to swallow by the "sucking" or "pumping" action of peristalsis in their esophagus (as humans do), and drink by repeatedly raising their heads after filling their mouths to allow the liquid to flow by gravity, a method usually described as "sipping" or "tipping up".
The notable exception is the family of pigeons and doves, the Columbidae; in fact, according to Konrad Lorenz in 1939:
one recognizes the order by the single behavioral characteristic, namely that in drinking the water is pumped up by peristalsis of the esophagus which occurs without exception within the order. The only other group, however, which shows the same behavior, the Pteroclidae, is placed near the doves just by this doubtlessly very old characteristic.
Although this general rule still stands, since that time, observations have been made of a few exceptions in both directions.
In addition, specialized nectar feeders like sunbirds (Nectariniidae) and hummingbirds (Trochilidae) drink by using protrusible grooved or trough-like tongues, and parrots (Psittacidae) lap up water.
Many seabirds have glands near the eyes that allow them to drink seawater. Excess salt is eliminated from the nostrils. Many desert birds get the water that they need entirely from their food. The elimination of nitrogenous wastes as uric acid reduces the physiological demand for water, as uric acid is not very toxic and thus does not need to be diluted in as much water.
Reproductive and urogenital systems
Male birds have two testes which become hundreds of times larger during the breeding season to produce sperm. The testes in birds are generally asymmetric with most birds having a larger left testis. Female birds in most families have only one functional ovary (the left one), connected to an oviduct — although two ovaries are present in the embryonic stage of each female bird. Some species of birds have two functional ovaries, and the kiwis always retain both. Birds do not have male accessory glands.
Most male birds have no phallus. In the males of species without a phallus, sperm is stored in the seminal glomera within the cloacal protuberance prior to copulation. During copulation, the female moves her tail to the side and the male either mounts the female from behind or in front (as in the stitchbird), or moves very close to her. The cloacae then touch, so that the sperm can enter the female's reproductive tract. This can happen very fast, sometimes in less than half a second.
The sperm is stored in the female's sperm storage tubules for a period varying from a week to more than 100 days, depending on the species. Then, eggs will be fertilized individually as they leave the ovaries, before the shell is calcified in the oviduct. After the egg is laid by the female, the embryo continues to develop in the egg outside the female body.
Many waterfowl and some other birds, such as the ostrich and turkey, possess a phallus. This appears to be the ancestral condition among birds; most birds have lost the phallus. The length is thought to be related to sperm competition in species that usually mate many times in a breeding season; sperm deposited closer to the ovaries is more likely to achieve fertilization. The longer and more complicated phalli tend to occur in waterfowl whose females have unusual anatomical features of the vagina (such as dead end sacs and clockwise coils). These vaginal structures may be used to prevent penetration by the male phallus (which coils counter-clockwise). In these species, copulation is often violent and female co-operation is not required; the female ability to prevent fertilization may allow the female to choose the father for her offspring. When not copulating, the phallus is hidden within the proctodeum compartment within the cloaca, just inside the vent.
After the eggs hatch, parents provide varying degrees of care in terms of food and protection. Precocial birds can care for themselves independently within minutes of hatching; altricial hatchlings are helpless, blind, and naked, and require extended parental care. The chicks of many ground-nesting birds such as partridges and waders are often able to run virtually immediately after hatching; such birds are referred to as nidifugous. The young of hole-nesters, though, are often totally incapable of unassisted survival. The process whereby a chick acquires feathers until it can fly is called "fledging".
Some birds, such as pigeons, geese, and red-crowned cranes, remain with their mates for life and may produce offspring on a regular basis.
Kidney
Avian kidneys function in almost the same way as the more extensively studied mammalian kidney, but with a few important adaptations: while much of the anatomy is unchanged in design, some important modifications have occurred during avian evolution.
The three-sectioned kidneys lie on either side of the vertebral column and are connected to the lower gastrointestinal tract. Depending on the bird species, the cortex makes up around 71–80% of the kidney's mass, while the medulla is much smaller, at about 5–15% of the mass. Blood vessels and other tubes make up the remaining mass.
Unique to birds is the presence of two different types of nephrons (the functional units of the kidney): reptilian-like nephrons located in the cortex, and mammalian-like nephrons located in the medulla.
Reptilian-type nephrons are more abundant but lack the distinctive loops of Henle seen in mammals. Because most avian nephrons lack the loop of Henle, the bird's ability to concentrate urine does not depend heavily on it; water reabsorption depends instead on the coprodeum and the rectum.
The urine collected by the kidney is emptied into the cloaca through the ureters and then to the colon by reverse peristalsis.
Nervous system
Brain
Birds have a large brain-to-body-mass ratio. This is reflected in birds' advanced and complex intelligence.
Vision
Birds have acute eyesight—raptors (birds of prey) have vision eight times sharper than that of humans—thanks to higher densities of photoreceptors in the retina (up to 1,000,000 per square mm in Buteos, compared to 200,000 for humans), a high number of neurons in the optic nerves, a second set of eye muscles not found in other animals, and, in some cases, an indented fovea which magnifies the central part of the visual field. Many species, including hummingbirds and albatrosses, have two foveas in each eye. Many birds can detect polarised light.
Hearing
The avian ear is adapted to pick up slight and rapid changes of pitch such as those found in bird song. The typical avian tympanic membrane is oval and slightly conical. Morphological differences in the middle ear are observed between species: the ossicles of green finches, blackbirds, song thrushes, and house sparrows are proportionately shorter than those found in pheasants, mallard ducks, and seabirds. In songbirds, a syrinx allows its possessors to create intricate melodies and tones. The inner avian ear is made up of three semicircular canals, each ending in an ampulla and joining to connect with the macula sacculus and lagena, from which the cochlea, a short straight tube, branches.
Taste
Birds evolved from an ancestor that had lost the sweet-taste receptor T1R2, which allows other animals, like alligators, to taste sweetness. After many birds adapted to diets with high sugar content, they modified their umami taste receptors (T1R1–T1R3) to also sense sweet tastes. The T2R receptors, used to detect bitter tastes, are reduced in birds.
Immune system
The immune system of birds resembles that of other jawed vertebrates. Birds have both innate and adaptive immune systems. Birds are susceptible to tumours, immune deficiency and autoimmune diseases.
Bursa of Fabricius
Function
The bursa of Fabricius, also known as the cloacal bursa, is a lymphoid organ which aids in the production of B lymphocytes during humoral immunity. The bursa is present during juvenile stages but regresses with maturity; for example, it is no longer visible after sexual maturity in various species of sparrow. For comparison, B lymphocytes in mammals develop in the bone marrow.
Anatomy
The bursa of Fabricius is a circular pouch connected to the superior dorsal side of the cloaca. The bursa is composed of many folds, known as plicae, which are lined by more than 10,000 follicles encompassed by connective tissue and surrounded by mesenchyme. Each follicle consists of a cortex that surrounds a medulla. The cortex houses highly compacted B lymphocytes, whereas the medulla houses lymphocytes loosely. The medulla is separated from the lumen by the epithelium, which aids in the transport of epithelial cells into the lumen of the bursa. There are 150,000 B lymphocytes located around each follicle.
| Biology and health sciences | Basic anatomy | Biology |
8774050 | https://en.wikipedia.org/wiki/Telecommunications%20engineering | Telecommunications engineering | Telecommunications engineering is a subfield of electronics engineering which seeks to design and devise systems of communication at a distance. The work ranges from basic circuit design to strategic mass developments. A telecommunications engineer is responsible for designing and overseeing the installation of telecommunications equipment and facilities, such as complex electronic switching systems and other plain old telephone service facilities, optical fiber cabling, IP networks, and microwave transmission systems. Telecommunications engineering also overlaps with broadcast engineering.
Telecommunication is a diverse field of engineering connected to electronic, civil and systems engineering. Ultimately, telecom engineers are responsible for providing high-speed data transmission services. They use a variety of equipment and transport media to design the telecom network infrastructure; the most common media used by wired telecommunications today are twisted pair, coaxial cables, and optical fibers. Telecommunications engineers also provide solutions revolving around wireless modes of communication and information transfer, such as wireless telephony services, radio and satellite communications, internet, Wi-Fi and broadband technologies.
History
Telecommunication systems are generally designed by telecommunications engineers, a profession that sprang from technological improvements in the telegraph industry in the late 19th century and the radio and telephone industries in the early 20th century. Today, telecommunication is widespread and devices that assist the process, such as the television, radio and telephone, are common in many parts of the world. There are also many networks that connect these devices, including computer networks, the public switched telephone network (PSTN), radio networks, and television networks. Computer communication across the Internet is one of many examples of telecommunication. Telecommunication plays a vital role in the world economy, and the telecommunication industry's revenue has been placed at just under 3% of the gross world product.
Telegraph and telephone
Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. Soon after he was joined by Alfred Vail who developed the register — a telegraph terminal that integrated a logging device for recording messages to paper tape. This was demonstrated successfully over three miles (five kilometres) on 6 January 1838 and eventually over forty miles (sixty-four kilometres) between Washington, D.C. and Baltimore on 24 May 1844. The patented invention proved lucrative and by 1851 telegraph lines in the United States spanned over 20,000 miles (32,000 kilometres).
The first successful transatlantic telegraph cable was completed on 27 July 1866, allowing transatlantic telecommunication for the first time. Earlier transatlantic cables installed in 1857 and 1858 only operated for a few days or weeks before they failed. The international use of the telegraph has sometimes been dubbed the "Victorian Internet".
The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London. Alexander Graham Bell held the master patent for the telephone that was needed for such services in both countries. The technology grew quickly from this point, with inter-city lines being built and telephone exchanges in every major city of the United States by the mid-1880s. Despite this, transatlantic voice communication remained impossible for customers until January 7, 1927, when a connection was established using radio. However no cable connection existed until TAT-1 was inaugurated on September 25, 1956, providing 36 telephone circuits.
In 1880, Bell and co-inventor Charles Sumner Tainter conducted the world's first wireless telephone call via modulated lightbeams projected by photophones. The scientific principles of their invention would not be utilized for several decades, when they were first deployed in military and fiber-optic communications.
Radio and television
Over several years starting in 1894, the Italian inventor Guglielmo Marconi built the first complete, commercially successful wireless telegraphy system based on airborne electromagnetic waves (radio transmission). In December 1901, he went on to establish wireless communication between Britain and Newfoundland, earning him the Nobel Prize in Physics in 1909 (which he shared with Karl Braun). In 1900, Reginald Fessenden was able to wirelessly transmit a human voice. On March 25, 1925, Scottish inventor John Logie Baird publicly demonstrated the transmission of moving silhouette pictures at the London department store Selfridges. In October 1925, Baird was successful in obtaining moving pictures with halftone shades, which were by most accounts the first true television pictures. This led to a public demonstration of the improved device on 26 January 1926, again at Selfridges. Baird's first devices relied upon the Nipkow disk and thus became known as mechanical television. They formed the basis of semi-experimental broadcasts by the British Broadcasting Corporation beginning September 30, 1929.
Satellite
The first U.S. satellite to relay communications was Project SCORE in 1958, which used a tape recorder to store and forward voice messages. It was used to send a Christmas greeting to the world from U.S. President Dwight D. Eisenhower. In 1960 NASA launched an Echo satellite; the aluminized PET film balloon served as a passive reflector for radio communications. Courier 1B, built by Philco and also launched in 1960, was the world's first active repeater satellite. Satellites are now used for many applications, including GPS, television, internet access, and telephony.
Telstar was the first active, direct relay commercial communications satellite. Belonging to AT&T as part of a multi-national agreement between AT&T, Bell Telephone Laboratories, NASA, the British General Post Office, and the French National PTT (Post Office) to develop satellite communications, it was launched by NASA from Cape Canaveral on July 10, 1962, the first privately sponsored space launch. Relay 1 was launched on December 13, 1962, and became the first satellite to broadcast across the Pacific on November 22, 1963.
The first and historically most important application for communication satellites was in intercontinental long-distance telephony. The fixed Public Switched Telephone Network relays telephone calls from landline telephones to an earth station, where they are then transmitted to a receiving satellite dish via a geostationary satellite in Earth orbit. Improvements in submarine communications cables through the use of fiber optics caused some decline in the use of satellites for fixed telephony in the late 20th century, but they still exclusively serve remote islands such as Ascension Island, Saint Helena, Diego Garcia, and Easter Island, where no submarine cables are in service. There are also some continents and some regions of countries where landline telecommunications are rare to nonexistent, for example Antarctica, plus large regions of Australia, South America, Africa, Northern Canada, China, Russia and Greenland.
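The altitude of a geostationary orbit, and the signal delay it imposes, follow directly from orbital mechanics; the worked figures below use standard textbook constants rather than values from this article. A satellite whose orbital period matches one sidereal day ($T \approx 86{,}164$ s) must circle at

$$ r = \left(\frac{GM_E T^2}{4\pi^2}\right)^{1/3} \approx 42{,}164\ \text{km}, $$

or roughly 35,786 km above the equator. A relayed telephone signal therefore travels at least $2 \times 35{,}786$ km up and back down, for a one-way latency of about $7.2\times10^{7}\ \text{m} / (3\times10^{8}\ \text{m/s}) \approx 0.24$ s; this quarter-second delay is one practical reason fiber-optic cables displaced geostationary satellites for fixed telephony wherever cables are available.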
After commercial long-distance telephone service was established via communication satellites, a host of other commercial telecommunications services were adapted to similar satellites starting in 1979, including mobile satellite phones, satellite radio, satellite television and satellite Internet access. The earliest adoption of most such services occurred in the 1990s, as the pricing of commercial satellite transponder channels continued to drop significantly.
Computer networks and the Internet
On 11 September 1940, George Stibitz was able to transmit problems using a teleprinter to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer or mainframe computer with remote "dumb terminals" remained popular throughout the 1950s and into the 1960s. However, it was not until the 1960s that researchers started to investigate packet switching — a technology that allows chunks of data to be sent between different computers without first passing through a centralized mainframe. A four-node network emerged on 5 December 1969. This network soon became the ARPANET, which by 1981 would consist of 213 nodes.
ARPANET's development centered around the Request for Comment process and on 7 April 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet, and many of the communication protocols that the Internet relies upon today were specified through the Request for Comment process. In September 1981, RFC 791 introduced the Internet Protocol version 4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today.
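As a concrete illustration of the TCP/IP protocols described above, the sketch below (not part of the original text) uses Python's standard socket library, whose constants map directly onto RFC 791 (IP) and RFC 793 (TCP); the host, port, and request are hypothetical, chosen only for the example.

```python
# Minimal sketch: open a TCP (RFC 793) connection over IPv4 (RFC 791)
# and exchange a few bytes. Endpoint and request are illustrative only.
import socket

HOST, PORT = "example.com", 80  # hypothetical endpoint

# create_connection resolves the host and performs the TCP handshake.
with socket.create_connection((HOST, PORT), timeout=5) as conn:
    # Send a minimal HTTP request so the server has something to answer.
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    reply = conn.recv(1024)  # read up to 1 KiB of the response
    print(reply.decode("ascii", errors="replace"))
```

Every byte exchanged here is carried in TCP segments inside IP packets, exactly the layering the two RFCs specified.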
Optical fiber
Optical fiber can be used as a medium for telecommunication and computer networking because it is flexible and can be bundled into cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters.
In 1966 Charles K. Kao and George Hockham proposed optical fibers at STC Laboratories (STL) at Harlow, England, when they showed that the losses of 1000 dB/km in existing glass (compared to 5–10 dB/km in coaxial cable) were due to contaminants, which could potentially be removed.
Optical fiber was successfully developed in 1970 by Corning Glass Works, with attenuation low enough for communication purposes (about 20 dB/km), and at the same time GaAs (gallium arsenide) semiconductor lasers were developed that were compact and therefore suitable for transmitting light through fiber optic cables for long distances.
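These attenuation figures can be put on a common footing with the standard fiber power-loss relation (a textbook formula, not one given in this article):

$$ P_{\text{out}} = P_{\text{in}} \cdot 10^{-\alpha L / 10}, $$

where $\alpha$ is the attenuation in dB/km and $L$ the span length in km. At Corning's 20 dB/km, a 1 km span delivers $10^{-2}$, or 1%, of the launched power, which is workable with sensitive receivers and periodic repeaters; at the 1000 dB/km of earlier glass, only $10^{-100}$ of the power would survive the same kilometre, which is why removing the contaminants was the decisive step.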
After a period of research starting in 1975, the first commercial fiber-optic communications system was developed, which operated at a wavelength around 0.8 μm and used GaAs semiconductor lasers. This first-generation system operated at a bit rate of 45 Mbit/s with repeater spacing of up to 10 km. Soon afterwards, on 22 April 1977, General Telephone and Electronics sent the first live telephone traffic through fiber optics at a 6 Mbit/s throughput in Long Beach, California.
The first wide area network fibre optic cable system in the world seems to have been installed by Rediffusion in Hastings, East Sussex, UK in 1978. The cables were placed in ducting throughout the town and served over 1,000 subscribers. They were used at that time for the transmission of television channels that were otherwise unavailable because of local reception problems.
The first transatlantic telephone cable to use optical fiber was TAT-8, which went into operation in 1988.
In the late 1990s through 2000, industry promoters and research companies such as KMI and RHK predicted massive increases in demand for communications bandwidth due to increased use of the Internet and the commercialization of various bandwidth-intensive consumer services, such as video on demand. Internet Protocol data traffic was indeed increasing exponentially, at a faster rate than integrated circuit complexity had increased under Moore's law.
Concepts
Basic elements of a telecommunication system
Transmitter
Transmitter (information source) that takes information and converts it to a signal for transmission. In electronics and telecommunications, a transmitter or radio transmitter is an electronic device which, with the aid of an antenna, produces radio waves. In addition to their use in broadcasting, transmitters are necessary components of many electronic devices that communicate by radio, such as cell phones.
Transmission medium
Transmission medium over which the signal is transmitted. For example, the transmission medium for sounds is usually air, but solids and liquids may also act as transmission media for sound. Many transmission media are used as communications channels. One of the most common physical media used in networking is copper wire, which is used to carry signals over long distances using relatively low amounts of power. Another example of a physical medium is optical fiber, which has emerged as the most commonly used transmission medium for long-distance communications. Optical fiber is a thin strand of glass that guides light along its length.
The absence of a material medium in vacuum may also constitute a transmission medium for electromagnetic waves such as light and radio waves.
Receiver
Receiver (information sink) that receives and converts the signal back into required information. In radio communications, a radio receiver is an electronic device that receives radio waves and converts the information carried by them to a usable form. It is used with an antenna. The information produced by the receiver may be in the form of sound (an audio signal), images (a video signal) or digital data.
Wired communication
Wired communications make use of underground communications cables (less often, overhead lines), electronic signal amplifiers (repeaters) inserted into connecting cables at specified points, and terminal apparatus of various types, depending on the type of wired communications used.
Wireless communication
Wireless communication involves the transmission of information over a distance without help of wires, cables or any other forms of electrical conductors. Wireless operations permit services, such as long-range communications, that are impossible or impractical to implement with the use of wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls etc.) which use some form of energy (e.g. radio waves, acoustic energy, etc.) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances.
Roles
Telecom equipment engineer
A telecom equipment engineer is an electronics engineer who designs equipment such as routers, switches, multiplexers, and other specialized computer/electronics equipment designed to be used in the telecommunication network infrastructure.
Network engineer
A network engineer is a computer engineer in charge of designing, deploying and maintaining computer networks. In addition, network engineers oversee network operations from a network operations center, design backbone infrastructure, and supervise interconnections in data centers.
Central-office engineer
A central-office engineer is responsible for designing and overseeing the implementation of telecommunications equipment in a central office (CO for short), also referred to as a wire center or telephone exchange. A CO engineer is responsible for integrating new technology into the existing network, assigning the equipment's location in the wire center, and providing power, clocking (for digital equipment), and alarm monitoring facilities for the new equipment. The CO engineer is also responsible for providing additional power, clocking, and alarm monitoring facilities if those currently available cannot support the new equipment being installed. Finally, the CO engineer is responsible for designing how the massive amounts of cable will be distributed to various equipment and wiring frames throughout the wire center, and for overseeing the installation and turn-up of all new equipment.
Sub-roles
As structural engineers, CO engineers are responsible for the structural design and placement of the racking and bays in which equipment is installed and on which plant is placed.
As electrical engineers, CO engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant, to ensure that telephone service is clear and crisp and that data service is clean and reliable. Attenuation (the gradual loss of signal intensity) and loop-loss calculations are required to determine the cable length and gauge needed to provide the service called for. In addition, power requirements have to be calculated and provided to power any electronic equipment being placed in the wire center.
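A rough sketch of the kind of loop calculation described above is given below; the per-kilometre resistances are nominal copper-conductor values and the 1,300-ohm loop limit is a commonly cited design budget, both used here as illustrative assumptions rather than figures from this article.

```python
# Illustrative loop-resistance check for plant design.
# Resistance values are nominal for solid copper conductors; the
# 1,300-ohm loop limit is an assumed supervision/signalling budget.

OHMS_PER_KM = {26: 134.0, 24: 84.2, 22: 52.9}  # per conductor, by AWG
LOOP_LIMIT_OHMS = 1300.0                       # assumed design limit

def loop_resistance(length_km: float, awg: int) -> float:
    """DC resistance of the full loop: out on one conductor, back on the other."""
    return 2.0 * length_km * OHMS_PER_KM[awg]

def max_loop_km(awg: int) -> float:
    """Longest loop that stays within the assumed design limit."""
    return LOOP_LIMIT_OHMS / (2.0 * OHMS_PER_KM[awg])

for awg in (26, 24, 22):
    print(f"{awg} AWG: {loop_resistance(3.0, awg):6.1f} ohms over 3 km; "
          f"limit reached at ~{max_loop_km(awg):.1f} km")
```

Heavier gauge (lower AWG number) roughly halves the per-kilometre resistance per step shown here, which is why gauge selection and loop length are designed together.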
Overall, CO engineers have seen new challenges emerging in the CO environment. With the advent of Data Centers, Internet Protocol (IP) facilities, cellular radio sites, and other emerging-technology equipment environments within telecommunication networks, it is important that a consistent set of established practices or requirements be implemented.
Installation suppliers or their sub-contractors are expected to provide requirements with their products, features, or services. These services might be associated with the installation of new or expanded equipment, as well as the removal of existing equipment.
Several other factors must be considered such as:
Regulations and safety in installation
Removal of hazardous material
Commonly used tools to perform installation and removal of equipment
Outside-plant engineer
Outside plant (OSP) engineers are also often called field engineers, because they frequently spend much time in the field taking notes about the civil environment, aerial, above ground, and below ground. OSP engineers are responsible for taking plant (copper, fiber, etc.) from a wire center to a distribution point or destination point directly. If a distribution point design is used, then a cross-connect box is placed in a strategic location to feed a determined distribution area.
The cross-connect box, also known as a serving area interface, is then installed to allow connections to be made more easily from the wire center to the destination point, and it ties up fewer facilities by avoiding dedicated facilities from the wire center to every destination point. The plant is then taken directly to its destination point, or to another small closure called a terminal, where access can also be gained to the plant if necessary. These access points are preferred as they allow faster repair times for customers and save telephone operating companies large amounts of money.
The plant facilities can be delivered via underground facilities, either direct buried or through conduit or in some cases laid under water, via aerial facilities such as telephone or power poles, or via microwave radio signals for long distances where either of the other two methods is too costly.
Sub-roles
As structural engineers, OSP engineers are responsible for the structural design and placement of cellular towers and telephone poles, as well as for calculating the pole capacities of existing telephone or power poles onto which new plant is being added. Structural calculations are required when boring under heavy-traffic areas such as highways or when attaching to other structures such as bridges. Shoring also has to be taken into consideration for larger trenches or pits. Conduit structures often include encasements of slurry that need to be designed to support the structure and withstand the surrounding environment (soil type, high-traffic areas, etc.).
As electrical engineers, OSP engineers are likewise responsible for the resistance, capacitance, and inductance (RCL) design of all new plant, to ensure that telephone service is clear and crisp and that data service is clean and reliable. Attenuation and loop-loss calculations are required to determine the cable length and gauge needed for the service called for, and power requirements have to be calculated and provided for any electronic equipment being placed in the field. Ground potential has to be taken into consideration when placing equipment, facilities, and plant in the field, to account for lightning strikes, high-voltage intercept from improperly grounded or broken power company facilities, and various sources of electromagnetic interference.
As civil engineers, OSP engineers are responsible for drafting plans, either by hand or using computer-aided design (CAD) software, for how telecom plant facilities will be placed. When working with municipalities, trenching or boring permits are often required, and drawings must be made for these; such drawings often include about 70% of the detailed information required to pave a road or add a turn lane to an existing street. In this capacity, telecom engineers provide the modern communications backbone for all technological communications distributed throughout civilization today.
Unique to telecom engineering is the use of air-core cable, which requires an extensive network of air-handling equipment such as compressors, manifolds, regulators and hundreds of miles of air pipe per system, connected to pressurized splice cases, all designed to pressurize this special form of copper cable, keep moisture out, and provide a clean signal to the customer.
As political and social ambassadors, OSP engineers are a telephone operating company's face and voice to local authorities and other utilities. OSP engineers often meet with municipalities, construction companies and other utility companies to address their concerns and to educate them about how the telephone utility works and operates. Additionally, the OSP engineer has to secure real estate in which to place outside facilities, such as an easement for a cross-connect box.
| Technology | Disciplines | null |
14610284 | https://en.wikipedia.org/wiki/Insect%20mouthparts | Insect mouthparts | Insects have mouthparts that may vary greatly across insect species, as they are adapted to particular modes of feeding. The earliest insects had chewing mouthparts. Most specialisation of mouthparts are for piercing and sucking, and this mode of feeding has evolved a number of times independently. For example, mosquitoes (which are true flies) and aphids (which are true bugs) both pierce and suck, though female mosquitoes feed on animal blood whereas aphids feed on plant fluids.
Evolution
Like most external features of arthropods, the mouthparts of Hexapoda are highly derived. Insect mouthparts show a multitude of different functional mechanisms across the wide diversity of insect species. It is common for significant homology to be conserved, with matching structures forming from matching primordia, and having the same evolutionary origin. However, even if structures are almost physically and functionally identical, they may not be homologous; their analogous functions and appearance might be the product of convergent evolution.
Chewing insects
Examples of chewing insects include dragonflies, grasshoppers and beetles. Some insects do not have chewing mouthparts as adults but chew solid food in their larval phase. The moths and butterflies are major examples of such adaptations.
Mandible
A chewing insect has a pair of mandibles, one on each side of the head. The mandibles are caudal to the labrum and anterior to the maxillae. Typically the mandibles are the largest and most robust mouthparts of a chewing insect, and it uses them to masticate (cut, tear, crush, chew) food items. Two sets of muscles move the mandibles in the coronal plane of the mouth: abductor muscles move insects' mandibles apart (laterally); adductor muscles bring them together (medially). They do this mainly in opening and closing their jaws in feeding, but also in using the mandibles as tools, or possibly in fighting.
In carnivorous chewing insects, the mandibles commonly are serrated and knife-like, often with piercing points. In herbivorous chewing insects, mandibles tend to be broader and flatter on their opposing faces, as for example in caterpillars.
In males of some species, such as of Lucanidae and some Cerambycidae, the mandibles are modified to such an extent that they do not serve any feeding function, but are instead used to defend mating sites from other males. In some ants and termites, the mandibles also serve a defensive function (particularly in soldier castes). In bull ants, the mandibles are elongate and toothed, used both as hunting and defensive appendages. In bees, that feed primarily by the use of a proboscis, the primary use of the mandibles is to manipulate and shape wax, and many paper wasps have mandibles adapted to scraping and ingesting wood fibres.
Maxilla
Situated beneath (caudal to) the mandibles, paired maxillae manipulate and, in chewing insects, partly masticate, food. Each maxilla consists of two parts, the proximal cardo (plural cardines), and distal stipes (plural stipites). At the apex of each stipes are two lobes, the inner lacinia and outer galea (plurals laciniae and galeae). At the outer margin, the typical galea is a cupped or scoop-like structure, located over the outer edge of the labium. In non-chewing insects, such as adult Lepidoptera, the maxillae may be drastically adapted to other functions.
Unlike the mandibles, but like the labium, the maxillae bear lateral palps on their stipites. These palps serve as organs of touch and taste in feeding and in the inspection of potential foods and/or prey.
In chewing insects, adductor and abductor muscles extend from inside the cranium to within the bases of the stipites and cardines much as happens with the mandibles in feeding, and also in using the maxillae as tools. To some extent the maxillae are more mobile than the mandibles, and the galeae, laciniae, and palps also can move up and down somewhat, in the sagittal plane, both in feeding and in working, for example in nest building by mud-dauber wasps.
Maxillae in most insects function partly like mandibles in feeding, but they are more mobile and less heavily sclerotised than mandibles, so they are more important in manipulating soft, liquid, or particulate food than in cutting or crushing material that requires the mandibles.
Like the mandibles, maxillae are innervated by the subesophageal ganglia.
Labium
The labium typically is a roughly quadrilateral structure, formed by paired, fused secondary maxillae. It is the major component of the floor of the mouth. Typically, together with the maxillae, the labium assists manipulation of food during mastication.
The role of the labium in some insects, however, is adapted to special functions; perhaps the most dramatic example is in the jaws of the nymphs of the Odonata, the dragonflies and damselflies. In these insects, the labium folds neatly beneath the head and thorax, but the insect can flick it out to snatch prey and bear it back to the head, where the chewing mouthparts can demolish it and swallow the particles.
The labium is attached at the rear end of the structure called the cibarium, and its broad basal portion is divided into regions called the submentum (the proximal part), the mentum (in the middle), and the prementum (the distal, most anterior section).
The prementum bears a structure called the ligula; this consists of an inner pair of lobes called glossae and a lateral pair called paraglossae. These structures are homologous to the lacinia and galea of maxillae. The labial palps borne on the sides of labium are the counterparts of maxillary palps. Like the maxillary palps, the labial palps aid sensory function in eating. In many species the musculature of the labium is much more complex than that of the other jaws, because in most, the ligula, palps and prementum all can be moved independently.
The labium is innervated by the subesophageal ganglia.
In the honey bee, the labium is elongated to form a tube and tongue, and these insects are classified as having both chewing and lapping mouthparts.
The wild silk moth (Bombyx mandarina) is an example of an insect that has small labial palpi and no maxillary palpi.
Hypopharynx
The hypopharynx is a somewhat globular structure, located medially to the mandibles and the maxillae. In many species it is membranous and associated with salivary glands. It assists in swallowing the food. The hypopharynx divides the oral cavity into two parts: the cibarium or dorsal food pouch and ventral salivarium into which the salivary duct opens.
Siphoning insects
This section deals only with insects that feed by sucking fluids, as a rule without piercing their food first and without sponging or licking. Typical examples are adult moths and butterflies. As is usually the case with insects, there are variations: some moths, such as species of Serrodes and Achaea, do pierce fruit, to the extent that they are regarded as serious orchard pests. Some moths do not feed after emerging from the pupa and have greatly reduced, vestigial mouthparts or none at all. All but a few adult Lepidoptera lack mandibles (the superfamily known as the mandibulate moths have fully developed mandibles as adults) and instead bear the remaining mouthparts in the form of an elongated sucking tube, the proboscis.
Proboscis
The proboscis, as seen in adult Lepidoptera, is one of the defining characteristics of the morphology of the order; it is a long tube formed by the paired galeae of the maxillae. Unlike sucking organs in other orders of insects, the Lepidopteran proboscis can coil up so completely that it can fit under the head when not in use. During feeding, however, it extends to reach the nectar of flowers or other fluids. In certain specialist pollinators, the proboscis may be several times the body length of the moth.
Piercing and sucking insects
A number of insect orders (or more precisely families within them) have mouthparts that pierce food items to enable sucking of internal fluids. Some are herbivorous, like aphids and leafhoppers, while others are carnivorous, like assassin bugs and female mosquitoes. Thrips, insects of the order Thysanoptera, have unique mouthparts in that they only develop the left mandible, making their mouthparts asymmetrical. Some consider thrips to have piercing-sucking mouthparts, but others describe them as rasping-sucking.
Stylets
In female mosquitoes, all mouthparts are elongated. The labium encloses all other mouthparts, the stylets, like a sheath. The labrum forms the main feeding tube, through which blood is sucked. The sharp tips of the labrum and maxillae pierce the host's skin. During piercing, the labium remains outside the food item's skin, folding away from the stylets. Saliva containing anticoagulants is injected into the food item and blood is sucked out, each through a different tube.
Proboscis
The defining feature of the order Hemiptera is the possession of mouthparts where the mandibles and maxillae are modified into a proboscis, sheathed within a modified labium, which is capable of piercing tissues and sucking out the liquids. For example, true bugs, such as shield bugs, feed on the fluids of plants. Predatory bugs such as assassin bugs have the same mouthparts, but they are used to pierce the cuticles of captured prey.
Sponging insects
Labellum
The housefly is a typical sponging insect. The labellum's surface is covered by minute food channels, known as pseudotracheae, formed by the interlocking elongate hypopharynx and epipharynx, which together form a proboscis used to channel liquid food to the oesophagus. The food channels draw liquid and liquified food to the oesophagus by capillary action. The housefly is able to eat solid food by secreting saliva and dabbing it over the food item; as the saliva dissolves the food, the solution is then drawn up into the mouth as a liquid.
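The capillary action mentioned here can be quantified with Jurin's law (a standard physics relation; the channel radius below is an assumed order of magnitude, not a measured value). The rise height of a liquid in a narrow channel of radius $r$ is

$$ h = \frac{2\gamma \cos\theta}{\rho g r}. $$

For a watery solution ($\gamma \approx 0.073$ N/m, $\rho \approx 1000$ kg/m³, $\theta \approx 0$) in a channel of radius $10\ \mu$m, this gives $h \approx 1.5$ m, so channels of micrometre bore generate far more capillary lift than the few millimetres needed to move dissolved food toward the oesophagus.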
| Biology and health sciences | Gastrointestinal tract | Biology |
7270635 | https://en.wikipedia.org/wiki/Yellow-throated%20marten | Yellow-throated marten | The yellow-throated marten (Martes flavigula) is a marten species native to the Himalayas, Southeast and East Asia. Its coat is bright yellow-golden, and its head and back are distinctly darker, blending together black, white, golden-yellow and brown. It is the second-largest marten in the Old World, after the Nilgiri marten, with its tail making up more than half its body length.
It is an omnivore, whose sources of food range from fruit and nectar to invertebrates, rodents, lagomorphs, reptiles and birds, and to small primates and ungulates. It is listed as Least Concern on the IUCN Red List due to its wide distribution, stable population, occurrence in a number of protected areas and an apparent lack of threats.
Characteristics
The yellow-throated marten has short bright brownish-yellow fur, a blackish brown pointed head, reddish cheeks, light brown chin and lower lips; the chest and lower part of the throat are orange-golden, and flanks and belly are bright yellowish. The back of the ears is black, the inner portions are yellowish grey. The front paws, lower forelimbs are black. The tail is black above with a greyish brown base and a lighter tip.
It is robust and muscular, has an elongated thorax, a long neck and a long tail, which is about 2/3 as long as its body. The limbs are relatively short and strong, with broad paws. The ears are large and broad with rounded tips. The soles of the feet are covered with coarse, flexible hairs, though the digital and foot pads are naked and the paws are weakly furred. The baculum is S-shaped, with four blunt processes occurring on the tip. It is larger than other Old World martens; males measure in body length, while females measure . Males weigh , while females weigh . The anal glands sport two unusual protuberances, which secrete a strong smelling liquid for defensive purposes.
Distribution and habitat
The yellow-throated marten occurs in Afghanistan and Pakistan, in the Himalayas of India, Nepal and Bhutan, continental southern China and Taiwan, the Korean Peninsula and eastern Russia. In the south, its range extends to Bangladesh, Myanmar, Thailand, the Malay Peninsula, Laos, Cambodia and Vietnam.
In Pakistan, it was recorded in Musk Deer National Park.
In Nepal's Kanchenjunga Conservation Area, it has been recorded up to an elevation of in alpine meadow.
In northeastern India, it has been reported in northern West Bengal, Arunachal Pradesh, Manipur and Assam. In Indonesia it occurs in Borneo, Sumatra, and Java.
Behaviour and ecology
The yellow-throated marten holds extensive, but not permanent, home ranges. It actively patrols its territory, having been known to cover in a single day and night. It primarily hunts on the ground, but can climb trees proficiently, being capable of making jumps up to between branches. After March snowfalls, the yellow-throated marten restricts its activities to treetops.
Diet
The yellow-throated marten is a diurnal hunter, which usually hunts in pairs, but may also hunt in packs of three or more. It preys on rats, mice, hares, snakes, lizards, eggs and ground nesting birds such as pheasants and francolins. It is reported to kill cats and poultry. It has been known to feed on human corpses, and was once thought to be able to attack an unarmed man in groups of three to four. It preys on small ungulates and smaller marten species, such as sables. In the Himalayas and Myanmar, it is reported to frequently kill muntjac fawns, while in Ussuriland the base of its diet consists of musk deer, particularly in winter. Two or three yellow-throated martens can consume a musk deer carcass in 2 to 3 days. It also kills the young of larger ungulate species within a weight range of , including young spotted deer, roe deer and goral. Wild boar piglets are also taken on occasion. It has also been reported to trail tigers and feed on their kills.
In China, it preys on giant panda cubs.
It supplements its diet with nectar and fruit, and is therefore considered to be an important seed disperser.
Reproduction
Estrus occurs twice a year, from mid-February to late March and from late June to early August. During these periods, the males fight each other for access to females. Litters typically consist of two or three kits and rarely four.
Predators
The yellow-throated marten has few predators, but occasionally may fall foul of larger carnivores; remains of sporadic individuals have turned up in the scat or stomachs of Siberian tigers (Panthera tigris) and Asian black bears (Ursus thibetanus). A mountain hawk-eagle (Nisaetus nipalensis) killed an adult yellow-throated marten.
Conservation
The yellow-throated marten is listed as Least Concern on the IUCN Red List due to its wide distribution and occurrence in protected areas across its range; the global population is stable, and threats are apparently lacking.
Taxonomy
The first written description of the yellow-throated marten in the Western World is given by Thomas Pennant in his History of Quadrupeds (1781), in which he named it "White-cheeked Weasel". Pieter Boddaert featured it in his Elenchus Animalium with the name Mustela flavigula. For a long period after the Elenchus' publication, the existence of the yellow-throated marten was considered doubtful by many zoologists, until a skin was presented to the Museum of the East India Company in 1824 by Thomas Hardwicke.
Nine subspecies are recognized.
| Biology and health sciences | Mustelidae | Animals |
7272384 | https://en.wikipedia.org/wiki/Neocallimastigomycota | Neocallimastigomycota | Neocallimastigomycota is a phylum containing anaerobic fungi, which are symbionts found in the digestive tracts of larger herbivores. Anaerobic fungi were originally placed within phylum Chytridiomycota, within Order Neocallimastigales but later raised to phylum level, a decision upheld by later phylogenetic reconstructions. It encompasses only one family.
Discovery
The fungi in Neocallimastigomycota were first recognised as fungi by Orpin in 1975, based on motile cells present in the rumen of sheep. Their zoospores had been observed much earlier but were believed to be flagellate protists, until Orpin demonstrated that they possessed a chitin cell wall. It has since been shown that they are fungi related to the core chytrids. Prior to this, the microbial population of the rumen was believed to consist only of bacteria and protozoa. Since their discovery they have been isolated from the digestive tracts of over 50 herbivores, including ruminant and non-ruminant (hindgut-fermenting) mammals and herbivorous reptiles.
Neocallimastigomycota have also been found in humans.
Circumscription
Reproduction and growth
These fungi reproduce in the rumen of ruminants through the formation of zoospores, which are released from sporangia. These zoospores bear a kinetosome but lack the nonflagellated centriole known in most chytrids, and the lineage is known to have used horizontal gene transfer to acquire xylanase (from bacteria) and other glucanases.
The nuclear envelopes of their cells are notable for remaining intact throughout mitosis. Sexual reproduction has not been observed in anaerobic fungi. However, they are known to be able to survive for many months in aerobic environments, a factor which is important in the colonisation of new hosts. In Anaeromyces, the presence of putative resting spores has been observed but the way in which these are formed and germinate remains unknown.
Metabolism
Neocallimastigomycota lack mitochondria but instead contain hydrogenosomes, in which NADH is oxidized to NAD+, leading to the formation of H2.
Polysaccharide-degrading activity
Neocallimastigomycota play an essential role in fibre digestion in their host species, and are present in large numbers in the digestive tracts of animals fed on high-fibre diets. The polysaccharide-degrading enzymes produced by anaerobic fungi can hydrolyse the most recalcitrant plant polymers and can degrade unlignified plant cell walls entirely. Orpinomyces sp., for instance, produced xylanase, CMCase, lichenase, amylase, β-xylosidase, β-glucosidase, α-L-arabinofuranosidase and minor amounts of β-cellobiosidase while utilizing Avicel as the sole energy source. The polysaccharide-degrading enzymes are organised into a multiprotein complex, similar to the bacterial cellulosome.
Spelling of name
The Greek termination "-mastix", referring to "whips", i.e. the many flagella on these fungi, is changed to "-mastig-" when combined with additional terminations in Latinized names. The family name Neocallimastigaceae was originally published incorrectly as "Neocallimasticaceae", which led to the coinage of the misspelled, hence incorrect, "Neocallimasticales", an easily forgiven error considering that other "-ix" endings, such as that of Salix, yield forms like Salicaceae. Correction of these names is mandated by the International Code of Botanical Nomenclature, Art. 60. The corrected spelling is used by Index Fungorum. Both spellings occur in the literature and on the WWW as a result of the spelling in the original publication.
| Biology and health sciences | Basics | Plants |
338336 | https://en.wikipedia.org/wiki/All-terrain%20vehicle | All-terrain vehicle | An all-terrain vehicle (ATV), also known as a light utility vehicle (LUV), a quad bike or quad (if it has four wheels), as defined by the American National Standards Institute (ANSI), is a vehicle that travels on low-pressure tires, has a seat that is straddled by the operator, and has handlebars, similar to a motorcycle. As the name implies, it is designed to handle a wider variety of terrain than most other vehicles. It is street-legal in some countries, but not in most states, territories and provinces of Australia, the United States, and Canada.
By the current ANSI definition, ATVs are intended for use by a single operator, but some ATVs, referred to as tandem ATVs, have been developed for use by the driver and one passenger.
The rider sits on and operates these vehicles like a motorcycle, but the extra wheels give more stability at slower speeds. Although most are equipped with three or four wheels, six-wheel and eight-wheel (tracked) models exist, and have existed historically, for specialized applications. Multiple-user analogues with side-by-side seating are called utility terrain vehicles (UTVs) or side-by-sides to distinguish the classes of vehicle. Both classes tend to have similar powertrain parts. Engine sizes of ATVs for sale in the United States as of 2008 ranged from .
History
19th century
Royal Enfield built and sold the first powered four-wheeler in 1893. It had many bicycle components, including handlebars. The Royal Enfield resembles a modern ATV-style quad bike but was designed as a form of horseless carriage for road use.
Six-wheeled AATVs
The term "ATV" was originally coined to refer to non-straddle ridden, typically six-wheeled, amphibious ATVs, such as the Jiger produced by the Jiger Corporation, the Amphicat produced by Mobility Unlimited Inc, and the Terra Tiger produced by the Allis-Chalmers Manufacturing Company in the mid-1960s and early 1970s. With the introduction of straddle ridden ATVs, the term AATV was introduced to define the original amphibious ATV category.
Three-wheeled ATVs
The first three-wheeled ATV was the Sperry-Rand Tricart. It was designed in 1967 as a graduate project of John Plessinger at the Cranbrook Academy of Arts near Detroit. The Tricart was straddle-ridden with a sit-in rather than sit-on style (similar to the contemporaneous Big Wheel toy). In 1968 Plessinger sold the Tricart patents and design rights to Sperry-Rand New Holland who manufactured them commercially. Numerous small American manufacturers of 3-wheelers followed. These small manufacturers were unable to compete when larger motorcycle companies like Honda entered the market in 1969.
Honda introduced their first sit-on straddle-ridden three-wheeled all-terrain vehicle in 1969, known as a US90, as a 1970 Model. Variations would be popularized in the James Bond movie, Diamonds Are Forever and TV shows such as Doctor Who, Magnum, P.I. and Hart to Hart. In 1973, Honda trademarked the term "All Terrain Cycle" (ATC), applying it to all Honda's three-wheeled ATVs; it became a universal name associated with all vehicles of this type. It was directly influenced by earlier 6-wheeled AATVs of the sixties, and utilized balloon tires for both a low environmental impact and to compensate for a lack of mechanical suspension.
Honda entered the 1980s with a virtual monopoly in the market, due to effective patents on design and engine placement. By 1980, other companies paid patent royalties to Honda to enter the lucrative ATC field with their own machines. Yamaha introduced their first ATC, the Tri-Moto YZ125. Kawasaki followed suit the next year with the KLT200, while Suzuki produced their first model, the ALT125, in 1982. As the popularity of ATCs increased dramatically, rapid development ensued. The ability to go anywhere on terrain that most other vehicles could not cross soon made them popular with US and Canadian hunters. As other manufacturers were entering the market, Honda was diversifying, offering the ATC250R, the first Sport ATC intended for competition, in 1981. The 1982 Honda ATC200E Big Red was a landmark model. It featured both suspension and racks, making it the first ATC designed specifically for utility, and would become the world's best-selling ATC. Honda followed that effort in 1983 with the ATC200X, an easy-to-handle four-stroke Sport ATC that was ideal for new riders.
Not to be outdone, Kawasaki and Yamaha responded with their own Sport ATCs. 1984 saw the release of the Kawasaki KXT250 Tecate, and Yamaha followed in 1985 with the Tri-Z 250. Both were liquid-cooled 250 cc two-strokes capable of giving the Honda ATC250R competition. In response to the growing market, American specialty manufacturer Tiger also introduced a series of ATCs, hand-built-to-order models that included the Tiger 500, the largest-displacement ATC produced commercially. While Kawasaki and Yamaha both produced utility ATCs, famously the KLT 250 Police and the Yamahauler respectively, Suzuki turned its attention to building Sport Quads.
Honda continued to diversify its line-up (at peak offering 10 distinct models), releasing the larger, fully suspended 250 cc Big Red utility ATC, and introducing the 350X Sport ATC, its largest-displacement machine, in 1985. But the bulk of its sales were the 200 cc line, with six models offered and over 500,000 units sold in three years. Honda's response to the Tecate and Tri-Z, the liquid-cooled 1985 and 1986 ATC250R, remains one of the most desirable ATCs of the era, and aftermarket support still follows the machine.
U.S. manufacturers
Main articles: Tiger ATV LTD and Polaris Scrambler 250R/es
American-based manufacturers also produced ATCs in this period, albeit in small numbers. Polaris offered the Scrambler in 1985 and 1986, producing approximately 1,600 units. Specialty manufacturer Tiger ATV also produced a range of ATCs, but its liquidation in 1991 left no official record of how many units were produced; collector-market estimates vary drastically, from 300 to as many as 1,000 units of total production. Tiger ATCs were offered for three years, with models using two-stroke engines provided by KTM and Rotax.
The Tiger 500 is notable for being the fastest consumer ATC available, with tested top speeds of more than 80 mph from the stock engine (peaking at 6,500 rpm) and five-speed gearbox. With final-drive gearing changes, the ATC could exceed 100 mph. However, due to the rarity of the machines, the brand never became well known, and because all Tiger models were custom-ordered and built to the buyer's specifications for factory ATV racing, Polaris is generally known as the first American 'production' ATV producer.
Production pause
Production of three-wheelers was voluntarily ceased by all manufacturers by 1987, in light of safety concerns and ahead of any legislation. Though later studies showed that three-wheelers were not more unstable than four-wheelers (accidents are equally severe in both classes), manufacturers agreed to a 10-year moratorium on production and to collectively finance an ATV safety campaign costing more than US$100 million. Despite the moratorium's expiry, manufacturers did not return to the ATC market, focusing instead on four-wheeled ATVs. The American Academy of Pediatrics proposed a ban on sales of new or used three-wheelers and a recall of all remaining three-wheelers.
These safety issues with three-wheel ATCs caused a shift in the buying public, as sales of later four-wheel ATV models grew rapidly. While three-wheel models ended production in 1987, agreements between the major manufacturers and the U.S. Consumer Product Safety Commission to officially cease production and to finance safety campaigns moved forward. While the lighter weight of ATCs had made them popular with certain riders, manufacturers continued to focus on four-wheeled ATV production.
US Public Law 110-314
US Public Law 110-314, enacted on August 8, 2008, separated ATCs from existing new-production ATV safety standards and requires new standards for three-wheeled ATCs to be drafted. This effectively suspended the importation of three-wheeled ATCs until new safety standards can be drafted; as of 2020, no such standards have been drafted. While search engines can find informal information suggesting that major Japanese manufacturers pressed for this measure due to an influx of inexpensive Chinese ATCs in the American market, no official documentation or cited sources support these claims. Currently, all manufacturers not based in the US are restricted from the manufacture and sale of new three-wheeled all-terrain cycles until safety standards can be implemented. The ATC ban is effected by Section 232 of the law.
Four-wheeled ATVs (since 1980)
Suzuki was a leader in the development of mass production four-wheeled ATVs. It sold the first model, the 1982 QuadRunner LT125, which was a recreational machine for beginners.
Adventure Vehicles of Monroe, Louisiana made the first quad ATV in 1980. They called it the Avenger 400. Prior to that, Adventure Vehicles made 3 wheel ATVs and a dump body utility 3 wheeler using Kohler 8 hp engines and Comet drive systems (Comet centrifugal belt-driven clutch, and a Comet forward, neutral, reverse transaxle, with a rigid rear axle or rear differential option.) The Avenger 400 was a rigid suspension vehicle with a fiberglass body and welded tube construction. It was a rudimentary vehicle reminiscent of the Tote Gote of the 1960s.
Suzuki sold the first four-wheeled mini ATV, the LT50, from 1984 to 1987. After the LT50, Suzuki sold the first ATV with a CVT transmission, the LT80, from 1987 to 2006.
In 1985 Suzuki introduced to the industry the first high-performance four-wheel ATV, the Suzuki LT250R QuadRacer. This machine was in production for the 1985–1992 model years. During its production run, it underwent three major engineering makeovers, but the core features were retained: a sophisticated long-travel suspension, a liquid-cooled two-stroke motor, and a fully manual transmission with five speeds (1985–1986 models) or six speeds ('87–'92 models). It was a machine exclusively designed for racing by highly skilled riders.
Honda responded a year later with the FourTrax TRX250R—a machine that has not been replicated until recently. It currently remains a trophy winner and competitor to big-bore ATVs. Kawasaki responded with its Tecate-4 250. The TRX250R was very similar to the ATC250R it eventually replaced and is often considered one of the greatest sport ATVs ever built.
In 1987, Yamaha Motor Company introduced a different type of high-performance machine, the Banshee 350, which featured a twin-cylinder liquid-cooled two-stroke motor from the RD350LC street motorcycle. Heavier and more difficult to ride in the dirt than the 250s, the Banshee became popular with sand dune riders thanks to its unique power delivery. It remains popular, but 2006 was the last year it was available in the U.S. (due to EPA emissions regulations); it remained available in Canada until 2008 and in Australia until 2012. The Warrior 350, also introduced in 1987, remained in production for years as a light, fast ATV.
Shortly after the introduction of the Banshee in 1987, Suzuki released the LT500R QuadRacer, powered by a 500 cc liquid-cooled two-stroke engine with a five-speed transmission. The machine's remarkable speed and size earned it the nickname "Quadzilla". While there are claims of stock Quadzillas exceeding 100 mph, 3&4 Wheel Action magazine officially recorded its top speed in a high-speed shootout in its June 1988 issue, making it the fastest production four-wheeled ATV ever produced. Suzuki discontinued production of the LT500R in 1990 after just four years.
At the same time, the development of utility ATVs was rapidly escalating. The 1986 Honda FourTrax TRX350 4x4 ushered in the era of four-wheel-drive ATVs. Other manufacturers quickly followed suit, and 4x4s have remained the most popular type of ATV ever since. These machines are popular with hunters, farmers, ranchers, and workers at construction sites.
Models are divided into the sport and utility markets. Sport models are generally small, light, two-wheel-drive vehicles that accelerate quickly, have a manual transmission, and run at speeds up to approximately . Utility models are generally bigger four-wheel-drive vehicles with a maximum speed of up to approximately . They can haul small loads on attached racks or small dump beds, and tow small trailers. Due to the different weights, each has advantages on different types of terrain. A popular model is Yamaha's Raptor 700, which has a nearly 700 cc four-stroke engine.
Six-wheel models often have a small dump bed, with an extra set of wheels at the back to increase the payload capacity. They can be either four-wheel drive (only the rear wheels driven) or six-wheel drive.
In 2011 LandFighter was founded as "the first Dutch/European ATV brand". Most production takes place in Taiwan, to European standards, with final assembly in the Netherlands.
Safety and legal regulation
Safety helmets, mandatory in some jurisdictions, provide the rider with some protection in the case of an accident.
Three-wheel vehicles
Safety courses and educational literature reduced the number and severity of accidents among ATC and ATV riders. Because cornering is more challenging on a three-wheel ATC than on a four-wheeled machine, properly leaning into the turn is required to counterbalance the weight and keep the machine stable; careless operators may roll over at high speeds. The lighter front end and smaller footprint of ATCs present both a flipping and a steering hazard under acceleration and on inclines, and lateral rollovers may also occur when traversing steep inclines. ATCs require unique riding techniques: the turning lean must be more exaggerated than on ATVs, and throttle steering is commonly used in soft terrain and at high speeds, with the rider leaning to the inside of the turn and manipulating the throttle to break traction with the rear tires, turning the machine on its axis while maintaining a forward direction.
Four-wheel vehicles
Four-wheel quad cycles have a tendency to roll over, often injuring or killing the rider. Suitably designed roll bars can be fitted, which do not prevent the cycle from overturning, but prop it up, providing some protection for the rider.
Studies and protective equipment
Safety has been a major issue with ATVs due to the many deaths and injuries associated with them and the lack of protection due to the absence of a rigid cab.
Modern ATVs were introduced in the early 1970s, and alarming injury rates for children and adolescents appeared almost immediately. Based on analysis of the National Trauma Data Bank, ATVs are more dangerous than dirt bikes, possibly due to crush injuries and failure to wear safety gear such as helmets, and they are as dangerous as motorcycles, based on mortality and injury scores. Children and women, who are less likely to wear helmets, are injured on ATVs in disproportionate numbers.
Many common injuries can be prevented with the use of proper protective equipment. Most ATV manufacturers recommend at least a suitable DOT-approved helmet, protective eyewear, gloves and suitable riding boots for all riding conditions. Sport or aggressive riders, or riders on challenging terrain (such as those rock crawling or hillclimbing), may opt for a motocross-style chest protector and knee/shin guards for further protection. Use of tires suited to a particular terrain can also play a vital role in preventing injuries. Fatal accidents typically occur when the vehicle rolls over. Wearing a helmet can reduce the risk of death by 42% and the risk of nonfatal head injury by 64%.
United States
In the United States, statistics released by the Consumer Product Safety Commission (CPSC) show that from 2016 to 2020 an estimated 526,900 emergency-department-treated injuries were associated with off-highway vehicles (an annual average of 105,400), and up to half of these injuries involved children and youth under age 25. From 2016 to 2018, 1,591 people died in ATV-associated incidents, 28% of them children and youth under age 25. Focus has shifted to machine size, balanced against usage, with ATVs categorized by age range and engine displacement in line with the consent decrees. ATVs must bear a manufacturer's label stating that use of machines greater than 90 cc by riders under the age of 12 is prohibited; this is a manufacturer/CPSC recommendation and not necessarily state law.
The American Academy of Pediatrics and the CPSC have recommended that children under the age of 16 not ride ATVs. In the United States, about 40,000 children under age 16 are treated in emergency departments for ATV-related injuries each year. A Canadian study stated that the "associated injury patterns, severity, and costs to the healthcare system" of pediatric injuries associated with ATVs resemble those caused by motor vehicles, and that public policies should reflect this. Helmets are underused, and Glasgow Coma Scale scores in children presenting after ATV accidents are similar to those seen after motorcycle accidents.
The Consumer Product Safety Commission met in March 2005 to discuss the dangers of ATVs. Data from 2004 showed 44,000 injuries and almost 150 fatalities in children while riding ATVs. In response to calls for further regulation, the CPSC's director of compliance, John Gibson Mullan, said that because the statistics were not rising, existing measures were working. The New York Times reported an accusation from a staff member that Mullan, who had previously worked as a lawyer for the ATV industry, had distorted the statistics and prevented further debate.
The United States government maintains a website about the safety of ATVs where safety tips are provided, such as not driving ATVs with a passenger (passengers make it difficult or impossible for the driver to shift their weight, as required to drive an ATV) or not driving ATVs on paved roads (ATVs usually have a solid rear axle with no differential).
In 1988, the All-Terrain Vehicle Safety Institute (ASI) was formed to provide training and education for ATV riders. The cost of attending the training is minimal, and it is free for purchasers of new machines that fall within the correct age and size guidelines. In many states, successful completion of a safety training class is a minimum requirement for minors to be granted permission to ride on public land. Some states have had to implement their own safety training programs, as the ASI program cannot accept riders whose ATVs fall outside the age and size guidelines but may still be legal under state law.
Effective January 1, 2019, the United States Consumer Product Safety Commission updated ATV lighting requirements, requiring all categories of ATVs to be equipped with a stop lamp and side reflectors similar to those required on passenger cars.
Among industries, agricultural workers are disproportionately at risk of ATV accidents. Most fatalities occur in white men over the age of 55.
Street legality
ATVs can be made officially street legal, and can be used on at least certain roads, in 21 states; requirements vary by state, however.
United Kingdom
A "quad" is recognised by UK law as a vehicle with four wheels and a mass of less than . A quad cycle to be used on a public road in the UK must be taxed, insured and registered, and the driver must have a category B (car) or B1 (motor vehicles with 4 wheels up to 400 kg unladen or 550 kg if designed for carrying goods) licence.
In the United Kingdom, the safety issues of cars classed as quad cycles are illustrated by the case of the G-Wiz (REVAi). The electric microcar was given a Euro NCAP specification test, and the results showed that the vehicle's occupants would suffer "serious or life-threatening" injuries in a crash.
The UK Department for Transport concluded that there were serious safety concerns when the REVA was crashed at .
Australia
After consultation with stakeholders including farmers and quad cycle manufacturers, Australia's Heads of Workplace Safety Authorities (HWSA) in 2011 released a strategy intended to reduce the number of deaths and serious injuries associated with quad-bike use. The development of the report was closely followed by The Weekly Times newspaper and by ABC television, which reviewed the issue through its 7.30 program. Apart from encouraging standard safety measures such as helmet-wearing, the strategy also recommended the development of a national training curriculum, point-of-sale material for purchasers and, controversially, that owners consider fitting an after-market anti-crush device which may offer added protection in the event of a rollover.
When the report was released, the only model of anti-crush protection on the market was the Australian-made "Quad bar", which was vigorously opposed by the industry through media activity and a poster campaign at regional events for farmers, which are often used to showcase new products. The industry argued that the device had not been properly tested and that past studies of tractor-style ROPS, such as a full-frame 'cage' around the operator, showed they were not only ineffective but could add to the risk of injury or death.
In February 2012, the Melbourne-based Institute for Safety, Compensation and Recovery Research (ISCRR) published a paper which criticised the research claims of the manufacturers in relation to crush protection devices. The paper reviewed research in a number of countries since 1993 in relation to rollover protection and found that the industry's opposition to rollover protection could not be supported because of limitations in past research. It recommended further research on the topic and the development of research tools based on the use of ATV/quadbikes in Australian conditions.
Canada
In most provinces, ATV users are required to register their vehicle and have a designated off-highway license plate. Some provinces, like Nova Scotia, recently moved to allow ATVs limited road shoulder usage for the purpose of travelling from one trailhead to another.
Germany
In Germany the legal situation is relatively permissive, but complex.
Street legality and registration
Almost any ATV type-registered with the KBA (Kraftfahrt-Bundesamt) can be registered for road use in Germany. Vehicle tax, insurance and a number plate, as well as a periodic inspection (TÜV, the German equivalent of the MOT), are required.
Quads can be registered in two different ways in Germany. Usually they are taxed and insured as a regular automobile, with the tax calculated from emissions and displacement in 100 cc steps. ATVs registered as automobiles must be restricted to a power output of 20 hp (15 kW) and may be driven with a passenger, provided a passenger seat is registered in the vehicle papers.
The quad must have at least one rear-view mirror on the left side, measuring at least 10 × 5 cm; a right-side mirror is optional.
The vehicle must have a high-/low-beam headlight, brake light, indicators, a number plate mount on front and back, and a horn.
ATVs under 400 kg empty weight do not need a reverse gear; above that weight, a reverse gear and reversing light are required.
The maximum engine noise restrictions depend on date of first registration and engine displacement.
Alternatively, ATVs can be registered as agricultural and silvicultural vehicles (LoF, Land- oder Forstwirtschaftlich), giving the owner some benefits: the quad can be driven with a power output of more than 20 hp, and the tax is much cheaper, being calculated from empty weight. The insurance cost is also much lower than for other agricultural vehicles and ATVs.
However, there are some restrictions and requirements for registering ATVs as agricultural vehicles:
the ATV may never be driven with a passenger, even if a passenger seat is available
in addition to the street registration requirements, it needs:
additional hazard flashers
a rear fog light
a minimum of 2 headlights
a trailer coupling, including an electric kit for trailer lighting
a reverse gear, even under 400 kg empty weight
Customization
Custom builds and engine swaps can be made street legal by undergoing a single-vehicle acceptance procedure from the TÜV. As a result, some street-legal custom quads popularly sport four-stroke motorcycle engines; a common example is a Yamaha Raptor 700 converted to a 1000 cc engine from the early Yamaha Fazer and R1.
Driving license
ATVs are mostly treated as regular automobiles in Germany, which means no special-vehicle or motorcycle licence is needed; the regular class B driving licence (multiple-track motorised vehicles up to 3.5 tonnes) is sufficient, even for LoF-registered vehicles. Consequently, until 2013, quads could be driven only by licence holders at least 18 years old. People under 18 may hold a 50 cc or 125 cc motorcycle licence, but this does not allow them to drive quad bikes.
In 2013 the class AM licence was introduced, allowing 16-year-olds to drive microcars that do not exceed a speed of 50 km/h (such as the infamous Ellenator); this permits the use of an ATV limited to 50 cc with a top speed not exceeding 45 km/h.
Special restrictions
As quad bikes were treated as automobiles, wearing a helmet was not required until January 2006, when helmets became required for ATVs, three-wheelers, trikes, etc. No additional protective gear is required.
Officially, a quad's driver must always carry a hazard triangle and a first-aid kit, plus a reflective vest if the quad is registered as an agricultural vehicle. Due to the lack of storage room, police usually do not check the back of the vehicle, but drivers lacking the required equipment may be prosecuted.
Environmental issues
Emissions
ATVs accounted for 58% of spark-ignited (SI) recreational vehicles in the US in the year 2000. That year, recreational SI vehicles produced 0.16% of NOx, 8% of HC, 5% of CO and 0.8% of PM emissions from all vehicles, both highway and nonroad. As a point of comparison, the nonroad SI < 19 kW (~25 hp) category (small spark-ignition engines such as lawnmowers) accounted for 20% of total HC and 23% of total CO emissions. Although recreational SI vehicles produce an aggregate of under 4% of all HC emissions in the US, owing to the relatively small population of ATVs (under 1.2 million) and low annual usage (under 350 hours), EPA emission regulations now cover such engines, starting with the 2006 model year. Engines meeting these standards now produce only 3% of the HC emissions that previously unregulated engines did.
Terrain damage
While the deep treads on some ATV tires are effective for navigating rocky, muddy and root covered terrain, these treads are also capable of digging channels that may drain bogs, increase sedimentation in streams at crossings and damage groomed snowmobile trails. Proper trail construction techniques can mitigate these effects.
In some countries where fencing is not common, such as the US, Canada and Australia, some ATV riders knowingly cross privately owned property in rural areas, travelling over public or private land even though riding is permitted only on trails. Consequently, environmentalists criticize ATV riding for excessive use of areas that biologists consider sensitive, especially wetlands and sand dunes, and of much of inland Australia.
Because both scientific studies and U.S. National Forest Service personnel have identified unregulated off-road vehicles (ORVs) as the source of major detrimental impacts on national forests, the U.S. Forest Service is currently engaged in the Travel Management Process, wherein individual forests are restricting all off-road motorized travel to approved trails and roads. This is in contrast to the previously allowed, unregulated cross-country travel across all national forest lands except specifically designated wilderness areas. Although ORVs had been identified as a threat to wild ecosystems by the Forest Service 30 years ago, action was taken only after pressure by an unlikely alliance of environmentalists, private landowners, hunters, ranchers, fishermen, quiet recreationists and forest rangers themselves (who identified ORVs as a "significant law enforcement problem" in national forests).
Other uses
ATVs using tracks instead of wheels are used at France's Cap Prudhomme in Antarctica.
ATVs are also used in agriculture, combining some of the advantages of trucks and tractors.
They are used in a variety of industries for their maneuverability and off-roading ability. These include:
border control
construction
emergency medical services
land management
law enforcement
military
mineral exploration
oil exploration
pipeline transport
search and rescue
forestry
surveying
wild land fire control
Sport competition
Sport models are built with performance, rather than utility, in mind. To be successful at fast trail riding, an ATV must have light weight, high power, good suspension and a low center of gravity. These machines can be modified for such racing disciplines as motocross, woods racing (also known as cross country), desert racing (also known as Hare Scrambles), hill climbing, ice racing, speedway, Tourist Trophy (TT), flat track, drag racing and others.
Throughout the United States and the United Kingdom there are many quad racing clubs with enduro and quadcross sections. GNCC Racing began around 1980 and includes hare scramble and enduro type races; to date, events are held mainly in the eastern part of the United States. GNCC racing features many types of obstacles, such as hill climbs, creek and log crossings, dirt roads and wooded trails.
The ATV National Motocross Championship (ATVMX) was formed around 1985, with events hosted at premier motocross racetracks throughout the United States. ATVMX consists of several groups, including the Pro (AMA Pro) and Amateur (ATVA) series. Fridays involve amateur practice, with racing on Saturday and Sunday; Saturday also includes racing for the Pro Am Women and Pro Am Unlimited classes, and Sunday includes racing for the Pro and Pro Am production ATVs, which are scored separately. On an average weekend, over 500 racers compete.
The FIM organizes the Quadcross of Nations at the end of the year. The competition involves teams of three riders representing their nations. There are three motos with two riders of each nation competing per moto. The location of the event changes from year to year.
Championship Mud Racing (CMR) had its infancy in 2006, when leaders of the ATV industry recognized a need for uniform classes and rules across various local mud bog events. Standardized rules created the need for a governing body that both racers and event promoters could turn to, and CMR was born. Once unified, a true points series was established, leading to a national championship for what had once been little more than a hobby for most. The finalized board of directors was established in 2007, and the first races were held in 2008. Currently, the CMR schedule includes eight competition dates spanning March to November, with points awarded throughout the season in several competition classes of ATV and SxS mud racing. The 2008 season included both Mud Bog and Mudda-Cross competitions, but from 2009 onward only Mudda-Cross competitions are held. Classes range from 0–499 cc to a Super-Modified class that allows ATVs of any size. The ultimate goal of CMR is "to see the growth of ATV Mud Racing as a competitive sport and give competitors a pedestal upon which they can receive the recognition from national media and industry sponsors that they have long deserved."
In 2005 the FIM Cross-Country Rallies World Championship introduced a quad championship, and the Dakar Rally added the quad category in 2008. Because the 2008 Dakar Rally was cancelled, the 2009 edition was the first Dakar Rally actually run with quads.
Amateur and professional three-wheeler racing across the United States has spiked in popularity once again, to levels not seen since the factory teams raced in the 1980s. Part of the appeal is the low cost of parts and how easy the sport is to get into. Races are held at various local and large venues, particularly in Ohio, New York, Pennsylvania, Arizona, Michigan and California, and payouts are sometimes awarded to winners.
Each year in June, the world's biggest three-wheeler gathering, Trikefest, is held at Haspin Acres in Laurel, Indiana. Over the course of three days, complete with camping, hundreds of people gather for the event, which features competitive racing such as MX-style racing, drag racing, mud racing, hill climbs and other events; for those who do not wish to compete, there are also many trails to ride. As many as 100 or more three-wheelers show up each year, some built and restored to be raffled off, others brought to ride.
The fastest speed recorded on a quad cycle, or ATV given a flying start, is 315.74 km/h (196.19 mph), by Terry Wilmeth (USA), at the Madras Airport in Madras, Oregon, USA, on 15 June 2008.
| Technology | Motorized road transport | null |
338407 | https://en.wikipedia.org/wiki/Longline%20fishing | Longline fishing | Longline fishing, or longlining, is a commercial fishing angling technique that uses a long main line with baited hooks attached at intervals via short branch lines called snoods or gangions. A snood is attached to the main line using a clip or swivel, with the hook at the other end. Longlines are classified mainly by where they are placed in the water column. This can be at the surface or at the bottom. Lines can also be set by means of an anchor, or left to drift. Hundreds or even thousands of baited hooks can hang from a single line. This can lead to many deaths of different marine species (see bycatch). Longliners – fishing vessels rigged for longlining – commonly target swordfish, tuna, halibut, sablefish and many other species.
In some unstable fisheries, such as the Patagonian toothfish, fishermen may be limited to as few as 25 hooks per line. In contrast, commercial longliners in certain robust fisheries of the Bering Sea and North Pacific generally run over 2,500 hand-baited hooks on a single series of connected lines many miles in length.
Longlines can be set to hang near the surface (pelagic longline) to catch fish such as tuna and swordfish or along the sea floor (demersal longline) for groundfish such as halibut or cod. Longliners fishing for sablefish, also referred to as black cod, occasionally set gear on the sea floor at depths exceeding using relatively simple equipment. Longlines with traps attached rather than hooks can be used for crab fishing in deep waters.
Longline fishing is prone to the incidental catching and killing of dolphins, seabirds, sea turtles, and sharks, but less so than deep sea trawling.
In Hawaii, where Japanese immigrants introduced longlining in 1917, longline fishing was known as flagline fishing because of the use of flags to mark floats from which hooks were suspended. The term "flagline fishing" persisted until local fishing vessels began to use modern monofilament mainline, line setters, and large, hydraulically powered reels, when the term "longline fishing" was adopted.
Incidental catch
Longline fishing is controversial because of bycatch, fish caught while seeking another species or immature juveniles of the target species. This can cause many issues, such as the killing of many other marine animals while seeking certain commercial fish. Seabirds can be particularly vulnerable during the setting of the line.
Methods to mitigate incidental mortality have succeeded in some fisheries. Mitigation techniques include the use of weights to ensure the lines sink quickly, the deployment of streamer lines to scare away birds, lasers, setting lines only at night in low light (to avoid attracting birds), limiting fishing seasons to the southern winter (when most seabirds are not feeding young), and not discharging offal while setting lines.
The Hawaii-based longline fishery for swordfish was closed in 2000 over concerns of excessive sea turtle by-catch, particularly loggerhead sea turtles and leatherback turtles. Changes to the management rules allowed the fishery to reopen in 2004. Gear modification, particularly a change to large circle-hooks and mackerel-type baits, eliminated much of the sea turtle by-catch associated with the fishing technique. It has been claimed that one consequence of the closure was that 70 Hawaii-based vessels were replaced by 1,500–1,700 longline vessels from various Asian nations, but this is not based on any reliable data. Due to poor and often non-existent catch documentation by these vessels, the number of sea turtles and albatross caught by them between 2000 and 2004 will never be known. Hawaii longline fishing for swordfish closed again on 17 March 2006, when the by-catch limit of 17 loggerhead turtles was reached. In 2010 the by-catch limit for loggerhead turtles was raised, but it was restored to the former limit as a result of litigation. The Hawaii-based longline fisheries for tuna and swordfish are managed under slightly different sets of rules. The tuna fishery is one of the best-managed fisheries in the world according to the UN Code of Responsible Fishing, but it has been criticized by others as responsible for continuing by-catch of false killer whales, seabirds, and other nontargeted wildlife, as well as for placing pressure on depleted bigeye tuna stocks.
Commercial longline fishing is also one of the main threats to albatrosses, posing a particularly serious threat to their survival. Of the 22 albatross species recognized by the IUCN Red List, 15 are threatened with extinction: the IUCN lists two species as Critically Endangered (Tristan albatross and waved albatross), seven as Endangered, and six as Vulnerable. Albatrosses and other seabirds, which readily feed on offal, are attracted to the set bait, become hooked on the lines and drown; an estimated 8,000 albatrosses are killed in this way each year. These activities, however, are not randomly spread across the vast oceans but are highly spatially concentrated. The bird conservation lobby should therefore work closely with regional fisheries management organizations to devise and implement targeted interventions aimed at reducing potential illegal longline fishing, which, in turn, is likely to have positive effects on albatrosses. A simple device that can be fitted onto longlines, known as the Hookpod, has been proposed to mitigate seabird bycatch; after a change in regulations in January 2020, Hookpods were rolled out to a total of 15 commercial fishing vessels in New Zealand, with zero seabird bycatch recorded in the first six months.
Microplastics
Oceanic microplastic pollution is largely caused by plastic fishing gear, such as longline equipment and drift nets, that is worn down by use, lost, or thrown away.
Safety
In the US, a study found that the risk of non-fatal injuries was 35 per 1,000 full-time-equivalent employees, about three times higher than for the average U.S. worker (compared with 43 per 1,000 in the trawler fleet).
Historic images
| Technology | Hunting and fishing | null |
338454 | https://en.wikipedia.org/wiki/Arboretum | Arboretum | An arboretum (plural: arboreta) is a botanical collection composed exclusively of trees and shrubs of a variety of species. Originally mostly created as a section in a larger garden or park for specimens of mostly non-local species, many modern arboreta are in botanical gardens as living collections of woody plants and are intended at least in part for scientific study.
In Latin, an arboretum is a place planted with trees, not necessarily in this specific sense, and "arboretum" as an English word is first recorded used by John Claudius Loudon in 1833 in The Gardener's Magazine, but the concept was already long-established by then.
An arboretum specializing in growing conifers is known as a pinetum. Other specialist arboreta include saliceta (willows), populeta (poplar), and querceta (oaks). Related collections include a fruticetum, from the Latin frutex, meaning shrub, much more often a shrubbery, and a viticetum (from the Latin vitis, meaning vine, referring in particular to a grape vine). A palm house is a large greenhouse for palms and other tender trees.
History
Egyptian pharaohs planted exotic trees and cared for them; they brought ebony wood from the Sudan, and pine and cedar from Syria. Hatshepsut's expedition to Punt returned bearing thirty-one live frankincense trees, the roots of which were carefully kept in baskets for the duration of the voyage; this was the first recorded attempt to transplant foreign trees. It is reported that Hatshepsut had these trees planted in the courts of her Deir el Bahri mortuary temple complex.
Marco Polo describes how Kublai Khan collected specimens of evergreen trees that he admired from around the Mongol Empire in the late 13th century, and had them brought by elephant to his winter capital at Khanbaliq (modern Beijing), where they were planted on a large artificial mound, "a hundred paces in height and over a mile in circumference", known as the "Green Mound", with a palace or pavilion at the top. The ground of the mound was also covered in pieces of green stone.
In an arboretum a wide variety of trees and shrubs are cultivated. Typically the individual trees are labelled for identification. The trees may also be organised in a way to aid their study or growth.
Many tree collections have been claimed as the first modern arboretum, with the term applied retrospectively as it probably did not come into use even orally until the later eighteenth century, or later. Probably the most important early proponent of the arboretum in the English-speaking transatlantic world was the prolific landscape gardener and writer, John Claudius Loudon (1783–1843) who undertook many gardening commissions and published the Gardener's Magazine, Encyclopaedia of Gardening and other major works. Loudon's Arboretum et Fruticetum Britannicum, 8 vols., (1838) is probably the most significant work on the subject in British history and included an account of all trees and shrubs that were hardy in the British climate, an international history of arboriculture, an assessment of the cultural, economic and industrial value of trees and four volumes of plates.
Loudon urged that a national arboretum be created and called for arboreta and other systematic collections to be established in public parks, private gardens, country estates, and other places. He regarded the Derby Arboretum (1840) as the most important landscape-gardening commission of the latter part of his career because it demonstrated the benefits of a public arboretum (for more details see below). Commenting on the Loddiges family's famous Hackney Botanic Garden arboretum, begun in 1816, which was a commercial nursery that subsequently opened free to the public, for educational benefit, every Sunday, Loudon wrote: "The arboretum looks better this season than it has ever done since it was planted... The more lofty trees suffered from the late high winds, but not materially. We walked round the two outer spirals of this coil of trees and shrubs; viz. from Acer to Quercus. There is no garden scene about London so interesting". A plan of the Loddiges' arboretum was included in The Encyclopaedia of Gardening, 1834 edition. Leaves from Loddiges' arboretum and in some instances entire trees, were studiously drawn to illustrate Loudon's encyclopaedic book Arboretum et Fruticetum Britannicum which also incorporated drawings from other early botanic gardens and parklands throughout the United Kingdom.
One example of an early European tree collection is the Trsteno Arboretum, near Dubrovnik in Croatia. The date of its founding is unknown, but it was already in existence by 1492, when a 15 m span aqueduct to irrigate the arboretum was constructed; this aqueduct is still in use. The garden was created by the prominent local Gučetić/Gozze family. It suffered two major disasters in the 1990s, but its two unique and ancient Oriental planes remained standing.
Later examples
Asia – India
Udhagamandalam (Ooty) Arboretum, The Nilgiris, India
The arboretum at Ooty was established in 1992 with the aim of conserving native and indigenous trees, and is maintained by the Department of Horticulture with Hill Area Development Programme funds. It occupies a site near Ooty Lake, in a micro-watershed area that is a natural habitat for both indigenous and migratory birds; prior to the creation of the arboretum the site had been neglected, and the feeder line bringing water to the lake was contaminated with urban waste and agricultural chemicals. From 2005 to 2006 the Hill Area Development Programme provided funds of Rs 1,250,000 for the construction of permanent fencing, a footpath, and other infrastructure facilities.
Australia and New Zealand
Eastwoodhill Arboretum, Gisborne, New Zealand
Probably the largest collection of Northern Hemisphere trees in the Southern Hemisphere can be found at Eastwoodhill Arboretum, Ngatapa, Gisborne, New Zealand.
The arboretum is the realization of the dream of William Douglas Cook (1884–1967), who started planting trees on his farm shortly after the First World War. The arboretum is now the National Arboretum of New Zealand, and holds some 4,000 different trees, shrubs and climbers.
Taitua Arboretum, Hamilton, New Zealand
This arboretum was offered to Hamilton residents in 1997. Trees and shrubs had been planted there from 1973 by John and Bunny Mortimer to provide shelter and shade for local animals. A popular picnic spot, the twenty-two-hectare arboretum contains 1,500 species of trees, supports abundant birdlife, and is enjoyed by about 60,000 people every year.
RJ Hamer Arboretum, Victoria, Australia
Visitors to the RJ Hamer Arboretum, managed by Parks Victoria, can take a quiet, peaceful stroll along the many walking tracks and roads providing access to its 126 hectares of breathtaking scenery and tranquil beauty. The land is a small part of the original Dandenong and Woori Yallock State Forest, proclaimed over 110 years ago. The RJ Hamer Arboretum is the first known instance of a forest-style arboretum established entirely by planting: a basic planting design was completed in 1970, and planting was carried out for the next 15 years.
The Tasmanian Arboretum, Devonport, Tasmania
The Tasmanian Arboretum was established in 1984 on the Don River in Devonport, Tasmania, Australia. The main site is 58 ha. There are over 2,500 plants in the geographic and thematic collections along with riparian revegetation. Maintenance of the collections is done by volunteers.
The National Arboretum, Canberra, Australian Capital Territory
National Arboretum Canberra is being developed on a 250-hectare site in the Greenhills Forest area west of the Tuggeranong Parkway and Lake Burley Griffin, Canberra, Australia. It includes an existing stand of 5,000 Himalayan cedars and an 80-year-old cork oak plantation, both damaged by the 2001 and 2003 Canberra bushfires. It features threatened and symbolic trees from around Australia and the world, including the world's largest planting of the Wollemi pine. There will eventually be 100 forests and 100 gardens, with almost 80 forests planted already.
Lindsay Pryor National Arboretum, Canberra, Australian Capital Territory
Located at Yarramundi Reach on the shores of Lake Burley Griffin, the Lindsay Pryor National Arboretum is a 30-hectare site originally planted by Professor Pryor between 1954 and 1957 to improve the view from Government House.
Europe
Abney Park Arboretum, London, England
Shortly before the Derby Arboretum opened in 1840, another arboretum was opened for free public access at Abney Park Cemetery in Stoke Newington near London, modelled partly on Mount Auburn Cemetery near Boston and designed by the Loddiges nursery. It was laid out with 2,500 trees and shrubs, all labelled and arranged in an unusual alphabetical format from A for Acer (maple trees) to Z for Zanthoxylum (American toothache trees). Until Kew was enlarged and opened to the public, this remained the largest arboretum in Europe. It never achieved the recognition of the better-financed early nineteenth-century botanical gardens and arboreta, which could afford members' events, indoor facilities and curatorial staff for those who paid accordingly. However, unlike these, and even unlike the 'public' arboretum at Derby, the Abney Park arboretum always offered public access free of charge, though sometimes by pre-arrangement: a Viewing Order was needed so as not to interfere with funeral events.
Arboretum Norr, Umeå, Sweden
An arboretum containing mostly plants from Scandinavian countries.
Atatürk Arboretum, Istanbul, Turkey
Situated on the European side of Istanbul in the northern Sarıyer district, Atatürk Arboretum covers 296 ha (730 acres) adjacent to the Belgrad Forest. The arboretum also includes a rare plant nursery operated by Istanbul University Forestry Department.
Bank Hall Arboretum, Lancashire, England
A small arboretum at Bank Hall Gardens, Bretherton in Lancashire, contains a yew thought to be at least 550 years old, the oldest in Lancashire. George Anthony Legh Keck had the arboretum planted in the gardens which were abandoned from the 1970s until 1995 when Bank Hall Action Group cleared the grounds. It contains one of two known fallen Sequoia sempervirens in the UK, Wellingtonia, dawn redwood (Metasequoia glyptostroboides), Atlas cedar (Cedrus atlantica), western hemlock (Tsuga heterophylla), Chinese swamp cypress and yew. Recent additions by the Action Group include paperbark maple (Acer griseum) (2004), cedar of Lebanon (Cedrus libani) (2005), further yew and pine trees (2006–2009) and a Ginkgo biloba (2011) for the Royal Wedding of the Duke and Duchess of Cambridge. It also has many specimens of snowdrop, daffodil and bluebell.
Batsford Arboretum, Gloucestershire, England
Situated one and a quarter miles west of Moreton-in-Marsh, Gloucestershire, Batsford Arboretum is tucked away on a south facing escarpment of the famous Cotswold Hills.
Bedgebury National Pinetum, Kent, England
Bedgebury National Pinetum, near Goudhurst, Kent, is one of the world's most complete collections of conifers. The 300-acre pinetum contains over 12,000 trees and shrubs (including 1,800 different species) from across five continents, many of them rare and endangered.
Bluebell Arboretum, Derbyshire, England
Located in South Derbyshire near Ashby-de-la-Zouch, with planting begun in 1992, this 9-acre arboretum, recommended by the Royal Horticultural Society, contains a large variety of rare but hardy plants and trees, including a grove of giant redwoods and a substantial Liquidambar collection. The arboretum is extensively labelled, with educational notes and information for many of the plants.
Bodenham Arboretum, Worcestershire, England
Located at Wolverley, near Kidderminster, Bodenham Arboretum contains mature woodland, specimen trees and shrubs. Its collection of over 3,000 species of trees and shrubs includes a number of themed groups, such as acers, North American oaks and alders. There are many species of insects and of resident and migrating birds, while the aquatic and wet margins of the pools provide a breeding ground for many waterfowl and frogs.
Derby Arboretum, Derbyshire, England
The Derby Arboretum opened on 16 September 1840. Commissioned and presented by Joseph Strutt (1766–1844), a wealthy industrialist and major local benefactor, it was designed by John Claudius Loudon and had a major impact upon the development of urban parks. It was one of the first Victorian public parks and was also unusual for the quality of its collection of trees and shrubs. Although established on a site of only 14 acres, the park featured a labelled collection of over 1,000 trees and shrubs and was landscaped with mounds, sinuous paths, urns, benches, statues, lodges and other features. Managed by a committee until it was acquired by the Derby Corporation during the 1880s, the Derby Arboretum was open free to the public for only two days of the week during its first four decades, the remaining days being reserved for subscribers and their families and guests. Very popular anniversary festivals were staged annually, drawing crowds of tens of thousands and helping to fund the upkeep of the park. The Derby Arboretum is also significant because it was the planted counterpart to Loudon's Arboretum et Fruticetum Britannicum (1838), which detailed all the hardy and semi-hardy trees and shrubs of the British Isles. Within the park, the trees and shrubs were laid out according to the natural system and labelled so that visitors could identify them using the guide.
The Derby park had a major impact on park design elsewhere including Europe, the British colonies and North America and other public parks and arboreta were established modelled on Loudon's creation and using his ideas. In 1859 for example, it was visited by Frederick Law Olmsted on his European tour of parks, and it had an influence on the planting in Central Park, New York. Industrial pollution killed most of the original plantings by the 1880s (although a few examples remain), but it has been renovated and replanted with National Lottery Heritage funding closer to Loudon's original layout and with a new cafe and visitor centre.
Dropmore Park, Buckinghamshire, England
Dropmore Park, Buckinghamshire (Bucks) England, was created in the 1790s for future prime minister Lord Grenville. On his first day in occupation, he planted two cedar trees. At least another 2,500 trees were planted. By the time Grenville died in 1834, his pinetum contained the biggest collection of conifer species in Britain. Part of the post-millennium restoration is to use what survives as the basis for a collection of some 200 species.
Dømmesmoen Arboret, Grimstad, Aust-Agder, Norway
Dømmesmoen Arboret is an arboretum in Grimstad municipality, Aust-Agder county, Norway. In the Dømmesmoen forest, where the arboretum is planned in harmony with nature, 22 different ecosystems have been defined. The trees and plants have been planted along the tracks so that visitors can experience and learn about them in the various ecosystems, and information about each ecosystem is posted along the tracks in the forest and park area. Through the years, approximately 700 different species of trees and plants have been planted in the Dømmesmoen area.
The Dømmesmoen area, where the arboretum is situated, has a fascinating history. Excavations have found traces of settlements that can be dated to around the year 0, and there are 50–60 burial mounds from the pre-Viking era at Dømmesmoen, among the densest burial-mound areas found in Norway. The most famous attractions at Dømmesmoen among locals are a 400–500-year-old hollow oak and a wooden tower overlooking the town of Grimstad. Two kilometres east of Dømmesmoen, at Fjære, stands Fjære Church, a stone church built around the year 1150 with significant historical value dating back to the Viking era.
Golden Grove / Gelli Aur arboretum, Carmarthenshire, Wales
Golden Grove / Gelli Aur Arboretum is a collection of mature trees and shrubs that spreads over 10 acres of the Golden Grove / Gelli Aur Country Park.
Commissioned by John Campbell, 2nd Earl Cawdor, the majority of the planting took place in 1865. It is an unusually fine arboretum, celebrated in Victorian and Edwardian times as the finest in the UK. It is laid out in an arc, as though embracing the house, fanning out from an ancient oak at the top of the terraced lawn, with the natural slope enhancing the view from the house. Many of the trees, which thrive in the damp, temperate climate, are champions, and several are listed on the Monumental Trees website. The great western red cedar is particularly spectacular, and people come from all over the world to see it.
Herbaceous plants and bulbs were planted as part of the carpet, and American and Asiatic shrubs were planted to provide colour and fragrance. The Rhododendrons are an extremely fine single variety and present a spectacular display of colour in May and June. In the Summer the arboretum is bordered by white foxgloves, interspersed with shades of pink.
The arboretum is much loved by locals, though ironically the fame of its youth has been largely forgotten, and it goes unappreciated in its magnificent maturity.
Greifswald Botanic Garden and Arboretum, Greifswald, Germany
The Greifswald Botanic Garden and Arboretum (total area 9 hectares, German: Botanischer Garten und Arboretum der Universität Greifswald), was founded in 1763. It is one of the oldest botanical gardens in Germany, and one of the oldest scientific gardens in the world. It is associated with the University of Greifswald in Greifswald, Germany.
Jubilee Arboretum, Surrey, England
This is located at RHS Garden, Wisley, Surrey, England.
Kew Gardens, London, England
The Kew Gardens botanical gardens are set within an arboretum covering the majority of the site.
Kilmun Arboretum, Argyll and Bute, Scotland
Established in the 1930s, this Forestry Commission arboretum is at Kilmun, Argyll and Bute, Scotland.
Kórnik Arboretum, Poland
Established in the early 19th century around the historic Kórnik Castle by its owner, Count Tytus Działyński, and later enriched by his heirs, his son Jan Kanty Działyński and Władysław Zamoyski, it is the largest and oldest arboretum in Poland. It covers over 40 hectares and is famous for rich collections of rhododendrons, azaleas, conifers, lilacs, and other woody species from all over the world. The Institute of Dendrology in Kórnik is located within the arboretum.
Lincoln Arboretum, Lincoln, England
Affectionately referred to as "The Arb" or "The Arbo", Lincoln Arboretum is to the east of the city and retains its line of sight up the hill to the nearby Lincoln Cathedral. This was one of the original design features. It was laid out between 1870 and 1872 by Edward Milner and has been renovated since 2002.
Arborétum Mlyňany, Slovakia
Arborétum Mlyňany is located in the area of two neighboring villages Vieska nad Žitavou and Tesárske Mlyňany near Zlaté Moravce, Slovakia. It was established in 1892 by Hungarian Count István Ambrózy-Migazzi. Today, it is governed by the Slovak Academy of Sciences. Within its area, the arboretum features more than 2,300 woody plant species, being one of the largest collections in Central Europe.
Nottingham Arboretum, Nottinghamshire, England
The Nottingham Arboretum (1852) was designed by Samuel Curtis as the centrepiece of a major scheme enclosing the common lands around the town. It included various public walks, parks, cemeteries and other green spaces. The Nottingham Arboretum was modelled on Loudon's Derby Arboretum and also originally had a systematic labelled collection of trees and shrubs. Advantage was taken on the hilly site to produce an attractive landscaped park with a small lake, lodges, benches and other features and some of the nineteenth-century trees still survive.
Affectionately referred to as "The Arb", the Nottingham Arboretum also gives its name to the residential area of the City of Nottingham, England, in which it lies.
Arboretum de Pézanin, Dompierre-les-Ormes, France
Located in Dompierre-les-Ormes, in southern Burgundy near Mâcon, the Arboretum de Pézanin was established in 1903 by the French botanist Joseph-Marie-Philippe Lévêque de Vilmorin (1872–1917). Acquired by the state in 1935, it is now one of the richest collections in France, visited by thousands of tourists every year.
Průhonice Park, near Prague, Czech Republic
Průhonice Park in the Czech Republic is a National Heritage Site and, since 2010, has been included within the boundaries of the UNESCO World Heritage Site of Prague. The arboretum was founded in 1885 by Count Arnošt Emanuel Silva-Tarouca. Between 1885 and 1927, 2,360 taxa (species and cultivars) were planted in the park, of which 310 were evergreen and 2,050 deciduous. Today it contains over 1,200 taxa of broad-leaved trees, 300 of coniferous trees, and about 600 of perennial herbs.
Arboretum Wespelaar, Wespelaar, Belgium
Arboretum Wespelaar, in Wespelaar, Belgium, brings together trees and shrubs from the whole world. The arboretum focuses on: Acer, Magnolia, Rhododendron and Stewartia.
Westonbirt, England
The Westonbirt Arboretum, near Tetbury, Gloucestershire, England, was founded around 1828 as the private tree collection of Captain Robert Holford at the Holford estate. Holford planted in open fields and laid out rides before he rebuilt the house, and planting at Westonbirt was continued by his son, George Holford. Eventually the estate passed to the government in lieu of death duties and was opened to the public; the spelling "arbortorium" was changed to "arboretum" in the early 1950s. The arboretum comprises some 18,000 trees and shrubs and has marked paths which provide access to a wide variety of rare plants.
St Roche's Arboretum, West Dean, England
The St Roche's Arboretum at West Dean College is a circuit walk long that encompasses a collection of specimen trees and shrubs. Edward James made a significant contribution to its planting, specialising in exotic, pendulous, contorted and twisted trees. It is also his final resting place – he is buried beneath a massive slab of Cumbrian slate inscribed by local artist John Skelton with the simple words "Edward James, Poet 1907 – 1984".
Arboretum Sequoiafarm Kaldenkirchen, Nettetal, Germany
The Sequoiafarm Kaldenkirchen is a German arboretum that served for many years as a biological institute. Situated close to the Dutch border in North Rhine-Westphalia, it has 500 varieties of trees and an interesting ground flora. Its founders, Illa and Ernst J. Martin, wanted to find out whether the giant sequoia, which had existed in Germany before the ice age, could be introduced to German forestry.
Sochi Arboretum
Sochi Arboretum is a monument of landscape architecture located in the Khosta district of the city of Sochi, Krasnodar Krai, in Russia. It includes 76 species of pine, 80 species of oak, and 24 species of palm.
Sofiyivsky Park, Ukraine
Sofiyivsky Park is an arboretum and a scientific research institute of the National Academy of Sciences of Ukraine. The park is located in the northern part of the city of Uman, Cherkasy Oblast (central Ukraine), near the river Kamianka. Some areas of the park are reminiscent of an English garden. Today the park is a popular recreational spot, visited by 500,000 people annually.
Sofiyivka is a scenic landmark of world gardening design of the beginning of the 19th century. The park contains over 2,000 types of trees and brush (local and exotic), among them taxodium (swamp cypress), Weymouth pine, tulip tree, platanus, ginkgo, and many others.
Trompenburg Botanical Garden and Arboretum, The Netherlands
Arboretum Trompenburg is an arboretum with a botanical garden in Rotterdam, The Netherlands, situated next to Erasmus University close to the centre of the city. The garden serves the scientific and botanical community, as well as serving as a recreational park where visitors can walk around, picnic on benches along the paths, or visit the 'theehuis', where high teas are served. It has extensive collections of Rhododendron, Quercus, Ilex, cacti and succulents, Hosta, Fagus and Nothofagus.
Trsteno Arboretum
Trsteno Arboretum is located in Trsteno, Croatia. It was established in the late 15th century by the local noble Gozze family, who asked ship captains to bring back seeds and plants from their travels. The exact start date for the arboretum is unknown, but it was already in existence by 1492, when a 15 m span aqueduct to irrigate the arboretum was constructed; this aqueduct is still in use.
Volcji Potok Arboretum, in Slovenia
Volčji Potok Arboretum is an 88-hectare arboretum, botanic garden and landscape park near Kamnik, Slovenia.
North America
University of Guelph Arboretum, Ontario, Canada
Arnold Arboretum, US
Harvard University's Arnold Arboretum, in the Jamaica Plain section of Boston, Massachusetts, was established in 1872 and was guided for many years by Charles Sprague Sargent, who was appointed the Arboretum's first director in 1873 and spent the following 54 years shaping its policies. By an arrangement with the city of Boston, the Arnold Arboretum became part of the "Emerald Necklace", the network of parks and parkways that Frederick Law Olmsted laid out for the Boston Parks Department between 1878 and 1892.
The Dominion Arboretum Ottawa, Canada
Ottawa's Dominion Arboretum is located at the Central Experimental Farm of Agriculture and Agri-Food Canada. Originally begun in 1889, the arboretum covers about 26 ha of rolling land between Prince of Wales Drive, Dow's Lake and the Rideau Canal. At a latitude of 45°, it can experience extremely hot and humid summers and extremely cold winters. It displays a wide range of well-established trees and shrubs with the intention of evaluating their hardiness, including 1,700 different species and varieties.
Core Arboretum, US
The Core Arboretum is a 91-acre (37 ha) arboretum owned by West Virginia University and located on Monongahela Boulevard in Morgantown, West Virginia, United States. It is open to the public daily without charge.
The arboretum's history began in 1948 when the university acquired its site. Professor Earl Lemley Core (1902–1984), chairman of the Biology Department, then convinced President Irvin Stewart to set the property aside for the study of biology and botany. In 1975 the arboretum was named in Core's honor.
The arboretum is managed by the WVU Department of Biology, and consists of mostly old-growth forest on steep hillside and Monongahela River flood plain. It includes densely wooded areas with of walking trails, as well as of lawn planted with specimen trees.
The arboretum has a variety of natural habitats in which several hundred species of native WV trees, shrubs, and herbaceous plants may be found. Some of the large trees are likely over 200 years old.
Edith J. Carrier Arboretum, US
The Edith J. Carrier Arboretum is located at James Madison University in Harrisonburg, Virginia. Groundbreaking took place in April 1985 under the direction of Norlyn Bodkin, who is credited with the first scientific botanical discovery along Virginia's Eastern Seaboard since the 1940s: the Shenandoah wake robin, a Trillium presently found at the arboretum. The only arboretum located on the campus of a Virginia state university, it features exhibits including an acidic sphagnum bog supporting northern species and insectivorous plants, the only shale barren with endemic species in an arboretum, rare endangered large-flowered azaleas, mature oak-hickory forest including two identified century specimens, and a species on the U.S. Fish and Wildlife Threatened Species list protected and propagated at the arboretum: Betula uber, the round-leaf birch.
Holden Arboretum, US
The Holden Arboretum, in Kirtland, Ohio, United States, is one of the largest arboreta and botanical gardens in the United States, with over , of which are devoted to collections and gardens. The arboretum is named for Albert Fairchild Holden, a mining engineer and executive, who had considered making Harvard University's Arnold Arboretum his beneficiary. However, his sister, Roberta Holden Bole, convinced him that Cleveland deserved its own arboretum. Thus Holden established an arboretum in memory of his deceased daughter, Elizabeth Davis.
Houston Arboretum and Nature Center, Houston, Texas, US
The Houston Arboretum and Nature Center is a 155-acre non-profit urban nature sanctuary that provides education about the natural environment to people of all ages. It plays a vital role in protecting native plants and animals in the heart of the city, where development threatens their survival. The Houston Arboretum is a private nonprofit educational facility that operates on city land.
Hoyt Arboretum, US
Located in Portland, Oregon, United States, the Hoyt Arboretum has over and close to 8,300 different species of plants.
Overland Park Arboretum & Botanical Gardens
Located at the southern edge of the Kansas City metropolitan area (in southern Overland Park, Kansas), this arboretum was founded in 1997 and has a large prairie and forest. It has also added a new garden almost every year since it opened.
Humber Arboretum, Canada
Located in Toronto, Ontario, the Humber Arboretum includes 250 acres of ornamental gardens and diverse natural areas, including native Carolinian forests. First opened in 1977, the Humber Arboretum is a joint venture of the City of Toronto, Humber College, and Toronto and Region Conservation Authority. Its purpose is to establish and maintain quality plant collections, promote conservation and restoration practices, facilitate research and education, and provide a quality visitor experience.
The Centre for Urban Ecology, located in the Humber Arboretum, provides educational programming and children's camps centered on urban ecology. It also serves as a venue for sustainability-focused meetings, conferences, weddings, and events.
The Arboretum at Flagstaff, US
The Arboretum at Flagstaff, at above sea level, is an arboretum that is home to, and focuses on, 2,500 species of mostly drought-adapted and native plants representative of the high-desert Colorado Plateau.
Los Angeles County Arboretum and Botanic Garden
Originally called Los Angeles State & County Arboretum, it is located in Arcadia, California.
Louisiana State Arboretum, Louisiana, US
The 600-acre (240 ha) Louisiana State Arboretum is located on Louisiana Highway 3042, approximately 13 km (eight miles) north of Ville Platte, Louisiana, inside of Chicot State Park, USA, and bordering a branch of Lake Chicot. Established in 1961, it is the oldest state-supported arboretum in the United States. The arboretum contains over 150 species of plant life native to Louisiana, on a varied topography suitable for nearly all Louisiana vegetation except those of the prairies and coastal marshes. The arboretum is a mature Beech-Magnolia forest containing centuries-old giant beech, magnolia, oak, and ash trees, as well as ferns, hickories, maples, sycamores, and crane fly orchids. Wildlife includes white-tail deer, fox, opossum, raccoon, skunk, squirrel, wild turkey, and numerous other bird species.
Morton Arboretum, US
Located in Lisle, Illinois, the Morton Arboretum was founded in 1922 by Joy Morton, founder of the Morton Salt Company and son of Arbor Day originator Julius Sterling Morton. At , the Arboretum is one of the largest in the world, and features several mature deciduous and coniferous forests, as well as collections of plant life from around the globe, in addition to ten lakes, several wetlands, and a restored prairie.
Peru State College Arboretum, Nebraska, US
Peru State College's "Campus of a Thousand Oaks," an arboretum campus, is in southeast Nebraska.
United States National Arboretum, US
In 1927, the United States National Arboretum was established in Washington, D.C., on of land; currently it receives over half a million annual visitors. Single-genus groupings include apples, azaleas, boxwoods, dogwoods, hollies, magnolias and maples. Other major garden features include collections of herbaceous and aquatic plants, the National Bonsai and Penjing Museum, the Asian Collections, the Conifer Collections, native plant collections, the National Herb Garden and the National Grove of State Trees. A unique feature of the U.S. National Arboretum is the National Capitol Columns, 23 Corinthian columns that were used in the United States Capitol from 1828 until 1958.
University of Wisconsin Arboretum, US
The University of Wisconsin–Madison Arboretum in Madison, Wisconsin is a study collection devoted to ecology rather than systematics. Founded in the 1930s, it was a Civilian Conservation Corps project which restored a body of land to its presettlement state. Portions of the Walt Disney nature documentary, "The Vanishing Prairie", were filmed there, notably the prairie fire, filmed during a controlled burn at the Arboretum.
Utah State University, Logan Campus, US
With over 30 of the largest species of trees in Utah and over 7,000 specimens, Utah State University's Logan campus (main campus) is recognized as an international arboretum through the ArbNet arboretum accreditation program.
Washington Park Arboretum, Washington, US
The Washington Park Arboretum at the University of Washington in Seattle, Washington, was established in 1934 as a public space that was agreed upon by the University of Washington and the City of Seattle. Seattle at the time had in its possession an over-1,200-acre (500+ ha) park known as Washington Park located in the central portion of the city, and the university was given authority to design, construct, plant, and manage an arboretum and botanical garden in this park. It has been a popular destination of Seattleites ever since. In 2005, the Washington Park Arboretum, as well as the University of Washington's Center for Urban Horticulture, Elisabeth C. Miller Library, Otis Hyde Herbarium and Union Bay Natural Area, began operating under the umbrella of the University of Washington Botanic Gardens.
Boyce Thompson Arboretum, Superior, Arizona
Located in the historic copper mining town of Superior, Arizona, 55 miles east of Phoenix.
Viveros de Coyoacán, Mexico City, Mexico
Viveros de Coyoacán is a 38.9-hectare arboretum and park in the Coyoacán borough of Mexico City that was built in 1901 and opened to the public in the 1930s.
Scott Arboretum of Swarthmore College, Pennsylvania
The Scott Arboretum of Swarthmore College was established as the Arthur Hoyt Scott Horticultural Foundation in 1929, and has since grown to include the James R. Frorer Holly Collection, which contains over 350 types of holly, the Dean Bond Rose Garden, which contains over 200 types of roses, an extensive pinetum, and the woodland and walking trails of Crum Woods. The arboretum's tree peony collection "has historic depth in tree peonies from Japan and China as well as classic selections from European and American tree peony breeders. In 1940 the Scott Arboretum listed 280 cultivars." Partnering with the University of Michigan's Nichols Arboretum, the Scott Arboretum established "a multi-institution collaborative to conserve the range of peony species and cultivars that can grow in Canada and the United States."
Morris Arboretum of University of Pennsylvania, Pennsylvania
The Morris Arboretum of the University of Pennsylvania is the official arboretum of the Commonwealth of Pennsylvania. It is located in the Chestnut Hill neighborhood of Philadelphia, Pennsylvania. Built in 1889, the arboretum was opened to the public in 1933. The arboretum contains more than 13,000 labelled plants of over 2,500 types and covers 175 acres.
Cowling Arboretum, US
The Cowling Arboretum (also referred to as the Arb) consists of approximately of land adjacent to Carleton College. The Carleton Arboretum is located on a natural border between prairie and forest habitat. The arboretum serves as a Minnesota state game refuge. It was originally created in the 1920s. The arboretum is divided by Minnesota State Highway 19 into the Upper Arboretum (south of the highway) and the Lower Arboretum (north of the highway; "lower" because it contains the low-lying floodplain of the Cannon River).
| Technology | Buildings and infrastructure | null |
338574 | https://en.wikipedia.org/wiki/Dianthus | Dianthus | Dianthus ( ) is a genus of about 340 species of flowering plants in the family Caryophyllaceae, native mainly to Europe and Asia, with a few species in north Africa and in southern Africa, and one species (D. repens) in arctic North America. Common names include carnation (D. caryophyllus), pink (D. plumarius and related species) and sweet william (D. barbatus).
Description
The species are mostly herbaceous perennials, a few are annual or biennial, and some are low subshrubs with woody basal stems. The leaves are opposite, simple, mostly linear and often strongly glaucous grey green to blue green. The flowers have five petals, typically with a frilled or pinked margin, and are (in almost all species) pale to dark pink. One species, D. knappii, has yellow flowers with a purple centre. Some species, particularly the perennial pinks, are noted for their strong spicy fragrance.
Taxonomy
Species
Selected species include:
Hybrids include:
'Devon Xera' – Fire Star Dianthus
'John Prichard'
Etymology
The name Dianthus is from the Greek διόσανθος, a compound from the words Δῖος Dios ("of Zeus") and ἄνθος anthos ("flower"), and was cited by the Greek botanist Theophrastus. The colour pink may be named after the flower, coming from the frilled edge of the flowers: the verb "to pink" dates from the 14th century and means "to decorate with a perforated or punched pattern", as is also demonstrated by the name of "pinking shears", special scissors for cloth that create a zigzag or decorative edge that discourages fraying. Alternatively, "pink" may be derived from the Dutch "pinksteren", alluding to the season of flowering; "pinksteren" means "Pentecost" in Dutch. Thus the colour may be named after the flower, rather than the flower after the colour.
Ecology
Dianthus species are used as food plants by the larvae of some Lepidoptera species, including the cabbage moth, double-striped pug, large yellow underwing and the lychnis. Three species of Coleophora case-bearers also feed exclusively on Dianthus: C. dianthi, C. dianthivora and C. musculella (which feeds exclusively on D. superbus).
Cultivation
Since 1717, dianthus species have been extensively bred and hybridised to produce many thousands of cultivars for garden use and floristry, in all shades of white, pink, yellow and red, with a huge variety of flower shapes and markings. They are often divided into the following main groups:
Border carnations – fully hardy, growing to , large blooms
Perpetual flowering carnations – grown under glass, flowering throughout the year, often used for exhibition purposes, growing to
Malmaison carnations – derived from the variety 'Souvenir de la Malmaison', growing to , grown for their intense "clove" fragrance
Old-fashioned pinks – older varieties; evergreen perennials forming mounds of blue-green foliage with masses of flowers in summer, growing to
Modern pinks – newer varieties, growing to , often blooming two or three times per year
Alpine pinks – mat-forming perennials, suitable for the rockery or alpine garden, growing to
Over 100 varieties have gained the Royal Horticultural Society's Award of Garden Merit.
Culture
In the language of flowers, pink Dianthus symbolize boldness.
Dianthus gratianopolitanus – the Cheddar pink – was chosen as the county flower of Somerset in 2002 following a poll by the wild flora conservation charity Plantlife. Dianthus japonicus is the official flower of Hiratsuka, Kanagawa, Japan.
In Japan, Dianthus superbus – the fringed pink or nadeshiko – is used in the term Yamato nadeshiko to describe the archetype of a traditional ideal woman.
Gallery
| Biology and health sciences | Caryophyllales | Plants |
338705 | https://en.wikipedia.org/wiki/Scientific%20community | Scientific community | The scientific community is a diverse network of interacting scientists. It includes many "sub-communities" working on particular scientific fields, and within particular institutions; interdisciplinary and cross-institutional activities are also significant. Objectivity is expected to be achieved by the scientific method. Peer review, through discussion and debate within journals and conferences, assists in this objectivity by maintaining the quality of research methodology and interpretation of results.
History of scientific communities
In the eighteenth century there were some societies made up of men who studied nature, also known as natural philosophers and natural historians, and these included even amateurs. As such, these societies were more like local clubs and groups with diverse interests than actual scientific communities, which usually have interests in specialized disciplines. Though there were a few older societies of men who studied nature, such as the Royal Society of London, the concept of scientific communities emerged in the second half of the 19th century, not before, because it was in this century that the language of modern science emerged, the professionalization of science occurred, specialized institutions were created, and the specialization of scientific disciplines and fields occurred.
For instance, the term scientist was first coined by the naturalist-theologian William Whewell in 1834 and the wider acceptance of the term along with the growth of specialized societies allowed for researchers to see themselves as a part of a wider imagined community, similar to the concept of nationhood.
Membership, status and interactions
Membership in the community is generally, but not exclusively, a function of education, employment status, research activity and institutional affiliation. Status within the community is highly correlated with publication record, and also depends on the status within the institution and the status of the institution. Researchers can hold roles of different degrees of influence inside the scientific community. Researchers of a stronger influence can act as mentors for early career researchers and steer the direction of research in the community like agenda setters.
Scientists are usually trained in academia through universities. As such, degrees in the relevant scientific sub-disciplines are often considered prerequisites in the relevant community. In particular, the PhD with its research requirements functions as a marker of being an important integrator into the community, though continued membership is dependent on maintaining connections to other researchers through publication, technical contributions, and conferences. After obtaining a PhD an academic scientist may continue through being on an academic position, receiving a post-doctoral fellowships and onto professorships. Other scientists make contributions to the scientific community in alternate ways such as in industry, education, think tanks, or the government.
Members of the same community do not need to work together. Communication between the members is established by disseminating research work and hypotheses through articles in peer reviewed journals, or by attending conferences where new research is presented and ideas exchanged and discussed. There are also many informal methods of communication of scientific work and results as well. And many in a coherent community may actually not communicate all of their work with one another, for various professional reasons.
Speaking for the scientific community
Unlike in previous centuries, when the community of scholars were all members of a few learned societies and similar institutions, there are no singular bodies or individuals which can be said today to speak for all science or all scientists. This is partly due to the specialized training most scientists receive in very few fields. As a result, many would lack expertise in all the other fields of the sciences. For instance, due to the increasing complexity of information and specialization of scientists, most of the cutting-edge research today is done by well-funded groups of scientists, rather than individuals. However, there are still multiple societies and academies in many countries which help consolidate some opinions and research to help guide public discussions on matters of policy and government-funded research. For example, the United States' National Academy of Sciences (NAS) and the United Kingdom's Royal Society sometimes act as surrogates when the opinions of the scientific community need to be ascertained by policy makers or the national government, but the statements of the National Academy of Sciences or the Royal Society are not binding on scientists, nor do they necessarily reflect the opinions of every scientist in a given community, since membership is often exclusive, their commissions are explicitly focused on serving their governments, and they have never "shown systematic interest in what rank-and-file scientists think about scientific matters". Exclusivity of membership in these types of organizations can be seen in their election processes, in which only existing members can officially nominate others for candidacy of membership. It is very unusual for organizations like the National Academy of Sciences to engage in external research projects, since they normally focus on preparing scientific reports for government agencies. How rarely the NAS engages in external and active research can be seen in its struggle to prepare for and overcome the hurdles of coordinating research grants and major research programs on the environment and health, owing to its lack of experience with them.
Nevertheless, general scientific consensus is a concept which is often referred to when dealing with questions that can be subject to scientific methodology. While the consensus opinion of the community is not always easy to ascertain or fix due to paradigm shifting, generally the standards and utility of the scientific method have tended to ensure, to some degree, that scientists agree on some general corpus of facts explicated by scientific theory while rejecting some ideas which run counter to this realization. The concept of scientific consensus is very important to science pedagogy, the evaluation of new ideas, and research funding. Sometimes it is argued that there is a closed-shop bias within the scientific community toward new ideas. Protoscience, fringe science, and pseudoscience have been topics that discuss demarcation problems. In response to some non-consensus claims, skeptical organizations (not research institutions) have devoted considerable amounts of time and money to contesting ideas which run counter to general agreement on a particular topic.
Philosophers of science argue over the epistemological limits of such a consensus and some, including Thomas Kuhn, have pointed to the existence of scientific revolutions in the history of science as being an important indication that scientific consensus can, at times, be wrong. Nevertheless, the sheer explanatory power of science in its ability to make accurate and precise predictions and aid in the design and engineering of new technology has ensconced "science" and, by proxy, the opinions of the scientific community as a highly respected form of knowledge both in the academy and in popular culture.
Political controversies
The high regard with which scientific results are held in Western society has caused a number of political controversies over scientific subjects to arise. An alleged conflict thesis proposed in the 19th century between religion and science has been cited by some as representative of a struggle between tradition and substantial change, and between faith and reason. A popular example used to support this thesis is the trial of Galileo before the Inquisition concerning the heliocentric model. The persecution began after Pope Urban VIII permitted Galileo to write about the Copernican model. Galileo had used arguments from the Pope and put them in the voice of the simpleton in the work "Dialogue Concerning the Two Chief World Systems", which caused the Pope great offense. Even though many historians of science have discredited the conflict thesis, it still remains a popular belief among many, including some scientists. In more recent times, the creation–evolution controversy has resulted in many religious believers in a supernatural creation challenging some naturalistic assumptions that have been proposed in some of the branches of scientific fields such as evolutionary biology, geology, and astronomy. Although the dichotomy seems to be of a different outlook from a Continental European perspective, it does exist. The Vienna Circle, for instance, had a paramount (i.e. symbolic) influence on the semiotic regime represented by the scientific community in Europe.
In the decades following World War II, some were convinced that nuclear power would solve the pending energy crisis by providing energy at low cost. This advocacy led to the construction of many nuclear power plants, but was also accompanied by a global political movement opposed to nuclear power due to safety concerns and associations of the technology with nuclear weapons. Mass protests in the United States and Europe during the 1970s and 1980s along with the disasters of Chernobyl and Three Mile Island led to a decline in nuclear power plant construction.
In the last decades or so, both global warming and stem cells have placed the opinions of the scientific community in the forefront of political debate.
| Physical sciences | Science basics | Basics and measurement |
338946 | https://en.wikipedia.org/wiki/Space%20complexity | Space complexity | The space complexity of an algorithm or a data structure is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
Similar to time complexity, space complexity is often expressed asymptotically in big O notation, such as O(n), O(n log n), O(n^α), O(2^n), etc., where n is a characteristic of the input influencing space complexity.
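To make the distinction between input space and auxiliary space concrete, here is a minimal Python sketch (an illustration added under stated assumptions, not text from the article; the function names are invented). Both functions sum n numbers, but the first keeps only a single accumulator, O(1) auxiliary space, while the second materializes a list of all n partial sums, O(n) auxiliary space:

def total_constant_space(values):
    # O(1) auxiliary space: one accumulator, regardless of input length.
    acc = 0
    for v in values:
        acc += v
    return acc

def total_linear_space(values):
    # O(n) auxiliary space: stores every partial sum before answering.
    prefix = []
    acc = 0
    for v in values:
        acc += v
        prefix.append(acc)
    return prefix[-1] if prefix else 0

print(total_constant_space([1, 2, 3]))  # 6
print(total_linear_space([1, 2, 3]))    # 6

Both compute the same result; only the working memory differs.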
Space complexity classes
Analogously to time complexity classes DTIME(f(n)) and NTIME(f(n)), the complexity classes DSPACE(f(n)) and NSPACE(f(n)) are the sets of languages that are decidable by deterministic (respectively, non-deterministic) Turing machines that use O(f(n)) space. The complexity classes PSPACE and NPSPACE allow f to be any polynomial, analogously to P and NP. That is,

PSPACE = ⋃ₖ DSPACE(nᵏ)

and

NPSPACE = ⋃ₖ NSPACE(nᵏ)

where the unions range over all k ∈ ℕ.
Relationships between classes
The space hierarchy theorem states that, for all space-constructible functions f(n), there exists a problem that can be solved by a machine with f(n) memory space, but cannot be solved by a machine with asymptotically less than f(n) space.
The following containments between complexity classes hold:

DTIME(f(n)) ⊆ DSPACE(f(n)) ⊆ NSPACE(f(n)) ⊆ DTIME(2^O(f(n)))

Furthermore, Savitch's theorem gives the reverse containment that if f ∈ Ω(log n),

NSPACE(f(n)) ⊆ DSPACE((f(n))²).

As a direct corollary, PSPACE = NPSPACE. This result is surprising because it suggests that non-determinism can reduce the space necessary to solve a problem only by a small amount. In contrast, the exponential time hypothesis conjectures that for time complexity, there can be an exponential gap between deterministic and non-deterministic complexity.
The Immerman–Szelepcsényi theorem states that, again for f ∈ Ω(log n), NSPACE(f(n)) is closed under complementation. This shows another qualitative difference between time and space complexity classes, as nondeterministic time complexity classes are not believed to be closed under complementation; for instance, it is conjectured that NP ≠ co-NP.
LOGSPACE
L or LOGSPACE is the set of problems that can be solved by a deterministic Turing machine using only O(log n) memory space with regard to input size. Even a single counter that can index the entire n-bit input requires log n space, so LOGSPACE algorithms can maintain only a constant number of counters or other variables of similar bit complexity.
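As a sketch of that counter argument (illustrative Python, not from the article; Python integers are arbitrary-precision, so this only models the bookkeeping a logarithmic-space machine would do): deciding whether a binary string contains more 1s than 0s needs just two counters, each bounded by n and therefore representable in O(log n) bits.

def majority_ones(bits):
    # Two counters, each at most n = len(bits); storing them takes
    # O(log n) bits, the budget of a LOGSPACE algorithm.
    ones = 0
    zeros = 0
    for b in bits:
        if b == "1":
            ones += 1
        else:
            zeros += 1
    return ones > zeros

print(majority_ones("110100111"))  # True: six 1s versus three 0s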
LOGSPACE and other sub-linear space complexity classes are useful when processing large data that cannot fit into a computer's RAM. They are related to streaming algorithms, but only restrict how much memory can be used, while streaming algorithms have further constraints on how the input is fed into the algorithm.
This class also sees use in the field of pseudorandomness and derandomization, where researchers consider the open problem of whether L = RL.
The corresponding nondeterministic space complexity class is NL.
Auxiliary space complexity
The term refers to space other than that consumed by the input.
Auxiliary space complexity could be formally defined in terms of a Turing machine with a separate input tape which cannot be written to, only read, and a conventional working tape which can be written to.
The auxiliary space complexity is then defined (and analyzed) via the working tape.
For example, consider the depth-first search of a balanced binary tree with n nodes: its auxiliary space complexity is Θ(log n), the height of the tree, since that is the maximum depth of the recursion (the tree itself is input, not auxiliary space).
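A minimal Python sketch of that example (added for illustration; the Node class and names are assumptions, not from the article). The traversal's only auxiliary storage is the recursion stack, whose depth equals the tree height, about log₂ n for a balanced tree:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def dfs(node, visit):
    # Auxiliary space = maximum call-stack depth = tree height,
    # i.e. Theta(log n) for a balanced binary tree of n nodes.
    if node is None:
        return
    visit(node.value)
    dfs(node.left, visit)
    dfs(node.right, visit)

# A balanced 7-node tree: the recursion depth stays at the tree height,
# three levels of nodes for n = 7.
root = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6), Node(7)))
dfs(root, print)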
| Mathematics | Complexity theory | null |
339024 | https://en.wikipedia.org/wiki/Length%20contraction | Length contraction | Length contraction is the phenomenon that a moving object's length is measured to be shorter than its proper length, which is the length as measured in the object's own rest frame. It is also known as Lorentz contraction or Lorentz–FitzGerald contraction (after Hendrik Lorentz and George Francis FitzGerald) and is usually only noticeable at a substantial fraction of the speed of light. Length contraction is only in the direction in which the body is travelling. For standard objects, this effect is negligible at everyday speeds, and can be ignored for all regular purposes, only becoming significant as the object approaches the speed of light relative to the observer.
History
Length contraction was postulated by George FitzGerald (1889) and Hendrik Antoon Lorentz (1892) to explain the negative outcome of the Michelson–Morley experiment and to rescue the hypothesis of the stationary aether (Lorentz–FitzGerald contraction hypothesis).
Although both FitzGerald and Lorentz alluded to the fact that electrostatic fields in motion were deformed ("Heaviside-Ellipsoid" after Oliver Heaviside, who derived this deformation from electromagnetic theory in 1888), it was considered an ad hoc hypothesis, because at this time there was no sufficient reason to assume that intermolecular forces behave the same way as electromagnetic ones. In 1897 Joseph Larmor developed a model in which all forces are considered to be of electromagnetic origin, and length contraction appeared to be a direct consequence of this model. Yet it was shown by Henri Poincaré (1905) that electromagnetic forces alone cannot explain the electron's stability. So he had to introduce another ad hoc hypothesis: non-electric binding forces (Poincaré stresses) that ensure the electron's stability, give a dynamical explanation for length contraction, and thus hide the motion of the stationary aether.
Lorentz believed that length contraction represented a physical contraction of the atoms making up an object. He envisioned no fundamental change in the nature of space and time.
Lorentz expected that length contraction would result in compressive strains in an object that should result in measurable effects. Such effects would include optical effects in transparent media, such as optical rotation and induction of double refraction, and the induction of torques on charged condensers moving at an angle with respect to the aether.
Lorentz was perplexed by experiments such as the Trouton–Noble experiment and the experiments of Rayleigh and Brace, which failed to validate his theoretical expectations.
For mathematical consistency, Lorentz proposed a new time variable, the "local time", called that because it depended on the position of a moving body, following the relation t′ = t − vx/c². Lorentz considered local time not to be "real"; rather, it represented an ad hoc change of variable.
Impressed by Lorentz's "most ingenious idea", Poincaré saw more in local time than a mere mathematical trick. It represented the actual time that would be shown on a moving observer's clocks. On the other hand, Poincaré did not consider this measured time to be the "true time" that would be exhibited by clocks at rest in the aether. Poincaré made no attempt to redefine the concepts of space and time. To Poincaré, Lorentz transformation described the apparent states of the field for a moving observer. True states remained those defined with respect to the ether.
Albert Einstein (1905) is credited with removing the ad hoc character from the contraction hypothesis, by deriving this contraction from his postulates instead of experimental data. Hermann Minkowski gave the geometrical interpretation of all relativistic effects by introducing his concept of four-dimensional spacetime.
Basis in relativity
First it is necessary to carefully consider the methods for measuring the lengths of resting and moving objects. Here, "object" simply means a distance with endpoints that are always mutually at rest, i.e., that are at rest in the same inertial frame of reference. If the relative velocity between an observer (or his measuring instruments) and the observed object is zero, then the proper length of the object can simply be determined by directly superposing a measuring rod. However, if the relative velocity is greater than zero, then one can proceed as follows:
The observer installs a row of clocks that either are synchronized a) by exchanging light signals according to the Poincaré–Einstein synchronization, or b) by "slow clock transport", that is, one clock is transported along the row of clocks in the limit of vanishing transport velocity. Now, when the synchronization process is finished, the object is moved along the clock row and every clock stores the exact time when the left or the right end of the object passes by. After that, the observer only has to look at the position of a clock A that stored the time when the left end of the object was passing by, and a clock B at which the right end of the object was passing by at the same time. It's clear that distance AB is equal to length of the moving object. Using this method, the definition of simultaneity is crucial for measuring the length of moving objects.
Another method is to use a clock indicating its proper time T₀, which is traveling from one endpoint of the rod to the other in time T as measured by clocks in the rod's rest frame. The length of the rod can be computed by multiplying its travel time by its velocity, thus L₀ = T · v in the rod's rest frame or L = T₀ · v in the clock's rest frame.
In Newtonian mechanics, simultaneity and time duration are absolute and therefore both methods lead to the equality of L and L₀. Yet in relativity theory the constancy of light velocity in all inertial frames in connection with relativity of simultaneity and time dilation destroys this equality. In the first method an observer in one frame claims to have measured the object's endpoints simultaneously, but the observers in all other inertial frames will argue that the object's endpoints were not measured simultaneously. In the second method, times T and T₀ are not equal due to time dilation, resulting in different lengths too.
The deviation between the measurements in all inertial frames is given by the formulas for Lorentz transformation and time dilation (see Derivation). It turns out that the proper length remains unchanged and always denotes the greatest length of an object, and the length of the same object measured in another inertial reference frame is shorter than the proper length. This contraction only occurs along the line of motion, and can be represented by the relation

L = L₀/γ(v)

where
L is the length observed by an observer in motion relative to the object
L₀ is the proper length (the length of the object in its rest frame)
γ(v) is the Lorentz factor, defined as γ(v) = 1/√(1 − v²/c²), where
v is the relative velocity between the observer and the moving object
c is the speed of light
Replacing the Lorentz factor in the original formula leads to the relation

L = L₀ √(1 − v²/c²)

In this equation both L and L₀ are measured parallel to the object's line of movement. For the observer in relative movement, the length of the object is measured by subtracting the simultaneously measured distances of both ends of the object. For more general conversions, see the Lorentz transformations. An observer at rest observing an object travelling very close to the speed of light would observe the length of the object in the direction of motion as very near zero.
Then, at a speed of (30 million mph, 0.0447c) the contracted length is 99.9% of the length at rest; at a speed of (95 million mph, 0.141c), the length is still 99%. As the magnitude of the velocity approaches the speed of light, the effect becomes prominent.
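Those figures can be checked with a few lines of arithmetic. The Python sketch below (an added illustration, not part of the article) evaluates the contracted fraction L/L₀ = √(1 − β²) = 1/γ at the two quoted speeds:

import math

def contracted_fraction(beta):
    # L / L0 = sqrt(1 - beta**2) = 1/gamma, where beta = v/c.
    return math.sqrt(1.0 - beta * beta)

for beta in (0.0447, 0.141):
    print(f"v = {beta}c -> L/L0 = {contracted_fraction(beta):.3f}")
# v = 0.0447c -> L/L0 = 0.999  (99.9% of the rest length)
# v = 0.141c  -> L/L0 = 0.990  (99%)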
Symmetry
The principle of relativity (according to which the laws of nature are invariant across inertial reference frames) requires that length contraction is symmetrical: If a rod is at rest in an inertial frame S, it has its proper length in S and its length is contracted in S′. However, if a rod rests in S′, it has its proper length in S′ and its length is contracted in S. This can be vividly illustrated using symmetric Minkowski diagrams, because the Lorentz transformation geometrically corresponds to a rotation in four-dimensional spacetime.
Magnetic forces
Magnetic forces are caused by relativistic contraction when electrons are moving relative to atomic nuclei. The magnetic force on a moving charge next to a current-carrying wire is a result of relativistic motion between electrons and protons.
In 1820, André-Marie Ampère showed that parallel wires having currents in the same direction attract one another. In the electrons' frame of reference, the moving wire contracts slightly, causing the protons of the opposite wire to be locally denser. As the electrons in the opposite wire are moving as well, they do not contract (as much). This results in an apparent local imbalance between electrons and protons; the moving electrons in one wire are attracted to the extra protons in the other. The reverse can also be considered. To the static proton's frame of reference, the electrons are moving and contracted, resulting in the same imbalance. The electron drift velocity is relatively very slow, on the order of a meter an hour but the force between an electron and proton is so enormous that even at this very slow speed the relativistic contraction causes significant effects.
This effect also applies to magnetic particles without current, with current being replaced with electron spin.
Experimental verifications
Any observer co-moving with the observed object cannot measure the object's contraction, because he can judge himself and the object as at rest in the same inertial frame in accordance with the principle of relativity (as it was demonstrated by the Trouton–Rankine experiment). So length contraction cannot be measured in the object's rest frame, but only in a frame in which the observed object is in motion. In addition, even in such a non-co-moving frame, direct experimental confirmations of length contraction are hard to achieve, because (a) at the current state of technology, objects of considerable extension cannot be accelerated to relativistic speeds, and (b) the only objects traveling with the speed required are atomic particles, whose spatial extensions are too small to allow a direct measurement of contraction.
However, there are indirect confirmations of this effect in a non-co-moving frame:
It was the negative result of a famous experiment, that required the introduction of length contraction: the Michelson–Morley experiment (and later also the Kennedy–Thorndike experiment). In special relativity its explanation is as follows: In its rest frame the interferometer can be regarded as at rest in accordance with the relativity principle, so the propagation time of light is the same in all directions. Although in a frame in which the interferometer is in motion, the transverse beam must traverse a longer, diagonal path with respect to the non-moving frame thus making its travel time longer, the factor by which the longitudinal beam would be delayed by taking times L/(c−v) and L/(c+v) for the forward and reverse trips respectively is even longer. Therefore, in the longitudinal direction the interferometer is supposed to be contracted, in order to restore the equality of both travel times in accordance with the negative experimental result(s). Thus the two-way speed of light remains constant and the round trip propagation time along perpendicular arms of the interferometer is independent of its motion & orientation.
Given the thickness of the atmosphere as measured in Earth's reference frame, muons' extremely short lifespan shouldn't allow them to make the trip to the surface, even at the speed of light, but they do nonetheless. From the Earth reference frame, however, this is made possible only by the muon's time being slowed down by time dilation. However, in the muon's frame, the effect is explained by the atmosphere being contracted, shortening the trip.
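A back-of-the-envelope Python sketch of the muon argument (illustrative only; the numbers are typical textbook values, roughly a 2.2 μs mean lifetime, a speed of 0.995c and a 15 km production altitude, none of which are taken from this article):

import math

c = 3.0e8        # speed of light, m/s
v = 0.995 * c    # assumed muon speed
tau = 2.2e-6     # assumed muon mean lifetime in its rest frame, s
altitude = 15e3  # assumed production altitude, m

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # about 10

naive = v * tau            # decay length without relativity: ~660 m
dilated = gamma * v * tau  # decay length with time dilation: ~6.6 km

# Fraction of muons surviving the trip (exponential decay):
print(math.exp(-altitude / naive))    # ~1e-10: essentially none
print(math.exp(-altitude / dilated))  # ~0.10: a detectable flux

Equivalently, in the muon's own frame the atmosphere is contracted to altitude/γ, about 1.5 km, which yields the same survival fraction.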
Heavy ions that are spherical when at rest should assume the form of "pancakes" or flat disks when traveling nearly at the speed of light, and in fact the results obtained from particle collisions can only be explained when the increased nucleon density due to length contraction is considered.
The ionization ability of electrically charged particles with large relative velocities is higher than expected. In pre-relativistic physics the ability should decrease at high velocities, because the time in which ionizing particles in motion can interact with the electrons of other atoms or molecules is diminished; however, in relativity, the higher-than-expected ionization ability can be explained by length contraction of the Coulomb field in frames in which the ionizing particles are moving, which increases their electrical field strength normal to the line of motion.
In synchrotrons and free-electron lasers, relativistic electrons were injected into an undulator, so that synchrotron radiation is generated. In the proper frame of the electrons, the undulator is contracted which leads to an increased radiation frequency. Additionally, to find out the frequency as measured in the laboratory frame, one has to apply the relativistic Doppler effect. So, only with the aid of length contraction and the relativistic Doppler effect, the extremely small wavelength of undulator radiation can be explained.
Reality of length contraction
In 1911 Vladimir Varićak asserted that one sees the length contraction in an objective way, according to Lorentz, while it is "only an apparent, subjective phenomenon, caused by the manner of our clock-regulation and length-measurement", according to Einstein. Einstein published a rebuttal:
Einstein also argued in that paper, that length contraction is not simply the product of arbitrary definitions concerning the way clock regulations and length measurements are performed. He presented the following thought experiment: Let A'B' and A"B" be the endpoints of two rods of the same proper length L0, as measured on x' and x" respectively. Let them move in opposite directions along the x* axis, considered at rest, at the same speed with respect to it. Endpoints A'A" then meet at point A*, and B'B" meet at point B*. Einstein pointed out that length A*B* is shorter than A'B' or A"B", which can also be demonstrated by bringing one of the rods to rest with respect to that axis.
Paradoxes
Due to superficial application of the contraction formula, some paradoxes can occur. Examples are the ladder paradox and Bell's spaceship paradox. However, those paradoxes can be solved by a correct application of the relativity of simultaneity. Another famous paradox is the Ehrenfest paradox, which proves that the concept of rigid bodies is not compatible with relativity, reducing the applicability of Born rigidity, and showing that for a co-rotating observer the geometry is in fact non-Euclidean.
Visual effects
Length contraction refers to measurements of position made at simultaneous times according to a coordinate system. This could suggest that if one could take a picture of a fast moving object, that the image would show the object contracted in the direction of motion. However, such visual effects are completely different measurements, as such a photograph is taken from a distance, while length contraction can only directly be measured at the exact location of the object's endpoints. It was shown by several authors such as Roger Penrose and James Terrell that moving objects generally do not appear length contracted on a photograph. This result was popularized by Victor Weisskopf in a Physics Today article. For instance, for a small angular diameter, a moving sphere remains circular and is rotated. This kind of visual rotation effect is called Penrose-Terrell rotation.
Derivation
Length contraction can be derived in several ways:
Known moving length
In an inertial reference frame S, let x₁ and x₂ denote the endpoints of an object in motion. In this frame the object's length L is measured, according to the above conventions, by determining the simultaneous positions of its endpoints at t₁ = t₂. Meanwhile, the proper length of this object, as measured in its rest frame S′, can be calculated by using the Lorentz transformation. Transforming the time coordinates from S into S′ results in different times, but this is not problematic, since the object is at rest in S′ where it does not matter when the endpoints are measured. Therefore, the transformation of the spatial coordinates suffices, which gives:

x′₁ = γ(x₁ − vt₁) and x′₂ = γ(x₂ − vt₂)

Since t₁ = t₂, and by setting L = x₂ − x₁ and L′₀ = x′₂ − x′₁, the proper length in S′ is given by

L′₀ = γL

Therefore, the object's length, measured in the frame S, is contracted by a factor γ:

L = L′₀/γ

Likewise, according to the principle of relativity, an object that is at rest in S will also be contracted in S′. By exchanging the above signs and primes symmetrically, it follows that

L₀ = γL′

Thus an object at rest in S, when measured in S′, will have the contracted length

L′ = L₀/γ
Known proper length
Conversely, if the object rests in S and its proper length is known, the simultaneity of the measurements at the object's endpoints has to be considered in another frame S′, as the object constantly changes its position there. Therefore, both spatial and temporal coordinates must be transformed:

x′₁ = γ(x₁ − vt₁), t′₁ = γ(t₁ − vx₁/c²)
x′₂ = γ(x₂ − vt₂), t′₂ = γ(t₂ − vx₂/c²)

Computing the length interval Δx′ = x′₂ − x′₁ = γ(Δx − vΔt) (1) as well as assuming simultaneous time measurement Δt′ = t′₂ − t′₁ = γ(Δt − vΔx/c²) = 0 (2), and by plugging in proper length L₀ = Δx = x₂ − x₁, it follows:

Equation (2) gives

Δt = vL₀/c²

which, when plugged into (1), demonstrates that Δx′ becomes the contracted length L′:

L′ = Δx′ = L₀/γ.

Likewise, the same method gives a symmetric result for an object at rest in S′:

L = L′₀/γ.
Using time dilation
Length contraction can also be derived from time dilation, according to which the rate of a single "moving" clock (indicating its proper time T₀) is lower with respect to two synchronized "resting" clocks (indicating T). Time dilation was experimentally confirmed multiple times, and is represented by the relation:

T = T₀ · γ

Suppose a rod of proper length L₀ at rest in S and a clock at rest in S′ are moving along each other with speed v. Since, according to the principle of relativity, the magnitude of relative velocity is the same in either reference frame, the respective travel times of the clock between the rod's endpoints are given by T = L₀/v in S and T′₀ = L′/v in S′, thus L₀ = T · v and L′ = T′₀ · v. By inserting the time dilation formula, the ratio between those lengths is:

L′/L₀ = T′₀/T = 1/γ.

Therefore, the length measured in S′ is given by

L′ = L₀/γ

So since the clock's travel time across the rod is longer in S than in S′ (time dilation in S), the rod's length is also longer in S than in S′ (length contraction in S′). Likewise, if the clock were at rest in S and the rod in S′, the above procedure would give

L = L′₀/γ
Geometrical considerations
Additional geometrical considerations show that length contraction can be regarded as a trigonometric phenomenon, with analogy to parallel slices through a cuboid before and after a rotation in E3 (see left half figure at the right). This is the Euclidean analog of boosting a cuboid in E1,2. In the latter case, however, we can interpret the boosted cuboid as the world slab of a moving plate.
Image: Left: a rotated cuboid in three-dimensional euclidean space E3. The cross section is longer in the direction of the rotation than it was before the rotation. Right: the world slab of a moving thin plate in Minkowski spacetime (with one spatial dimension suppressed) E1,2, which is a boosted cuboid. The cross section is thinner in the direction of the boost than it was before the boost. In both cases, the transverse directions are unaffected and the three planes meeting at each corner of the cuboids are mutually orthogonal (in the sense of E1,2 at right, and in the sense of E3 at left).
In special relativity, Poincaré transformations are a class of affine transformations which can be characterized as the transformations between alternative Cartesian coordinate charts on Minkowski spacetime corresponding to alternative states of inertial motion (and different choices of an origin). Lorentz transformations are Poincaré transformations which are linear transformations (preserve the origin). Lorentz transformations play the same role in Minkowski geometry (the Lorentz group forms the isotropy group of the self-isometries of the spacetime) which are played by rotations in euclidean geometry. Indeed, special relativity largely comes down to studying a kind of noneuclidean trigonometry in Minkowski spacetime, as suggested by the following table:
| Physical sciences | Theory of relativity | Physics |
339130 | https://en.wikipedia.org/wiki/Wootz%20steel | Wootz steel | Wootz steel is a crucible steel characterized by a pattern of bands and high carbon content. These bands are formed by sheets of microscopic carbides within a tempered martensite or pearlite matrix in higher-carbon steel, or by ferrite and pearlite banding in lower-carbon steels. It was a pioneering steel alloy developed in southern India in the mid-1st millennium BC and exported globally.
History
Wootz steel originated in the mid-1st millennium BC in India, where it was made in Golconda in Telangana, in Karnataka, and in Sri Lanka. The steel was exported as cakes of steely iron that came to be known as "wootz". The method was to heat black magnetite ore in the presence of carbon in a sealed clay crucible inside a charcoal furnace to completely remove slag. An alternative was to smelt the ore first to give wrought iron, then heat and hammer it to remove slag. The carbon source was bamboo and leaves from plants such as Avārai. Locals in Sri Lanka adopted the production methods of creating wootz steel from the Cheras by the 5th century BC. In Sri Lanka, this early steel-making method employed a unique wind furnace, driven by the monsoon winds. Production sites from antiquity have emerged in places such as Anuradhapura, Tissamaharama and Samanalawewa, as well as imported artifacts of ancient iron and steel from Kodumanal. Recent archaeological excavations (2018) at the Yodhawewa site (in Mannar District) discovered the lower half of a spherical furnace, crucible fragments, and lid fragments related to crucible steel production through the carburization process. The southeast of Sri Lanka has yielded some of the oldest iron and steel artifacts and evidence of production processes on the island, dating from the classical period.
Trade between India and Sri Lanka through the Arabian Sea introduced wootz steel to Arabia. The term muhannad مهند or hendeyy هندي in pre-Islamic and early Islamic Arabic refers to sword blades made from Indian steel, which were highly prized, and are attested in Arabic poetry. Further trade spread the technology to the city of Damascus, where an industry developed for making weapons of this steel. This led to the development of Damascus steel. The 12th century Arab traveler Edrisi mentioned the "Hinduwani" or Indian steel as the best in the world. Arab accounts also point to the fame of 'Teling' steel, which can be taken to refer to the region of Telangana. The Golconda region of Telangana clearly being the nodal center for the export of wootz steel to West Asia.
Another sign of its reputation is seen in a Persian phrase, to give an "Indian answer", meaning "a cut with an Indian sword". Wootz steel was widely exported and traded throughout ancient Europe and the Arab world, and became particularly famous in the Middle East.
Development of modern metallurgy
From the 17th century onwards, several European travelers observed the steel manufacturing in South India, at Mysore, Malabar and Golconda. The word "wootz" appears to have originated as a mistranscription of Sanskrit terms; the Sanskrit root word for the alloy is utsa. Another theory says that the word is a variation of uchcha or ucha ("superior"). According to one theory, the word ukku is based on the meaning "melt, dissolve". Other Dravidian languages have similar-sounding words for steel: ukku in Kannada and Telugu, and urukku in Malayalam. When Benjamin Heyne inspected the Indian steel in the Ceded Districts and other Kannada-speaking areas, he was informed that the steel was ucha kabbina ("superior iron"), also known as ukku tundu in Mysore.
Legends of wootz steel and Damascus swords aroused the curiosity of the European scientific community from the 17th to the 19th century. The use of high-carbon alloys was little known in Europe previously and thus the research into wootz steel played an important role in the development of modern English, French and Russian metallurgy.
In 1790, samples of wootz steel were received by Sir Joseph Banks, president of the British Royal Society, sent by Helenus Scott. These samples were subjected to scientific examination and analysis by several experts.
Specimens of daggers and other weapons were sent by the Rajas of India to the Great Exhibition in London in 1851 and 1862 International Exhibition. Though the arms of the swords were beautifully decorated and jeweled, they were most highly prized for the quality of their steel. The swords of the Sikhs were said to bear bending and crumpling, and yet be fine and sharp.
Characteristics
Wootz is characterized by a pattern caused by bands of clustered particles made by melting of low levels of carbide-forming elements. Wootz contains greater carbonaceous matter than common qualities of cast steel.
The distinct patterns of wootz steel that can be made through forging are wave, ladder, and rose patterns with finely spaced bands. However, with hammering, dyeing, and etching further customized patterns were made.
The presence of cementite nanowires and carbon nanotubes has been identified by Peter Paufler of TU Dresden in the microstructure of wootz steel. There is a possibility of an abundance of ultrahard metallic carbides in the steel matrix precipitating out in bands. Wootz swords were renowned for their sharpness and toughness.
Composition
T. H. Henry analyzed and recorded the composition of wootz steel samples provided by the Royal School of Mines. Recording:
Carbon (Combined) 1.34%
Carbon (Uncombined) 0.31%
Sulfur 0.17%
Silicon 0.04%
Arsenic 0.03%
Wootz steel was analyzed by Michael Faraday and recorded to contain 0.01-0.07% aluminium. Faraday and Stodart hypothesized that aluminium was needed in the steel and was important in forming the excellent properties of wootz steel. However, T. H. Henry deduced that the presence of aluminium in the wootz used by these studies was due to slag, forming as silicates. Percy later reiterated that the quality of wootz steel does not depend on the presence of aluminium.
Reproduction research
Wootz steel has been reproduced and studied in depth by the Royal School of Mines. Dr. Pearson was the first to chemically examine wootz in 1795 and he published his contributions to the Philosophical Transactions of the Royal Society.
Russian metallurgist Pavel Petrovich Anosov (see Bulat steel) was almost able to reproduce ancient wootz steel with nearly all of its properties, and the steel he created was very similar to traditional wootz. He documented four different methods of producing wootz steel that exhibited traditional patterns. He died before he could fully document and publish his research. Oleg Sherby and Jeff Wadsworth, as well as Lawrence Livermore National Laboratory, have all done research attempting to create steels with characteristics similar to wootz, but without success. J. D. Verhoeven and Alfred Pendray reconstructed methods of production, proved the role of impurities of ore in the pattern creation, and reproduced wootz steel with patterns microscopically and visually identical to one of the ancient blade patterns. Reibold et al.'s analyses spoke of the presence of carbon nanotubes enclosing nanowires of cementite, with the trace elements/impurities of vanadium, molybdenum, chromium etc. contributing to their creation, in cycles of heating/cooling/forging. This resulted in a hard high-carbon steel that remained malleable.
There are smiths who are now consistently producing wootz steel blades visually identical to the old patterns. Steel manufactured in Kutch (in present-day India) particularly enjoyed a widespread reputation, similar to those manufactured at Glasgow and Sheffield.
Wootz was made over nearly a 2,000-year period (the oldest sword samples date to around 200 CE) and the methods of production of ingots, the ingredients, and the methods of forging varied from one area to the next. Some wootz blades displayed a pattern, while some did not. Heat treating was quite different from forging, and there were many different patterns that were created by the various smiths who spanned from China to Scandinavia.
With fellow experts, the Georgian-Dutch master armourer Gocha Laghidze developed a new method to reintroduce 'Georgian Damascus steel'. In 2010, he and his colleagues gave a masterclass on this at the Royal Academy of Fine Arts in Antwerp.
| Physical sciences | Iron alloys | Chemistry |
339399 | https://en.wikipedia.org/wiki/Weevil | Weevil | Weevils are beetles belonging to the superfamily Curculionoidea, known for their elongated snouts. They are usually small – less than in length – and herbivorous. Approximately 97,000 species of weevils are known. They belong to several families, with most of them in the family Curculionidae (the true weevils). It also includes bark beetles, which while morphologically dissimilar to other weevils in lacking the distinctive snout, is a subfamily of Curculionidae. Some other beetles, although not closely related, bear the name "weevil", such as the leaf beetle subfamily Bruchinae, known as "bean weevils", or the biscuit weevil (Stegobium paniceum), which belongs to the family Ptinidae.
Many weevils are considered pests because of their ability to damage and kill crops. The grain or wheat weevil (Sitophilus granarius) damages stored grain, as does the maize weevil (Sitophilus zeamais), among others. The boll weevil (Anthonomus grandis) attacks cotton crops; it lays its eggs inside cotton bolls and the larvae eat their way out. Other weevils are used for biological control of invasive plants.
A weevil's rostrum, or elongated snout, hosts chewing mouthparts instead of the piercing mouthparts that proboscis-possessing insects are known for. The mouthparts are often used to excavate tunnels into grains. In more derived weevils, the rostrum has a groove in which the weevil can fold the first segment of its antennae.
Most weevils have the ability to fly (including pest species such as the rice weevil), though a significant number are flightless, such as the genus Otiorhynchus, and others can jump.
One species of weevil, Austroplatypus incompertus, exhibits eusociality, one of the few insects outside the Hymenoptera and the Isoptera to do so.
Taxonomy and phylogeny
Because so many species exist in such diversity, the higher classification of weevils is in a state of flux. They are generally divided into two major divisions, the Orthoceri or primitive weevils, and the Gonatoceri or true weevils (Curculionidae). E. C. Zimmerman proposed a third division, the Heteromorphi, for several intermediate forms. Primitive weevils are distinguished by having straight antennae, while true weevils have elbowed (geniculate) antennae. The elbow occurs at the end of the scape (first antennal segment) in true weevils, and the scape is usually much longer than the other antennal segments. Some exceptions occur, such as Nanophyini, primitive weevils with long scapes and geniculate antennae, while among the true weevils, Gonipterinae and Ramphus have short scapes and little or no "elbow".
A 1995 classification system to family level was provided by Kuschel, with updates from Marvaldi et al. in 2002, and was achieved using phylogenetic analyses. The accepted families were the primitive weevils, Anthribidae, Attelabidae, Belidae, Brentidae, Caridae, and Nemonychidae, and the true weevils, Curculionidae. Most other weevil families were demoted to subfamilies or tribes. Further work resulted in the elevation of Cimberididae to family level from its placement as a subfamily of Nemonychidae in 2017, and the recognition in 2018 of the Cretaceous-age family Mesophyletidae from Burmese amber. The oldest weevils date to the Middle-Late Jurassic boundary, found in the Karabastau Formation of Kazakhstan, the Shar-Teg locality of Mongolia, the Daohugou locality in Inner Mongolia, China, and the Talbragar site in Australia. The extinct family Obrieniidae, with species dating from the Ladinian stage of the Triassic through to, tentatively, the Oxfordian, has sometimes been considered weevils. Genera of the family have only been found in three formations in Kazakhstan, with most named in 1993. However, their phylogenetic position is contested, with others considering the family part of Archostemata.
The interfamilial relationships of Curculionoidea have been generally well resolved. The phylogeny by Li et al. (2023) based on phylogenomic data is suggested below:
Families
Anthribidae—fungus weevils
Attelabidae—leaf rolling weevils
Belidae—primitive weevils
Brentidae—straight snout weevils
Caridae
Cimberididae
Curculionidae—true weevils
Mesophyletidae
Nemonychidae—pine flower weevils
?Obrieniidae
Sexual dimorphism
Rhopalapion longirostre exhibits an extreme case of sexual dimorphism. The female rostrum is twice as long and its surface is smoother than in the male. The female bores egg channels into the buds of Alcea rosea. Thus, the dimorphism is not attributed to sexual selection. It is a response to ecological demands of egg deposition.
Another example of extreme dimorphism in weevils is the New Zealand giraffe weevil, in which males grow considerably larger than females, although there is an extreme range of body sizes in both sexes.
| Biology and health sciences | Beetles (Coleoptera) | null |
339488 | https://en.wikipedia.org/wiki/Cloning%20vector | Cloning vector | A cloning vector is a small piece of DNA that can be stably maintained in an organism, and into which a foreign DNA fragment can be inserted for cloning purposes. The cloning vector may be DNA taken from a virus or from the cell of a higher organism, or it may be the plasmid of a bacterium. The vector contains features that allow for the convenient insertion of a DNA fragment into the vector or its removal from the vector, for example through the presence of restriction sites. The vector and the foreign DNA may be treated with a restriction enzyme that cuts the DNA; the DNA fragments thus generated contain either blunt ends or overhangs known as sticky ends, and vector DNA and foreign DNA with compatible ends can then be joined by molecular ligation. After a DNA fragment has been cloned into a cloning vector, it may be further subcloned into another vector designed for more specific use.
There are many types of cloning vectors, but the most commonly used ones are genetically engineered plasmids. Cloning is generally first performed using Escherichia coli, and cloning vectors in E. coli include plasmids, bacteriophages (such as phage λ), cosmids, and bacterial artificial chromosomes (BACs). Some DNA, however, cannot be stably maintained in E. coli, for example very large DNA fragments, and other organisms such as yeast may be used. Cloning vectors in yeast include yeast artificial chromosomes (YACs).
Features of a cloning vector
All commonly used cloning vectors in molecular biology have key features necessary for their function, such as a suitable cloning site and a selectable marker. Others may have additional features specific to their use. For ease and convenience, cloning is often performed using E. coli. Thus, the cloning vectors used often have elements necessary for their propagation and maintenance in E. coli, such as a functional origin of replication (ori). The ColE1 origin of replication is found in many plasmids. Some vectors also include elements that allow them to be maintained in another organism in addition to E. coli; these vectors are called shuttle vectors.
Cloning site
All cloning vectors have features that allow a gene to be conveniently inserted into the vector or removed from it. This may be a multiple cloning site (MCS), or polylinker, which contains many unique restriction sites. The restriction sites in the MCS are first cleaved by restriction enzymes, then a PCR-amplified target gene, digested with the same enzymes, is ligated into the vector using DNA ligase. The target DNA sequence can be inserted into the vector in a specific direction if so desired. The restriction sites may be further used for subcloning into another vector if necessary.
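The site-matching logic behind MCS cloning can be sketched in a few lines of code. The following Python sketch is illustrative only: the vector and insert sequences are invented, while the EcoRI and BamHI recognition sequences are the real ones. It locates restriction sites and reports the 5' overhangs whose complementarity makes ligation possible.

```python
# Illustrative sketch of MCS cloning logic. The vector and insert
# sequences below are invented; the recognition sites are the real
# EcoRI (G^AATTC) and BamHI (G^GATCC) sites.

ENZYMES = {
    "EcoRI": "GAATTC",   # cut after the first base -> 5' AATT overhang
    "BamHI": "GGATCC",   # cut after the first base -> 5' GATC overhang
}

def find_sites(seq, site):
    """0-based start positions of a recognition site in seq."""
    return [i for i in range(len(seq) - len(site) + 1)
            if seq[i:i + len(site)] == site]

def overhang(site, cut=1):
    """5' overhang produced by a symmetric cut at the given offset."""
    return site[cut:len(site) - cut]

vector = "TTGACGGAATTCAAGCTTGGATCCTGCA"   # hypothetical MCS region
insert = "GAATTCACGTGTCGACGGATCC"         # hypothetical PCR product

for name, site in ENZYMES.items():
    print(name, "vector:", find_sites(vector, site),
          "insert:", find_sites(insert, site),
          "overhang:", overhang(site))
# Identical overhangs on vector and insert are what lets DNA ligase
# join the compatible sticky ends.
```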
Other cloning vectors may use topoisomerase instead of ligase, and cloning may be done more rapidly without the need for restriction digest of the vector or insert. In this TOPO cloning method, a linearized vector is activated by attaching topoisomerase I to its ends, and this "TOPO-activated" vector may then accept a PCR product by ligating to both of its ends, releasing the topoisomerase and forming a circular vector in the process. Another method of cloning without the use of DNA digest and ligase is DNA recombination, as used, for example, in the Gateway cloning system. The gene, once cloned into the cloning vector (called an entry clone in this method), may be conveniently introduced into a variety of expression vectors by recombination.
Selectable marker
A selectable marker is carried by the vector to allow the selection of positively transformed cells. Antibiotic resistance is often used as a marker, an example being the beta-lactamase gene, which confers resistance to the penicillin group of beta-lactam antibiotics like ampicillin. Some vectors contain two selectable markers; for example, the plasmid pACYC177 has both ampicillin and kanamycin resistance genes. Shuttle vectors, which are designed to be maintained in two different organisms, may also require two selectable markers, although some selectable markers, such as resistance to zeocin and hygromycin B, are effective in different cell types. Auxotrophic selection markers that allow an auxotrophic organism to grow in minimal growth medium may also be used; examples of these are LEU2 and URA3, which are used with their corresponding auxotrophic strains of yeast.
Another kind of selectable marker allows for the positive selection of plasmids carrying the cloned gene. This may involve the use of a gene lethal to the host cells, such as barnase, ccdB, and the parD/parE toxins. This typically works by disrupting or removing the lethal gene during the cloning process; in unsuccessful clones the lethal gene remains intact and kills the host cells, so only successful clones are selected.
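The selection logic is simple enough to model directly. Below is a toy Python sketch (all clone data hypothetical) of how an intact lethal gene eliminates empty-vector transformants:

```python
# Toy model of positive selection via a lethal marker such as ccdB.
# Clone records are hypothetical; "intact" means the lethal gene survived
# cloning (e.g. the vector re-ligated without an insert).

clones = [
    {"name": "clone1", "lethal_gene_intact": False},  # insert disrupted the gene
    {"name": "clone2", "lethal_gene_intact": True},   # empty re-ligated vector
    {"name": "clone3", "lethal_gene_intact": False},
]

# Cells carrying an intact lethal gene die on plating, so only
# successful clones grow into colonies.
survivors = [c["name"] for c in clones if not c["lethal_gene_intact"]]
print(survivors)  # ['clone1', 'clone3']
```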
Reporter gene
Reporter genes are used in some cloning vectors to facilitate the screening of successful clones, by using features of these genes that allow a successful clone to be easily identified. Such features present in cloning vectors may be the lacZα fragment for α-complementation in blue-white selection, and/or marker or reporter genes in frame with, and flanking, the MCS to facilitate the production of fusion proteins. Examples of fusion partners that may be used for screening are the green fluorescent protein (GFP) and luciferase.
Elements for expression
A cloning vector need not contain suitable elements for the expression of a cloned target gene, such as a promoter and a ribosomal binding site (RBS); many, however, do, and such vectors may then work as expression vectors. The target DNA may be inserted into a site that is under the control of a particular promoter necessary for the expression of the target gene in the chosen host. Where the promoter is present, the expression of the gene is preferably tightly controlled and inducible so that proteins are only produced when required. Some commonly used promoters are the T7 and lac promoters. The presence of a promoter is necessary when screening techniques such as blue-white selection are used.
Cloning vectors without promoter and RBS for the cloned DNA sequence are sometimes used, for example when cloning genes whose products are toxic to E. coli cells. Promoter and RBS for the cloned DNA sequence are also unnecessary when first making a genomic or cDNA library of clones since the cloned genes are normally subcloned into a more appropriate expression vector if their expression is required.
Some vectors are designed for transcription only, with no heterologous protein expressed, for example for in vitro mRNA production. These vectors are called transcription vectors. They may lack the sequences necessary for polyadenylation and termination, and therefore may not be used for protein production.
Types of cloning vectors
A large number of cloning vectors are available, and choosing a vector may depend upon a number of factors, such as the size of the insert, copy number, and cloning method. Large inserts may not be stably maintained in a general cloning vector, especially one with a high copy number, so cloning large fragments may require a more specialised cloning vector.
Plasmid
Plasmids are autonomously replicating circular extra-chromosomal DNA. They are the standard cloning vectors and the ones most commonly used. Most general plasmids may be used to clone DNA inserts of up to 15 kb in size. One of the earliest commonly used cloning vectors is the pBR322 plasmid. Other cloning vectors include the pUC series of plasmids, and a large number of different cloning plasmid vectors are available. Many plasmids have high copy numbers; for example, pUC19 has a copy number of 500–700 copies per cell, and a high copy number is useful as it produces a greater yield of recombinant plasmid for subsequent manipulation. However, low-copy-number plasmids may be preferred in certain circumstances, for example when the protein from the cloned gene is toxic to the cells.
Some plasmids contain an M13 bacteriophage origin of replication and may be used to generate single-stranded DNA. These are called phagemids, and examples are the pBluescript series of cloning vectors.
Bacteriophage
The bacteriophages used for cloning are the λ phage and the M13 phage. There is an upper limit on the amount of DNA that can be packed into a phage (a maximum of 53 kb), so to allow foreign DNA to be inserted into phage DNA, phage cloning vectors may need to have some non-essential genes deleted, for example the genes for lysogeny, since using phage λ as a cloning vector involves only the lytic cycle. There are two kinds of λ phage vectors: insertion vectors and replacement vectors. Insertion vectors contain a unique cleavage site at which foreign DNA of 5–11 kb may be inserted. In replacement vectors, the cleavage sites flank a region containing genes not essential for the lytic cycle; this region may be deleted and replaced by the DNA insert in the cloning process, and a larger DNA of 8–24 kb may be inserted.
There is also a lower size limit for DNA that can be packed into a phage, and vector DNA that is too small cannot be properly packaged into the phage. This property can be used for selection - vector without insert may be too small, therefore only vectors with insert may be selected for propagation.
Cosmid
Cosmids are plasmids that incorporate a segment of bacteriophage λ DNA containing the cohesive end site (cos), which holds the elements required for packaging DNA into λ particles. With a suitable origin of replication (ori), a cosmid can replicate as a plasmid. It is normally used to clone large DNA fragments of between 28 and 45 kb.
Bacterial artificial chromosome
Inserts of up to 350 kb can be cloned in a bacterial artificial chromosome (BAC). BACs are maintained in E. coli with a copy number of only one per cell. BACs are based on the F plasmid; another artificial chromosome, called the PAC, is based on the P1 phage.
Yeast artificial chromosome
Yeast artificial chromosomes (YACs) are used as vectors to clone DNA fragments of more than 1 megabase (1 Mb = 1,000 kb) in size. They are useful in cloning the larger DNA fragments required in mapping genomes, such as in the Human Genome Project. A YAC contains a telomeric sequence and an autonomously replicating sequence (features required to replicate linear chromosomes in yeast cells). These vectors also contain suitable restriction sites for cloning foreign DNA, as well as genes to be used as selectable markers.
Human artificial chromosome
Human artificial chromosomes (HACs) may be potentially useful as gene transfer vectors for gene delivery into human cells, and as tools for expression studies and for determining human chromosome function. They can carry very large DNA fragments (there is no upper limit on size for practical purposes), so they do not have the limited cloning capacity of other vectors, and they also avoid the possible insertional mutagenesis caused by integration into host chromosomes by a viral vector.
Animal and plant viral vectors
Viruses that infect plant and animal cells have also been manipulated to introduce foreign genes into plant and animal cells. The natural ability of viruses to adsorb to cells, introduce their DNA, and replicate has made them ideal vehicles for transferring foreign DNA into eukaryotic cells in culture. A vector based on Simian virus 40 (SV40) was used in the first cloning experiment involving mammalian cells. A number of vectors based on other types of viruses, such as adenoviruses and papillomaviruses, have been used to clone genes in mammals. At present, retroviral vectors are popular for cloning genes in mammalian cells. In the case of plants, viruses such as cauliflower mosaic virus, tobacco mosaic virus, and geminiviruses have been used with limited success.
Screening: example of the blue/white screen
Many general purpose vectors such as pUC19 usually include a system for detecting the presence of a cloned DNA fragment, based on the loss of an easily scored phenotype. The most widely used is the gene coding for E. coli β-galactosidase, whose activity can easily be detected by the ability of the enzyme it encodes to hydrolyze the soluble, colourless substrate X-gal (5-bromo-4-chloro-3-indolyl-beta-d-galactoside) into an insoluble, blue product (5,5'-dibromo-4,4'-dichloro indigo). Cloning a fragment of DNA within the vector-based lacZα sequence of the β-galactosidase prevents the production of an active enzyme. If X-gal is included in the selective agar plates, transformant colonies are generally blue in the case of a vector with no inserted DNA and white in the case of a vector containing a fragment of cloned DNA.
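The screening logic reduces to a simple decision rule, which the Python sketch below makes explicit (the colony data are hypothetical):

```python
# Decision rule behind blue/white screening (colony data hypothetical).
# An insert cloned into lacZ-alpha abolishes alpha-complementation, so the
# colony cannot hydrolyze X-gal to the blue product and remains white.

def colony_colour(has_insert, xgal_on_plate=True):
    if not xgal_on_plate:
        return "white"          # no substrate, no blue product
    return "white" if has_insert else "blue"

transformants = [True, False, True, True, False]  # insert present?
print([colony_colour(t) for t in transformants])
# ['white', 'blue', 'white', 'white', 'blue'] -> white colonies carry inserts
```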
| Biology and health sciences | Molecular biology | Biology |
339542 | https://en.wikipedia.org/wiki/Semiprime | Semiprime | In mathematics, a semiprime is a natural number that is the product of exactly two prime numbers. The two primes in the product may equal each other, so the semiprimes include the squares of prime numbers.
Because there are infinitely many prime numbers, there are also infinitely many semiprimes. Semiprimes are also called biprimes, since they include two primes, or second numbers, by analogy with how "prime" means "first".
Examples and variations
The semiprimes less than 100 are: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34, 35, 38, 39, 46, 49, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94, and 95.
Semiprimes that are not square numbers are called discrete, distinct, or squarefree semiprimes: 6, 10, 14, 15, 21, 22, 26, 33, 34, 35, 38, 39, 46, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94, 95, ...
The semiprimes are the case $k = 2$ of the $k$-almost primes, numbers with exactly $k$ prime factors (counted with multiplicity). However, some sources use "semiprime" to refer to a larger set of numbers, those with at most two prime factors (including the unit 1, the primes, and the semiprimes). These are: 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 15, 17, 19, 21, 22, 23, 25, 26, ...
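All three sequences follow directly from the definitions; here is a small Python sketch that generates them (the bounds 100 and 30 are arbitrary):

```python
import math

def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count += 1
            n //= d
        d += 1
    return count + (1 if n > 1 else 0)

semiprimes = [n for n in range(2, 100) if big_omega(n) == 2]
squarefree = [n for n in semiprimes if math.isqrt(n) ** 2 != n]
at_most_two = [n for n in range(1, 30) if big_omega(n) <= 2]

print(semiprimes)   # [4, 6, 9, 10, 14, 15, ...]
print(squarefree)   # [6, 10, 14, 15, 21, 22, ...]
print(at_most_two)  # [1, 2, 3, 4, 5, 6, 7, 9, ...]
```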
Formula for number of semiprimes
A semiprime counting formula was discovered by E. Noel and G. Panos in 2005. Let $\pi_2(n)$ denote the number of semiprimes less than or equal to $n$. Then
$$\pi_2(n) = \sum_{k=1}^{\pi(\sqrt{n})} \left[ \pi\!\left(\frac{n}{p_k}\right) - k + 1 \right],$$
where $\pi(x)$ is the prime-counting function and $p_k$ denotes the $k$th prime.
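The formula works because every semiprime $pq$ with $p \le q$ is counted exactly once, at the index $k$ with $p = p_k$: the bracketed term counts the primes $q$ in the interval $[p_k, n/p_k]$. A Python sketch verifying it against brute force for small $n$:

```python
import math
from bisect import bisect_right

def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def semiprime_count(n):
    primes = primes_upto(n)
    total = 0
    for k, p in enumerate(primes, start=1):
        if p * p > n:
            break
        # number of primes q with p <= q <= n/p is pi(n/p) - (k - 1)
        total += bisect_right(primes, n // p) - k + 1
    return total

def is_semiprime(m):
    """Brute force: find the smallest prime divisor p; check m/p is prime."""
    for p in range(2, math.isqrt(m) + 1):
        if m % p == 0:
            q = m // p
            return all(q % d for d in range(2, math.isqrt(q) + 1))
    return False

n = 100
print(semiprime_count(n), sum(is_semiprime(m) for m in range(2, n + 1)))  # 34 34
```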
Properties
Semiprime numbers have no composite numbers as factors other than themselves. For example, the number 26 is semiprime and its only factors are 1, 2, 13, and 26, of which only 26 is composite.
For a squarefree semiprime $n = pq$ (with $p \neq q$), the value of Euler's totient function $\varphi(n)$ (the number of positive integers less than or equal to $n$ that are relatively prime to $n$) takes the simple form
$$\varphi(n) = (p - 1)(q - 1).$$
This calculation is an important part of the application of semiprimes in the RSA cryptosystem.
For a square semiprime $n = p^2$, the formula is again simple:
$$\varphi(p^2) = p(p - 1).$$
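A quick numerical check of both formulas in Python (the primes 5 and 13 are chosen arbitrarily):

```python
from math import gcd

def phi_brute(n):
    """Euler's totient by direct count; fine for small n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p, q = 5, 13
print(phi_brute(p * q), (p - 1) * (q - 1))  # 48 48 (squarefree semiprime)
print(phi_brute(p * p), p * (p - 1))        # 20 20 (square semiprime)
```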
Applications
Semiprimes are highly useful in the area of cryptography and number theory, most notably in public key cryptography, where they are used by RSA and pseudorandom number generators such as Blum Blum Shub. These methods rely on the fact that finding two large primes and multiplying them together (resulting in a semiprime) is computationally simple, whereas finding the original factors appears to be difficult. In the RSA Factoring Challenge, RSA Security offered prizes for the factoring of specific large semiprimes and several prizes were awarded. The original RSA Factoring Challenge was issued in 1991, and was replaced in 2001 by the New RSA Factoring Challenge, which was later withdrawn in 2007.
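The asymmetry described above (multiplying two primes is easy; recovering them from the product is hard) is what a toy RSA round-trip illustrates. The Python sketch below uses deliberately tiny textbook-style primes; real deployments use primes hundreds of digits long together with padding schemes, so this is illustrative only:

```python
# Toy RSA round-trip with tiny primes -- insecure, for illustration only.
p, q = 61, 53                  # small primes (illustrative values)
n = p * q                      # public semiprime modulus: 3233
phi = (p - 1) * (q - 1)        # 3120; easy to compute knowing p and q
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

msg = 1234
cipher = pow(msg, e, n)        # encrypt: msg^e mod n
print(pow(cipher, d, n))       # decrypt: recovers 1234
```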
In 1974 the Arecibo message was sent with a radio signal aimed at a star cluster. It consisted of 1,679 binary digits intended to be interpreted as a bitmap image. The number 1,679 was chosen because it is a semiprime (1,679 = 23 × 73) and therefore can be arranged into a rectangular image in only two distinct ways (23 rows and 73 columns, or 73 rows and 23 columns).
| Mathematics | Prime numbers | null |
339612 | https://en.wikipedia.org/wiki/Tomahawk | Tomahawk | A tomahawk is a type of single-handed axe used by the many Indigenous peoples and nations of North America. It traditionally resembles a hatchet with a straight shaft. In pre-colonial times the head was made of stone, bone, or antler, and European settlers later introduced heads of iron and steel. The term came into the English language in the 17th century as an adaptation of the Powhatan (Virginian Algonquian) word.
Tomahawks were general-purpose tools used by Native Americans and later the European colonials with whom they traded, and often employed as a hand-to-hand weapon. The metal tomahawk heads were originally based on a Royal Navy boarding axe (a lightweight hand axe designed to cut through boarding nets when boarding hostile ships) and used as a trade-item with Native Americans for food and other provisions.
Etymology
The name comes from the Powhatan language, derived from a Proto-Algonquian root meaning 'to cut off by tool'. Cognates meaning 'axe' occur in other Algonquian languages, including Lenape, Malecite-Passamaquoddy, and Abenaki.
History
The Algonquian people created the tomahawk. Before Europeans came to the continent, Native Americans would use stones, sharpened by a process of knapping and pecking, attached to wooden handles, secured with strips of rawhide. The tomahawk quickly spread from the Algonquian culture to the tribes of the South and the Great Plains.
The poll of a tomahawk, the side opposite the blade, could consist of a hammer, a spike, or a pipe. Tomahawks with a bowl on the poll and a hollowed-out shaft became known as pipe tomahawks; these were created by European and American artisans for trade and as diplomatic gifts for the tribes.
Composition
The tomahawk's original designs were fitted with heads of bladed or rounded stone or deer antler.
According to Mike Haskew, the modern tomahawk shaft is usually short, traditionally made of hickory, ash, or maple. The head is relatively light, with a fairly short cutting edge from toe to heel. The poll can feature a hammer or spike, or may simply be rounded off, and heads usually do not have lugs. From the 1800s onward, some had a pipe bowl carved into the poll and a hole drilled down the center of the shaft for smoking tobacco through the metal head. Pipe tomahawks are artifacts unique to North America, created by Europeans as trade objects but often exchanged as diplomatic gifts. They were symbols of the choice Europeans and Native Americans faced whenever they met: one end was the pipe of peace, the other an axe of war.
In colonial French territory, a different tomahawk design, closer to the ancient European francisca, was in use by French settlers and local peoples. In the late 18th century, the British Army issued tomahawks to their colonial regulars during the American Revolutionary War as a weapon and tool.
Modern use
Tomahawks are useful in camping and bushcraft scenarios. They are mostly used as an alternative to a hatchet, as they are generally lighter and slimmer than hatchets. They often contain other tools in addition to the axe head, such as spikes or hammers.
Modern, non-traditional tomahawks, designed to inflict injury, were used by selected units of the US armed forces during the Vietnam War and are referred to as "Vietnam tomahawks". These modern tomahawks regained popularity with their reintroduction by the American Tomahawk Company at the beginning of 2001 and a collaboration with custom knife-maker Ernest Emerson of Emerson Knives, Inc. A similar wood-handled Vietnam tomahawk is produced today by Cold Steel.
Many of these modern tomahawks are made of drop-forged, differentially heat-treated alloy steel. The differential heat treatment leaves the chopping edge and the spike harder than the middle section, giving a shock-resistant body with a durable temper.
Tomahawk throwing competitions
Tomahawk throwing is a popular sport among American and Canadian historical reenactment groups, and new martial arts such as Okichitaw have begun to revive tomahawk fighting techniques used during the colonial era. Tomahawks are a category within competitive knife throwing. Today's hand-forged tomahawks are being made by master craftsmen throughout the United States.
Today, there are many events that host tomahawk throwing competitions.
Tomahawk competitions have regulations concerning the type and style of tomahawk used for throwing, and special throwing tomahawks are made for these kinds of competitions. Requirements such as a minimum handle length and a maximum blade-edge length are the most common tomahawk-throwing competition rules.
Military application
Tomahawks were used by individual members of the US Army Stryker Brigade in Afghanistan, the 172nd Stryker Brigade Combat Team based at Grafenwöhr (Germany), the 3rd Brigade, 2nd Infantry Division out of Fort Lewis, a reconnaissance platoon in the 2d Squadron 183d Cavalry (116th Infantry Brigade Combat Team) (OIF 2007–2008) and numerous other soldiers. The tomahawk was issued a NATO stock number (4210-01-518-7244) and classified as a "Class 9 rescue kit" as a result of a program called the Rapid Fielding Initiative; it is also included within every Stryker vehicle as the "modular entry tool set". This design enjoyed something of a renaissance with US soldiers in Iraq and Afghanistan as a tool and in use in hand-to-hand combat.
Law enforcement
The tomahawk has gained some respect from members of various law enforcement tactical (i.e. "SWAT") teams. Some companies have seized upon this new popularity and are producing "tactical tomahawks". These SWAT-oriented tools are designed to be both useful and relatively light. Some examples of "tactical tomahawks" include models in which the shaft is designed as a pry bar.
Modern tomahawk fighting
Few systems worldwide teach civilians fighting skills with the axe or tomahawk.
In the 20th and 21st century, tomahawks have been prominently featured in films and video games (e.g. Dances with Wolves; Last of the Mohicans; The Patriot; Jonah Hex; Prey; Abraham Lincoln: Vampire Hunter; Bullet to the Head; Red Dead Redemption and its sequel, and Assassin's Creed III), leading to increased interest among the public. Tomahawks are among the weapons used in the Filipino martial art escrima.
Manufacturers
Modern tomahawk manufacturers include:
American Tomahawk Company
RMJ Tactical
Benchmade Knife Company
SOG Specialty Knives
Gerber Legendary Blades
Cold Steel
Winkler Knives
Walk By Faith 777
| Technology | Melee weapons | null |